
Notes on Testing Causality

Jin-Lung Lin
Institute of Economics, Academia Sinica
Department of Economics, National Chengchi University
May ,
Abstract
This note reviews the definition, distribution theory and modeling strategy of testing
causality. Starting with the definition of Granger causality, we discuss various
issues on testing causality within stationary and nonstationary systems. In addi-
tion, we cover the graphical modeling and spectral domain approaches, which are
relatively unfamiliar to economists. We compile a list of Do and Don't Do items on
causality testing and review several empirical examples.
Introduction

Testing causality among variables is one of the most important and, yet, one of the
most difficult issues in economics. The difficulty arises from the non-experimental
nature of social science. In natural science, researchers can perform experiments
where all other possible causes are kept fixed except for the sole factor under in-
vestigation. By repeating the process for each possible cause, one can identify the
causal structure among factors or variables. There is no such luck for social sci-
ence, and economics is no exception. Different variables affect the same variable
simultaneously, and repeated controlled experiments are infeasible (experimental
economics is no solution, at least not yet).

The two most difficult challenges are:

1. Correlation does not imply causality. Distinguishing between the two is by
no means an easy task.

2. There always exists the possibility of ignored common factors. The causal
relationship among variables might disappear when previously ignored
common causes are considered.

While there is no satisfactory answer to these two questions, and there might
never be one, philosophers and social scientists have attempted to use graphical
models to address the second issue. As for the first issue, time series analysts look
for rescue from the unique unidirectional property of the time arrow: cause precedes
effect. Based upon this concept, Clive W. J. Granger proposed a working definition
of causality, using foreseeability as a yardstick, which is called Granger causality.
This note examines and reviews the key issues in testing causality in economics.

Following this introduction, the note discusses the definition of Granger causality,
reviews testing causality for stationary and then nonstationary processes, turns to
graphical models and the spectral domain, and closes with a to-do and not-to-do
list and several empirical examples.
Defining Granger causality

Two assumptions

1. The future cannot cause the past. The past causes the present or future. (How
about expectation?)

2. A cause contains unique information about an effect not available elsewhere.

Definition

X_t is said not to Granger-cause Y_t if, for all h > 0,

    F(Y_{t+h} | Ω_t) = F(Y_{t+h} | Ω_t − X̄_t)

where F denotes the conditional distribution, Ω_t is all the information in the
universe up to time t, and Ω_t − X̄_t is that information with the history of the
series X_t removed. In plain words, X_t is said not to Granger-cause Y_t if
X cannot help predict future Y.

Remarks:

- The whole distribution F is generally difficult to handle empirically, and we
turn to the conditional expectation and variance.

- Causality is defined for all h > 0 and not only for h = 1. Causality at different
h does not imply each other: the conditions are neither sufficient nor necessary
for one another.

- Ω_t contains all the information in the universe up to time t, which rules out the
ignored-common-factors problem by construction. The question is: how to mea-
sure Ω_t in practice? Unobserved common factors are always a potential
problem for any finite information set.

- Instantaneous causality and feedback are difficult to interpret unless one has
additional structural information.
A refined definition is as follows: X_t does not Granger-cause Y_{t+h} with respect
to the information set J_t if

    E(Y_{t+h} | J_t, X̄_t) = E(Y_{t+h} | J_t)

Remark: note that causality here is defined relative to the information set J_t. In
other words, no effort is made to find the complete causal path and possible
common factors.
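The predictive content behind this conditional-expectation definition can be illustrated with a small simulation. This is a minimal sketch, not from the note: the data-generating process, coefficients and variable names are all illustrative assumptions; lagged x improves the (here in-sample) one-step forecast of y, which is exactly the sense in which x Granger-causes y.

```python
import numpy as np

# Sketch: x_t feeds into y_{t+1}, so adding lags of x to y's own past
# lowers the mean squared forecast error of y. DGP is an assumption.
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

Y = y[1:]
X_r = np.column_stack([np.ones(n - 1), y[:-1]])          # restricted: own past only
X_u = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])  # unrestricted: add lagged x
mse_r = np.mean((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
mse_u = np.mean((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
print(mse_u < mse_r)  # lagged x helps predict y
```

Out-of-sample comparison would be the cleaner check, but the in-sample gap already conveys the idea.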
Equivalent definition

For an l-dimensional stationary process Z_t, there exists a canonical MA representa-
tion

    Z_t = μ + Φ(B)u_t = μ + Σ_{i=0}^∞ Φ_i u_{t−i},   Φ_0 = I_l

A necessary and sufficient condition for variable k not to Granger-cause variable j is
that Φ_{jk,i} = 0 for i = 1, 2, .... If the process is invertible, then

    Z_t = C + A(B)Z_{t−1} + u_t = C + Σ_{i=1}^∞ A_i Z_{t−i} + u_t

If there are only two variables, or two groups of variables, j and k, then a necessary
and sufficient condition for variable k not to Granger-cause variable j is that A_{jk,i} =
0 for i = 1, 2, .... The condition is good for all forecast horizons h.

Note that for a VAR(1) process with dimension equal to or greater than 3, A_{jk,1} =
0 is sufficient for non-causality at h = 1 but insufficient for h > 1. Variable k might
affect variable j two or more periods in the future via its effect on other variables.
For example, consider a three-dimensional VAR(1) process y_t = A_1 y_{t−1} + u_t
with A_{13,1} = 0. The MA coefficient at horizon 2 is Φ_2 = A_1², whose (1,3)
element is

    (A_1²)_{13} = A_{11,1}A_{13,1} + A_{12,1}A_{23,1} + A_{13,1}A_{33,1} = A_{12,1}A_{23,1}

which is generally nonzero: variable 3 does not affect variable 1 one period ahead,
but can affect it two periods ahead through variable 2.
To summarize:

1. For bivariate systems, or two groups of variables, impulse response (IR) analysis
is equivalent to applying the Granger-causality test to the VAR model;

2. For testing the impact of one variable on another within a higher-dimensional
(≥ 3) system, IR analysis cannot be substituted by the Granger-causality test.
For example, for a VAR(1) process with dimension greater than 2, it does not
suffice to check the upper right-hand corner element of the coefficient matrix
in order to determine whether the last variable is noncausal for the first variable.
The test has to be based upon the IR.

See Lütkepohl and Dufour and Renault for detailed discussion.
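The indirect horizon-2 channel can be checked numerically. A minimal sketch, with an illustrative coefficient matrix (the entries are assumptions, not from the note): the (1,3) entry of A is zero, yet the (1,3) entry of A², the MA coefficient at horizon 2 for a VAR(1), is not.

```python
import numpy as np

# Variable 3 does not cause variable 1 at h = 1 (A[0, 2] = 0), but it
# reaches variable 1 through variable 2 at h = 2. Entries are assumptions.
A = np.array([[0.5, 0.4, 0.0],
              [0.0, 0.3, 0.6],
              [0.0, 0.2, 0.3]])
Phi2 = A @ A  # MA coefficient at horizon 2 for a VAR(1)
print(A[0, 2], Phi2[0, 2])  # 0.0 at h = 1, 0.4 * 0.6 = 0.24 at h = 2
```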

Testing causality for stationary series

Impulse response and causal ordering

It is well known that residuals from a VAR model are generally correlated, and ap-
plying the Cholesky decomposition is equivalent to assuming a recursive causal or-
dering from the top variable to the bottom variable. Changing the order of the
variables can greatly change the results of the impulse response analysis.
Causal analysis for bivariate VAR

For a bivariate system (y_t, x_t) defined by

    [y_t]   [A_11(B)  A_12(B)] [y_{t−1}]   [u_yt]
    [x_t] = [A_21(B)  A_22(B)] [x_{t−1}] + [u_xt]

with MA representation

    [y_t]   [Ψ_11(B)  Ψ_12(B)] [u_yt]
    [x_t] = [Ψ_21(B)  Ψ_22(B)] [u_xt]

x_t does not Granger-cause y_t if Ψ_12(B) = 0, i.e., ψ_12,i = 0 for i = 1, 2, .... This
condition is equivalent to A_12,i = 0 for i = 1, 2, ..., p. In other words, this corre-
sponds to the restriction that all cross-lag coefficients are zero, which can be
tested by Wald statistics.
We now turn to determining the causal direction for a bivariate VAR system. For
ease of illustration, we shall focus upon the bivariate AR(1) process, so that A_ij(B) =
A_ij, i, j = 1, 2, as defined above. The results can easily be generalized to the AR(p)
case. Four possible causal directions between x and y are:

1. Feedback, H1: x ↔ y: A_12 ≠ 0, A_21 ≠ 0;

2. Independence, H2: x ⊥ y: A_12 = 0, A_21 = 0;

3. x causes y but y does not cause x, H3: A_12 ≠ 0, A_21 = 0;

4. y causes x but x does not cause y, H4: A_12 = 0, A_21 ≠ 0.

Caines, Keng and Sethi proposed a two-stage testing procedure for deter-
mining causal directions. In the first stage, each restricted model is tested against
the unrestricted feedback model: H2 (null) against H1, H3 (null) against H1, and
H4 (null) against H1. If necessary, H2 (null) is then tested against H3, and H2 (null)
against H4. See Liang, Chou and Lin for an application.


Causal analysis for multivariate VAR

The number of possible causal structures grows exponentially as the number of
variables increases, and the pairwise causal structure might change when different
conditioning variables are added. Caines, Keng and Sethi provide a reasonable
procedure:

1. For each pair (X, Y), construct a bivariate VAR with order chosen to minimize
the multivariate final prediction error (MFPE);

2. Apply the stagewise procedure above to determine the causal structure of (X, Y);

3. If a process X has n causal variables y_1, ..., y_n, rank these variables in
decreasing order of their specific gravity, the inverse of MFPE(X, y_i);

4. For each caused variable X, first construct the optimal univariate AR
model, using FPE to determine the lag order. Then add the causal variables,
one at a time according to their causal rank, and use FPE to determine the
optimal orders at each step. Finally, we obtain the optimal ordered univariate-
multivariate AR model of X against its causal variables;

5. Pool all the optimal models above and apply the Full Infor-
mation Maximum Likelihood (FIML) method to estimate the system. Fi-
nally, perform diagnostic checking with the whole system as the maintained
model.
Causal analysis for the vector ARMA model (h = 1)

Let X_t be an n-vector generated by

    Φ(B)X_t = Θ(B)a_t

X_i does not cause X_j if and only if

    det(Φ_i(z), Θ^(j)(z)) = 0

where Φ_i(z) is the ith column of the matrix Φ(z) and Θ^(j)(z) is the matrix Θ(z)
without its jth column.

For the bivariate (two-group) case,

    [Φ_11(B)  Φ_12(B)] [X_1t]   [Θ_11(B)  Θ_12(B)] [a_1t]
    [Φ_21(B)  Φ_22(B)] [X_2t] = [Θ_21(B)  Θ_22(B)] [a_2t]

Then X_2 does not cause X_1 if and only if the above determinant condition holds
for the corresponding blocks. If n_1 = n_2 = 1, it reduces to

    Φ_22(z)Θ_12(z) − Φ_12(z)Θ_22(z) = 0
The general testing procedure is:

1. Build a multivariate ARMA model for X_t;

2. Derive the noncausality conditions in terms of the AR and MA parameters, say

    R_j(θ_l) = 0, j = 1, ..., K

3. Choose a test criterion: the Wald, LM or LR test.

Let

    T(θ_l) = (∂R_j(θ_l)/∂θ_{l,k})_{jk}

be the Jacobian of the constraints, and let V(θ_l) be the asymptotic covariance matrix of
√N(θ̂_l − θ_l). Then the Wald and LR test statistics are:

    ξ_W = N R(θ̂_l)′ [T(θ̂_l) V(θ̂_l) T(θ̂_l)′]^{−1} R(θ̂_l)

    ξ_LR = 2(L(θ̂, X) − L(θ̃, X))

where θ̃ is the MLE of θ under the constraint of noncausality.

To illustrate, let X
t
be a invertible -dimensional ARMA(,) model.
_

B
__
X
t
X
t
_ = _

B
__
a
t
a
t
_
X

does not cause X

if and only if

(z}

(z}

(z}

(z} =
(

}z + (

}z

= ,

=
For the vector,
l
= (

, the matrix
T(
l
} =

might not be nonsingular under the null of H

does not cause X

.
Remarks:
Te conditions are weaker than

= is a necessary condition for H

= is sufcient condi-
tion and

= , &

are sufcient for H

.
Let H

does not cause X

. Consider the following hypotheses:


H

= ;
H

=
H

= , and

=
Ten, H

=

H

, H

, H

.
Testing procedures:
. Test H

at level

. If H

is rejected, then H

is rejected. Stop.
. If H

is not rejected, test H

at level

. If H

is not rejected, H

cannot be
rejected. Stop
. If H

is rejected, test

H

= at level

. If

H

is rejected, then H

is
also rejected. If

H

is not reject ed, then H

is also not rejected.
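The delta-method Wald statistic for these constraints can be sketched directly. The parameter estimates and covariance below are illustrative assumptions standing in for the output of an actual VARMA fit; the Jacobian is the T(θ_l) matrix above.

```python
import numpy as np

# Wald statistic for the noncausality constraints
# R(theta) = (phi12 - theta12, theta12*phi22 - phi12*theta22) = 0.
theta = np.array([0.15, 0.40, 0.05, 0.30])  # (phi12, phi22, theta12, theta22), assumed
V = 0.01 * np.eye(4)                        # assumed asymptotic covariance
N = 200                                     # assumed sample size

def R(t):
    phi12, phi22, th12, th22 = t
    return np.array([phi12 - th12, th12 * phi22 - phi12 * th22])

def jacobian(t):
    phi12, phi22, th12, th22 = t
    return np.array([[1.0, 0.0, -1.0, 0.0],
                     [-th22, th12, phi22, -phi12]])

T = jacobian(theta)
W = N * R(theta) @ np.linalg.inv(T @ V @ T.T) @ R(theta)
print(W)  # compare with a chi-squared critical value, df = 2
```

Near the singular points of the null discussed above, T V T′ becomes ill-conditioned and the statistic is unreliable, which is exactly the motivation for the sequential procedure.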

Causal analysis for nonstationary processes

The asymptotic normal or χ² distributions in the previous section are built upon the
assumption that the underlying process X_t is stationary. The existence of unit
roots and cointegration might make the traditional asymptotic inference invalid.
Here, I shall briefly review unit roots and cointegration and their relevance to
testing causality. In essence, cointegration, the causality test, the VAR model and the IR are
closely related and should be considered jointly.
Unit root

What is a unit root?

The time series y_t defined by A_p(B)y_t = C(B)ε_t has a unit root if A_p(1) = 0
and C(1) ≠ 0.

Why do we care about unit roots?

For y_t with a unit root, a shock in ε_t has a permanent impact on y_t.

If y_t has a unit root, then the traditional asymptotic normality results usually
no longer apply. We need different asymptotic theorems.
Cointegration

What is cointegration?

When a linear combination of two I(1) processes becomes an I(0) process, these
two series are cointegrated.

Why do we care about cointegration?

Cointegration implies the existence of a long-run equilibrium;

Cointegration implies a common stochastic trend;

With cointegration, we can separate the short- and long-run relationships among
variables;

Cointegration can be used to improve long-run forecast accuracy;

Cointegration implies restrictions on the parameters, and proper accounting
of these restrictions can improve estimation efficiency.
Let Y_t be a k-dimensional VAR(p) series with r cointegrating vectors (0 < r < k):

    A_p(B)Y_t = U_t

    ΔY_t = ΠY_{t−1} + Σ_{i=1}^{p−1} Γ_i ΔY_{t−i} + ΦD_t + U_t

    Y_t = C Σ_{i=1}^t (U_i + ΦD_i) + C_1(B)(U_t + ΦD_t) + P

    A_p(1) = −Π = −αβ′

where α is the loading matrix, β collects the cointegrating vectors, D_t collects the
deterministic terms, C has reduced rank, and P depends on the initial values.

Cointegration introduces one additional causal channel (the error-correction
term) through which one variable can affect the other variables. Ignoring this additional
channel will lead to invalid causal analysis.

For a cointegrated system, impulse response estimates from a VAR model in
levels without explicitly considering cointegration will lead to incorrect con-
fidence intervals and inconsistent estimates of the responses at long horizons.
Recommended procedure for testing cointegration:

1. Determine the order of the VAR(p). Suggestion: choose the minimal p such that the
residuals behave like vector white noise;

2. Determine the type of deterministic terms: no intercept, intercept with con-
straint, intercept without constraint, time trend with constraint, or time trend
without constraint. Typically, the model with an unconstrained intercept is pre-
ferred;

3. Use the trace or λ_max tests to determine the cointegration rank;

4. Perform diagnostic checking of the residuals;

5. Test for exclusion of variables in the cointegrating vector;

6. Test for weak exogeneity to determine whether a partial system is appropriate;

7. Test for stability;

8. Test for economic hypotheses that are converted into homogeneous restric-
tions on the cointegrating vectors and/or loading factors.
Unit root, cointegration and causality

For a VAR system X_t with possible unit roots and cointegration, the usual causal-
ity test on the level variables can be misleading. Let X_t = (X′_1t, X′_2t, X′_3t)′ with
dimensions n_1, n_2, n_3 respectively. The VAR model in levels is:

    X_t = J(B)X_{t−1} + u_t = Σ_{i=1}^k J_i X_{t−i} + u_t

The null hypothesis that X_3 does not cause X_1 can be formulated as:

    H0: J_{13,1} = J_{13,2} = ... = J_{13,k} = 0

Let F_LS be the Wald statistic for testing H0.

1. If X_t has unit roots and is not cointegrated, F_LS converges to a limiting distri-
bution which is the sum of a χ² and a unit root distribution. The test is similar
and critical values can be constructed. Yet it is more efficient, and easier, to
difference X_t and test causality in the differenced VAR.

2. If there is sufficient cointegration with respect to X_3, then F_LS → χ². More specif-
ically, let A = (A′_1, A′_2, A′_3)′ be the cointegrating vectors, partitioned con-
formably. The usual asymptotic distribution results hold if rank(A_3) = n_3, i.e., all
components of X_3 appear in the cointegra-
ting vectors.

3. If there is not sufficient cointegration, i.e., not all components of X_3 appear in
the cointegrating vectors, then the limiting distribution contains unit root and
nuisance parameters.

For the error correction model,

    ΔX_t = J*(B)ΔX_{t−1} + Γ A′X_{t−1} + u_t

where Γ, A are respectively the loading matrix and the cointegrating vectors. Partition
Γ, A conformably with (X_1, X_2, X_3). Then, if rank(A_3) = n_3 or rank(Γ_1) = n_1,
F_ML → χ². In other words, when testing with the ECM, the usual asymptotic distribution
holds when there are sufficient cointegrating vectors or sufficient loading vectors.

Remark: the Johansen test seems to assume sufficient cointegration or suffi-
cient loading vectors.
Toda and Yamamoto proposed a test of causality without pretesting for coin-
tegration. For a VAR(p) process in which each series is at most I(d), estimate the
augmented VAR(p + d) process, even though the last d coefficient matrices are zero:

    X_t = A_1 X_{t−1} + ... + A_p X_{t−p} + ... + A_{p+d} X_{t−p−d} + U_t

and perform the usual Wald test on A_{jk,i} = 0, i = 1, ..., p. The test statistic is
asymptotically χ² with degrees of freedom m equal to the number of constraints. The
result holds no matter whether X_t is I(0) or I(1) and whether there exists cointegra-
tion.

As there is no free lunch under the sun, the Toda-Yamamoto test suffers the
following weaknesses:

It is inefficient compared with the ECM, where cointegration is explicitly consid-
ered.

It cannot distinguish between short-run and long-run causality.

It cannot test hypotheses on the long-run equilibrium, say PPP, which are for-
mulated on the cointegrating vector.

One more remark: cointegration between two variables implies the existence of
long-run causality in at least one direction. Testing cointegration and causality
should be considered jointly.
Causal analysis using graphical models

A directed graph assigns contemporaneous causal flows among a set of variables
based on correlations and partial correlations. The edge relationship of each pair
of variables characterizes the causal relationship between them. No edge indicates
(conditional) independence between two variables, whereas an undirected edge
(X — Y) signifies a correlation with no particular causal interpretation. A directed
edge (Y → X) means Y causes X but X does not cause Y, conditional upon the other
variables. A bidirected edge (X ↔ Y) indicates bidirectional causality between
these two variables; in other words, there is contemporaneous feedback between
X and Y.

To illustrate the main idea, let X, Y, Z be three variables under investigation.
Y ← X → Z represents the case where X is the common cause of Y and Z: the uncondi-
tional correlation between Y and Z is nonzero, but the conditional correlation between
Y and Z given X is zero. On the other hand, Y → X ← Z says that both Y and Z
cause X; thus, the unconditional correlation between Y and Z is zero but the conditional
correlation between Y and Z given X is nonzero. Similarly, Y → X → Z states
that Y causes X and X causes Z; again, conditional upon X, Y is
uncorrelated with Z. The direction of the arrows is then translated into zero
constraints on the contemporaneous coefficient matrix A(i, j), i ≠ j. With u_t =
(X_t, Y_t, Z_t)′, the corresponding A matrices for the three cases discussed above,
A_1, A_2 and A_3, have nonzero off-diagonal entries only in positions (2,1) and (3,1);
(1,2) and (1,3); and (1,2) and (3,1), respectively.
Several search algorithms are available, and the PC algorithm seems to be the
most popular one (see Pearl, and Spirtes, Glymour and Scheines, for
the details). Here we adopt the PC algorithm and outline its main steps.
First, we start with a graph in which each variable is con-
nected by an edge with every other variable. We then compute the unconditional
correlation between each pair of variables and remove the edge for the insignificant
pairs. We then compute the first-order conditional correlation between each pair
of variables and eliminate the edge between the insignificant ones. We repeat the
procedure, computing the i-th order conditional correlations, until i = N − 2, where
N is the number of variables under investigation. Fisher's z statistic is used in the
significance test:

    z(i, j|K) = (1/2) √(n − |K| − 3) · ln[ (1 + r(i, j|K)) / (1 − r(i, j|K)) ]

where r(i, j|K) denotes the correlation between variables i and j conditional upon
the variables in K, |K| is the number of series in K, and n is the number of observations.

Under some regularity conditions, z approximates the standard normal distri-
bution. Next, for each pair of variables (Y, Z) that are unconnected by a direct
edge but are connected through a third variable X, we
assign Y → X ← Z if and only if the conditional correlations of Y and Z, condi-
tional upon all possible variable combinations containing the X variable,
are nonzero. We then repeat the above process until all possible cases are exhausted.
If X → Z, Z — Y, and X and Y are not directly connected, we assign Z → Y. If there
is a directed path between X and Y (say X → Z → Y) and there is an undirected
edge between X and Y, we then assign X → Y.

Pearl and Spirtes, Glymour, and Scheines provide detailed ac-
counts of this approach. Demiralp and Hoover present simulation results that
show how the efficacy of the PC algorithm varies with signal strength. In general,
they find the directed graph method to be a useful tool in structural causal analysis.
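The edge-removal test above can be sketched with a first-order partial correlation computed from regression residuals, followed by Fisher's z. The common-cause data-generating process is an illustrative assumption; conditioning on X should remove the Y — Z edge.

```python
import numpy as np

# PC-algorithm edge test: partial correlation of Y and Z given X, Fisher z.
rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)
y = x + rng.normal(size=n)  # Y <- X
z = x + rng.normal(size=n)  # Z <- X  (X is the common cause)

def partial_corr(a, b, k):
    # correlation of a and b after removing the linear effect of k
    ra = a - np.polyval(np.polyfit(k, a, 1), k)
    rb = b - np.polyval(np.polyfit(k, b, 1), k)
    return np.corrcoef(ra, rb)[0, 1]

r = partial_corr(y, z, x)
fisher_z = 0.5 * np.sqrt(n - 1 - 3) * np.log((1 + r) / (1 - r))
print(fisher_z)  # approximately standard normal under conditional independence
```

An insignificant z removes the Y — Z edge, leaving the common-cause structure Y ← X → Z.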
Causality on the spectral domain

Causality on the time domain is qualitative, but the strength of causality at each
frequency can be measured on the spectral domain. To my mind, this is an ideal model
for analyzing permanent consumption theory. Let (x_t, y_t) be generated by

    [x_t]   [γ_11(B)  γ_12(B)] [e_xt]
    [y_t] = [γ_21(B)  γ_22(B)] [e_yt]

where the innovations may be correlated, with Var(e_xt) = σ_11, Var(e_yt) = σ_22
and Cov(e_xt, e_yt) = σ_12. Rewrite the above as

    [x_t]   [γ̃_11(B)  γ̃_12(B)] [ẽ_xt]
    [y_t] = [γ̃_21(B)  γ̃_22(B)] [ẽ_yt]

where

    γ̃_11(B) = γ_11(B) + (σ_12/σ_11)γ_12(B),  γ̃_12(B) = γ_12(B)

and

    ẽ_xt = e_xt,  ẽ_yt = e_yt − (σ_12/σ_11)e_xt

so that the new innovations are orthogonal, with Var(ẽ_yt) = σ_22 − σ_12²/σ_11.
The spectral density of x is

    f_x(w) = (1/2π)[ |γ̃_11(z)|² σ_11 + |γ̃_12(z)|² (σ_22 − σ_12²/σ_11) ]

where z = e^{−iw}.

Hosoya's measure of one-way causality is defined as:

    M_{y→x}(w) = log[ f_x(w) / ((1/2π)|γ̃_11(z)|² σ_11) ]
               = log[ 1 + (σ_22 − σ_12²/σ_11)|γ_12(z)|² / (σ_11 |γ_11(z) + (σ_12/σ_11)γ_12(z)|²) ]
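The measure is easy to evaluate numerically for a given model. A sketch for a bivariate VAR(1) with orthogonal unit-variance innovations (so σ_12 = 0 and γ̃ = γ); the coefficient matrix is an illustrative assumption.

```python
import numpy as np

# Hosoya's frequency-domain causality measure for a bivariate VAR(1),
# with orthogonal unit-variance innovations (sigma_12 = 0).
A = np.array([[0.5, 0.3],   # y feeds into x with weight 0.3 (assumed)
              [0.0, 0.5]])

def causality_y_to_x(w):
    z = np.exp(-1j * w)
    G = np.linalg.inv(np.eye(2) - A * z)  # MA transfer function Gamma(z)
    num = abs(G[0, 1]) ** 2               # channel from e_y into x
    den = abs(G[0, 0]) ** 2
    return np.log(1.0 + num / den)

for w in (0.0, np.pi / 2, np.pi):
    print(round(causality_y_to_x(w), 4))
```

With these coefficients the causal strength is concentrated at low frequencies: the measure is largest at w = 0 and decays toward w = π.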

Error correction model

Let x_t, y_t be I(1) and u_t = y_t − Ax_t be I(0). Then the error correction model is:

    Δx_t = γ_1 u_{t−1} + a_1(B)Δx_{t−1} + b_1(B)Δy_{t−1} + e_xt

    Δy_t = γ_2 u_{t−1} + a_2(B)Δx_{t−1} + b_2(B)Δy_{t−1} + e_yt

Inverting the system yields an MA representation for (D(B)Δx_t, D(B)Δy_t)′ in
terms of (e_xt, e_yt), where D(B) arises from the matrix inversion, and Hosoya's
measure M_{y→x}(w) is computed from it exactly as above. The error-correction
coefficient γ_1 enters the transfer function from e_yt to Δx_t, so cointegration
(γ_1 ≠ 0) shows up as causality from y to x, in particular at frequency zero, i.e.,
long-run causality.
Softwares

Again, the usual disclaimer applies: these choices are subjective, and your choices might be as
good as mine. See Lin for a detailed account.

1. Impulse responses, reduced form and structural form:
VAR.SRC/RATS by Norman Morin
SVAR.SRC/RATS by Antonio Lanzarotti and Mario Seghelini
VAR/View/Impulse/Eviews
FinMetrics/Splus

2. Cointegration:
CATS/RATS
COINT/GAUSS
VAR/Eviews
urca/R
FinMetrics/Splus

3. Impulse response under cointegration constraint:
CATS, CATSIRFS/RATS

4. Stability analysis:
CATS/RATS
Eviews
FinMetrics/Splus
Do and Don't Do list

Don't Do

1. Don't do single-equation causality testing and draw inference on the causal
direction;

2. Don't test causality between each possible pair of variables and then draw
conclusions on the causal directions among variables;

3. Don't employ the two-step causality testing procedure, though it is not an
uncommon practice. People often test for cointegration first, then treat the
error-correction term as an independent regressor and apply the usual causality
testing. This procedure is flawed for the following reasons. First, the EC term is
estimated, and using it as a regressor in the next step gives rise to the generated-
regressor problem; that is, the usual standard errors in the second step
are not right. Second, there can be more than one cointegrating vector, and
linear combinations of cointegrating vectors are also cointegrating vectors.

Do

1. Examine the graphs first. Look for patterns, mismatch of seasonality, abnor-
mality, outliers, etc.

2. Always perform diagnostic checking of the residuals. Time series modelling
does not obtain help from economic theory and depends heavily upon statistically
correct model specification; whiteness of the residuals is the key assumption.

3. Often graph the residuals and check for abnormality and outliers.

4. Be aware of seasonality for data that are not seasonally adjusted.

5. Apply the Wald test within the Johansen framework, where one can test
hypotheses on long- and short-run causality.

6. When you employ several time series methods or analyze several similar
models, be careful about the consistency among them.

7. Always watch for balance between the explained and explanatory variables in
regression analysis. For example, if the dependent variable has a time trend
but the explanatory variables are bounded, then the regression
coefficient can never be a fixed constant. Be careful about mixing I(0) and
I(1) variables in one equation.

8. For a VAR, the number of parameters grows rapidly with the number of variables and
lags. Removing the insignificant parameters to achieve estimation efficiency
is strongly recommended. The resulting IR will be more accurate.
Empirical Examples

1. Evaluating the effectiveness of interest rate policy in Taiwan: an impulse re-
sponse analysis. Lin.

2. Modelling information flow among four stock markets in China. Lin and Wu.

3. Causality between export expansion and manufacturing growth (if time per-
mits). Liang, Chou and Lin.
Reference Books:

1. Banerjee, Anindya and David F. Hendry, eds., The Econometrics of
Economic Policy, Oxford: Blackwell Publishers.

2. Hamilton, James D., Time Series Analysis, New Jersey: Princeton University
Press.

3. Granger, Clive W. J., Forecasting Economic Time Series, 2nd edition, Academic
Press.

4. Johansen, S., Likelihood-Based Inference in Cointegrated Vector Au-
toregressive Models, Oxford: Oxford University Press.

5. Lütkepohl, Helmut, Introduction to Multiple Time Series Analysis, 2nd ed., Springer-
Verlag.

6. Pena, D., G. Tiao, and R. Tsay, eds., A Course in Time Series Analysis, New York:
John Wiley.

Reference Journal Articles:

1. Amisano, Gianni and Carlo Giannini, Topics in Structural VAR Econo-
metrics, 2nd ed., New York: Springer-Verlag.

2. Bernanke, B. S., Alternative explanations of the money-income cor-
relation, Carnegie-Rochester Conference Series on Public Policy.

3. Blanchard, O. J. and D. Quah, The dynamic effects of aggregate supply
and demand disturbances, American Economic Review.

4. Caines, P. E., C. W. Keng and S. P. Sethi, Causality analysis and mul-
tivariate autoregressive modelling with an application to supermarket sales
analysis, Journal of Economic Dynamics and Control.

5. Dufour, J.-M. and E. Renault, Short run and long run causality in time
series: theory, Econometrica.

6. Gordon, R., The time varying NAIRU and its implications for eco-
nomic policy, Journal of Economic Perspectives.

7. Granger, C. W. J. and Jin-Lung Lin, Causality in the long run, Econo-
metric Theory.

8. Phillips, P. C. B., Impulse response and forecast error variance asymp-
totics in nonstationary VARs, Journal of Econometrics.

9. Liang, K. Y., W. Wen-lin Chou, and Jin-Lung Lin, Causality between ex-
port expansion and manufacturing growth: further evidence from Taiwan,
manuscript.

10. Lin, Jin-Lung, An investigation of the transmission mechanism of
interest rate policy in Taiwan, Quarterly Review, Central Bank of China
(in Chinese).

11. Lin, Jin-Lung, A quick review on econometric/statistics softwares,
manuscript.

12. Lin, Jin-Lung and Chung Shu Wu, Modeling China's stock markets
and international linkages, Journal of the Chinese Statistical Association.

13. Swanson, N. and C. W. J. Granger, Impulse response functions based
on the causal approach to residual orthogonalization in vector autoregres-
sions, Journal of the American Statistical Association.

14. Lütkepohl, H., Testing for causation between two variables in higher
dimensional VAR models, in Studies in Applied Econometrics, ed. by H. Schneeweiss
and K. Zimmerman, Heidelberg: Springer-Verlag.

15. Toda, Hiro Y. and Taku Yamamoto, Statistical inference in vector autoregres-
sions with possibly integrated processes, Journal of Econometrics.

16. Yamada, Hiroshi and Hiro Y. Toda, Inference in possibly integrated vector au-
toregressive models: some finite sample evidence, Journal of Econometrics.
