$$Y_t = \Phi Y_{t-1} + \varepsilon_t$$

where $Y$ is of dimension $[T \times p]$ with $T = 600$ and $p = 6$, with coefficient matrix

$$\Phi = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$

(note that no CONSTANT is included).
These are the defaults for the function simvardgp.m; that is, they are used if you simply write:
y = simvardgp();
Alternatively, you can specify the number of observations, series, lags, and the PHI and PSI
parameters yourself, in the following way:
y = simvardgp(T,N,L,PHI,PSI);
For more information about the inputs to simvardgp.m, see the documentation inside the file or type:
help simvardgp
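To see what the default DGP amounts to, here is a minimal sketch of the simulation in MATLAB, assuming a random-walk VAR(1) with identity coefficients and an identity error covariance as above (the actual simvardgp.m may differ in details such as initialization):

```matlab
% Sketch of the default DGP: Y_t = Phi*Y_{t-1} + e_t with Phi = I_6
T = 600; p = 6;
PHI = eye(p);                  % identity coefficient matrix, no constant
PSI = eye(p);                  % error covariance (assumed identity here)
Y = zeros(T, p);
e = randn(T, p) * chol(PSI);   % iid Normal errors with covariance PSI
for t = 2:T
    Y(t,:) = Y(t-1,:) * PHI + e(t,:);
end
```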
$$y_t' = z_t' C + \sum_{j=1}^{L} y_{t-j}' A_j + \varepsilon_t',$$

for $t = 1, 2, \ldots, T$, where $z_t$ is an $h$-dimensional vector of exogenous variables, the lag length $L$ is a known positive integer, the regression coefficients $C$ and $A_j$ are $h \times p$ and $p \times p$ unknown matrices, and the error terms $\varepsilon_t$ are iid Normal with covariance matrix $\Sigma$.
$$Y = X\Phi + \varepsilon,$$

where

$$Y = \begin{pmatrix} y_1' \\ \vdots \\ y_T' \end{pmatrix}, \quad X = \begin{pmatrix} x_1' \\ \vdots \\ x_T' \end{pmatrix}, \quad \Phi = \begin{pmatrix} C \\ A_1 \\ \vdots \\ A_L \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} \varepsilon_1' \\ \vdots \\ \varepsilon_T' \end{pmatrix}.$$

Here $Y$ and $\varepsilon$ are $T \times p$ matrices, $\Phi$ is an $(h + Lp) \times p$ matrix, $x_t'$ is a $1 \times (h + Lp)$ row vector, and $X$ is a $T \times (h + Lp)$ matrix of observations.
In the program (and in the VAR DGP) we assume that there is no constant term and no exogenous ($z_t$) variables, so that the model is of the form (in vector notation):

$$Y_t = \Phi Y_{t-1} + \varepsilon_t$$

The only thing you have to choose in the program is the number of lags L. All other information, such as the dimensions of the matrix $Y_t$ (T and p), is extracted automatically. The matrix of lagged Ys is generated automatically using the function mlag.m.
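A minimal sketch of what mlag.m does, namely stacking the L lags of Y side by side (the actual file may handle edge cases or padding differently):

```matlab
% Build the T x (L*p) matrix of lagged Ys: row t holds [y_{t-1}', ..., y_{t-L}']
function Xlag = mlag_sketch(Y, L)
    [T, p] = size(Y);
    Xlag = zeros(T, L*p);
    for j = 1:L
        % lag j occupies columns (j-1)*p+1 : j*p
        Xlag(j+1:T, (j-1)*p+1:j*p) = Y(1:T-j, :);
    end
end
```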
Before we proceed with the definition of the priors, we set some Gibbs-related preliminaries (number of iterations, burn-in draws, thinning value) and then the storage matrices for the draws, along with some intermediate variables used in the estimation of the posteriors.
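These preliminaries typically look like the following (the variable names here are illustrative, not necessarily those used in the program):

```matlab
% Gibbs sampler preliminaries (illustrative names)
nsave = 20000;                    % draws to keep after burn-in
nburn = 5000;                     % burn-in draws to discard
nthin = 1;                        % thinning interval
ntot  = nsave*nthin + nburn;      % total Gibbs iterations
% Storage for the kept draws; n is the number of VAR coefficients
n = (0 + 1*6)*6;                  % (h + L*p)*p with h = 0, L = 1, p = 6
phi_draws = zeros(nsave, n);
```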
Step 4. Priors
That is, $\psi = (\psi_{11}, \ldots, \psi_{pp})'$ contains all the diagonal elements of $\Psi$, while the vectors $\eta_j = (\psi_{1j}, \ldots, \psi_{j-1,j})'$ are obtained by stacking the elements above the diagonal in each column of $\Psi$ (for a $6 \times 6$ matrix $\Psi$, for example, $\eta_2 = \psi_{12}$ and $\eta_6 = (\psi_{16}, \ldots, \psi_{56})'$).
We propose hierarchical priors on the model parameters ($\varphi$, $\psi$ and the $\eta_j$'s). Here we assume that all the parameters are restricted, so that if $m$ is the number of restrictions, then $m = n = (h + Lp)p$.
- SSE is the MLE sum of squared errors (of residuals), such that:

$$\hat{\Sigma} = \frac{1}{T}\,\mathrm{SSE}$$

Note: the vector $\psi = (\psi_{11}, \ldots, \psi_{pp})$ is not defined in the program. We will see later that we sample its squared elements!
where $R$ is a preselected correlation matrix. Under this prior, each element $\varphi_i$ has the distribution

$$(\varphi_i \mid \gamma_i) \sim (1 - \gamma_i)\, N(0, \tau_{0i}^2) + \gamma_i\, N(0, \tau_{1i}^2)$$
Inside the program:
- tau_0 and tau_1 are $\tau_{0i}$ and $\tau_{1i}$ respectively. These are $(0.1, 5)$ as suggested by the authors.
- f_m is the mean of $\varphi_i$. This is always zero, but we define a variable for it anyway.
- h_i is
$$h_i = \begin{cases} \tau_{0i}, & \text{if } \gamma_i = 0,\\ \tau_{1i}, & \text{if } \gamma_i = 1, \end{cases}$$
- D is $D = \mathrm{diag}(h_1, h_2, \ldots, h_n)$
- R is the preselected correlation matrix $R$. This is the identity matrix, as suggested by the authors.
- DRD_j is the prior covariance matrix $DRD$.
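Given a current draw of the indicator vector gamma, these objects can be formed as follows (a sketch; gamma and n are assumed to be defined already, and variable names mirror the list above):

```matlab
% Form D, R and the prior covariance DRD for phi, given indicators gamma
tau_0 = 0.1; tau_1 = 5;                   % values suggested by the authors
h_i = tau_0*(gamma == 0) + tau_1*(gamma == 1);   % n x 1 vector of h_i's
D = diag(h_i);                            % D = diag(h_1,...,h_n)
R = eye(n);                               % preselected correlation matrix (identity)
DRD = D*R*D;                              % prior covariance matrix of phi
```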
with preselected constants $\kappa_{0ij} < \kappa_{1ij}$. Letting $R_j$ be a preselected $(j-1) \times (j-1)$ correlation matrix, the prior we consider for $\eta_j$, conditional on $\omega_j$, is

$$(\eta_j \mid \omega_j) \overset{iid}{\sim} N_{j-1}(0, D_j R_j D_j), \quad \text{for } j = 2, \ldots, p.$$
this will create a cell array where the first cell (for kk_1 = 1) is a $1 \times 1$ vector of ones (and thus the scalar 1), the second cell (for kk_1 = 2) is a $2 \times 1$ vector of ones, the third cell (for kk_1 = 3) is a $3 \times 1$ vector of ones, and so on.
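A loop of the following sort produces such a cell array (kk_1 as in the text; the name of the cell array itself is illustrative):

```matlab
% Cell array of ones-vectors of growing length: {1, [1;1], [1;1;1], ...}
p = 6;
omega_cells = cell(p, 1);
for kk_1 = 1:p
    omega_cells{kk_1} = ones(kk_1, 1);
end
```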
We have finished with the prior specification. Now we define some matrices that we will use to update the priors and thus obtain the posteriors. Naturally, these matrices all come from the (conditional) likelihood function(s).
Remember that we defined $\hat{\Sigma} = \frac{1}{T}\,\mathrm{SSE}$ (the ML estimator of the error covariance)? Write $\mathrm{SSE} = S(\hat{\Phi}) = (s_{ij})$. For $j = 2, \ldots, p$, define $s_j = (s_{1j}, \ldots, s_{j-1,j})'$. Let $S_j$ be the upper-left $j \times j$ submatrix of $\mathrm{SSE} = S(\hat{\Phi})$.
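These objects can be computed once from the OLS residuals; a sketch, assuming Y, X and T are already defined as above:

```matlab
% MLE coefficients, residuals, and the SSE matrix S(Phi_hat) = (s_ij)
Phi_hat   = (X'*X)\(X'*Y);          % (h+Lp) x p MLE coefficient matrix
resid     = Y - X*Phi_hat;
SSE       = resid'*resid;
Sigma_hat = SSE/T;                  % ML estimator of the error covariance
% For a given j in 2,...,p:
j   = 3;                            % example index
s_j = SSE(1:j-1, j);                % s_j = (s_1j,...,s_{j-1,j})'
S_j = SSE(1:j, 1:j);                % upper-left j x j submatrix of SSE
```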
---------------------------------------------------------------------------------------------------------------------
Step 5. Sampling from the posterior
$$(\psi_{ii}^2 \mid \varphi, \gamma, \omega; Y) \sim \mathrm{Gamma}\!\left(a_i + \frac{T}{2},\; B_i\right),$$

where

$$B_i = \begin{cases} b_1 + \dfrac{1}{2}\, s_{11}, & \text{if } i = 1,\\[6pt] b_i + \dfrac{1}{2}\left\{ s_{ii} - s_i' \left[ S_{i-1} + (D_i R_i D_i)^{-1} \right]^{-1} s_i \right\}, & \text{if } i = 2, \ldots, p. \end{cases}$$
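In MATLAB a draw of $\psi_{ii}$ can then be obtained as below. Note that here $B_i$ is a rate parameter while gamrnd (Statistics Toolbox) takes a scale, hence the $1/B_i$; the hyperparameters $a_i, b_i$ and the index are purely illustrative, and SSE, $D_i$, $R_i$ and T are assumed from the previous steps:

```matlab
% Draw psi_ii^2 ~ Gamma(a_i + T/2, B_i), then take the square root
a_i = 0.01; b_i = 0.01;             % illustrative prior hyperparameters
i = 3;                              % example index, 2 <= i <= p
s_i   = SSE(1:i-1, i);
S_im1 = SSE(1:i-1, 1:i-1);          % S_{i-1}
B_i = b_i + 0.5*( SSE(i,i) - s_i' * ((S_im1 + inv(Di*Ri*Di)) \ s_i) );
psi_ii = sqrt( gamrnd(a_i + T/2, 1/B_i) );   % sample the squared element
```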
$$(\eta_j \mid \psi, \varphi, \gamma, \omega; Y) \sim N_{j-1}(\mu_j, \Delta_j),$$

where

$$\mu_j = -\psi_{jj} \left\{ S_{j-1} + (D_j R_j D_j)^{-1} \right\}^{-1} s_j,$$

$$\Delta_j = \left\{ S_{j-1} + (D_j R_j D_j)^{-1} \right\}^{-1}.$$
$$(\varphi \mid \psi, \eta, \gamma, \omega; Y) \sim N_m(\mu_\varphi, \Delta_\varphi),$$

where

$$\mu_\varphi = \left\{ (\Psi\Psi') \otimes (X'X) + (DRD)^{-1} \right\}^{-1} \left( \left\{ (\Psi\Psi') \otimes (X'X) \right\} \hat{\varphi} + (DRD)^{-1} \varphi_0 \right),$$

$$\Delta_\varphi = \left\{ (\Psi\Psi') \otimes (X'X) + (DRD)^{-1} \right\}^{-1},$$

where $\varphi_0$ is the prior mean of $\varphi$ and $\hat{\varphi}$ are the elements of the stacked matrix of MLE coefficients, that is $\hat{\varphi} = \mathrm{vec}(\hat{\Phi}) = \mathrm{vec}\!\left((X'X)^{-1} X'Y\right)$, or phi_m_vec in the program.
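In MATLAB terms, the final draw can be sketched as follows (PSI, DRD, phi_hat, phi_0, X and m are assumed from the previous steps; with a zero prior mean the $\varphi_0$ term drops out):

```matlab
% Draw the stacked coefficient vector phi ~ N_m(mu_phi, Delta_phi)
K = kron(PSI*PSI', X'*X);                 % (Psi*Psi') kron (X'X)
Delta_phi = inv(K + inv(DRD));
mu_phi    = Delta_phi * (K*phi_hat + DRD\phi_0);
phi       = mu_phi + chol(Delta_phi)' * randn(m, 1);
```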
And that's it! The program should converge to these conditionals quickly (20,000 iterations), as the
authors claim.