Program ssvs.

Step 1. Generate VAR data

Simulate a VAR DGP from the model:

$$Y_t = \Phi Y_{t-1} + \varepsilon_t,$$

where the data matrix $Y$ is of dimension $T \times p$, with $T = 600$ and $p = 6$,
with coefficients:

$$\Phi = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad \text{(note that no CONSTANT is included)}$$

$$\Psi = \begin{pmatrix} 1 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix},$$

where $\Sigma = \Psi'\Psi$, so that $\Psi$ is the Cholesky factor (upper triangular matrix) of $\Sigma$.

These are the defaults of the function simvardgp.m; that is, if you write:

y = simvardgp();

you get a matrix Y generated from the specification above.

Another option is to provide the number of observations, series, and lags, together with the PHI and
PSI parameters, yourself, in the following way:

y = simvardgp(T,N,L,PHI,PSI);

For more information about the inputs of simvardgp.m, see the documentation inside the file, or type:

help simvardgp

in the MATLAB command window.
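For instance, to reproduce the default DGP explicitly (a minimal sketch, assuming the input order T, N, L, PHI, PSI shown above):

T   = 600;                      % observations
N   = 6;                        % series (p above)
L   = 1;                        % lags
PHI = eye(N);                   % VAR(1) coefficients; no constant
PSI = eye(N);
PSI(1,2:N) = 0.5;               % upper triangular, SIGMA = PSI'*PSI
y   = simvardgp(T,N,L,PHI,PSI); % T x N simulated data matrix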


Step 2. Data transformations

The model we are using in the SSVS algorithm is of the form:

$$y_t' = z_t' C + \sum_{j=1}^{L} y_{t-j}' A_j + \varepsilon_t', \quad t = 1, 2, \ldots, T,$$

where $z_t$ is an $h$-dimensional vector of exogenous variables, the lag length $L$ is a
known positive integer, the regression coefficients $C$ and $A_j$ are $h \times p$ and $p \times p$ unknown
matrices, and the error terms $\varepsilon_t$ are iid Normal with covariance matrix $\Sigma$.

Define $x_t' = (z_t', y_{t-1}', \ldots, y_{t-L}')$. The VAR model above can be written in matrix form:

$$Y = X\Phi + \varepsilon,$$

where

$$Y = \begin{pmatrix} y_1' \\ \vdots \\ y_T' \end{pmatrix}, \quad X = \begin{pmatrix} x_1' \\ \vdots \\ x_T' \end{pmatrix}, \quad \Phi = \begin{pmatrix} C \\ A_1 \\ \vdots \\ A_L \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} \varepsilon_1' \\ \vdots \\ \varepsilon_T' \end{pmatrix}.$$

Here $Y$ and $\varepsilon$ are $T \times p$ matrices, $\Phi$ is an $(h + Lp) \times p$ matrix, $x_t'$ is a $1 \times (h + Lp)$ row vector, and $X$ is a $T \times (h + Lp)$ matrix of observations.

In the program (and in the VAR DGP) we assume that there is no constant term and no exogenous ($z_t$)
variables, so that the model takes the form (in vector notation):

$$Y_t = \Phi Y_{t-1} + \varepsilon_t$$

for appropriately defined vectors.

The only thing you have to choose in the program is the number of lags $L$. All other information,
such as the dimensions of the data matrix (that is, $T$ and $p$), is extracted automatically. The matrix
of lagged Y's is generated automatically using the function mlag.m, as sketched below.
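A minimal sketch of what this setup amounts to, assuming mlag.m returns a T x (L*p) matrix of stacked lags with zero-padded initial rows (as in the accompanying function library):

L = 1;                     % number of lags: the only user choice
[Traw,p] = size(y);        % dimensions extracted automatically
ylag = mlag(y,L);          % Traw x (L*p) matrix of lagged Ys
X = ylag(L+1:end,:);       % drop the first L rows, lost to lagging
Y = y(L+1:end,:);          % align the dependent variable with X
T = Traw - L;              % effective sample size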

Step 3. Names and storage spaces

Before we proceed with the definition of the priors, we set some Gibbs-related preliminaries (number of
iterations, burn-in draws, thinning value) and then the storage matrices for the draws, plus some
intermediate variables used in the estimation of the posteriors.
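A sketch of such preliminaries; the _draws suffix follows the program's convention, while the other names and the burn-in/thinning values here are illustrative:

nsave = 20000;                    % Gibbs draws to store (the run length quoted at the end of this note)
nburn = 2000;                     % illustrative burn-in draws
nthin = 1;                        % illustrative thinning value
ntot  = nsave + nburn;            % total iterations
n     = L*p^2;                    % number of coefficients (no constant or exogenous terms)
phi_draws   = zeros(nsave,n);     % storage for phi
gamma_draws = zeros(nsave,n);     % storage for gamma
psi_draws   = zeros(nsave,p);     % storage for psi_ii^2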

Step 4. Priors

Let $n = (h + Lp)p$, the total number of regression coefficients. Denote $\phi = \mathrm{vec}(\Phi) = (\phi_1, \phi_2, \ldots, \phi_n)'$.

For $j = 2, \ldots, p$, let $\eta_j = (\psi_{1j}, \ldots, \psi_{j-1,j})'$. Write $\eta = (\eta_2', \ldots, \eta_p')'$ and $\psi = (\psi_{11}, \ldots, \psi_{pp})'$.

That is, $\psi = (\psi_{11}, \ldots, \psi_{pp})'$ contains all the diagonal elements of $\Psi$, while the vectors
$\eta_j = (\psi_{1j}, \ldots, \psi_{j-1,j})'$ collect the elements above the diagonal, column by column (picture the upper triangle of a $6 \times 6$ matrix $\Psi$).

We propose hierarchical priors on $(\phi, \eta, \psi)$. Here we assume that all the parameters are
restricted, so that if $m$ is the number of restrictions, then $m = n = (h + Lp)p$.

Inside the program:

- All the matrices that are used to store the draws have their variable name plus the suffix _draws.

- PHI_M is the MLE of $\Phi$. This is estimated as:

$$\hat{\Phi}_M = (X'X)^{-1} X'Y$$

- SSE is the MLE sum of squared errors (of residuals), such that:

$$\hat{\Sigma} = \frac{1}{T} SSE$$

- phi_m_vec is the vector created from stacking the columns of PHI_M, as sketched below.

Note: the vector $\psi = (\psi_{11}, \ldots, \psi_{pp})'$ is not defined in the program. We will see later that we will
sample from its squared elements!
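In MATLAB, these quantities amount to (a sketch consistent with the definitions above; sigma_hat is an illustrative name):

PHI_M     = (X'*X)\(X'*Y);                % MLE of PHI
SSE       = (Y - X*PHI_M)'*(Y - X*PHI_M); % sum of squared residuals
sigma_hat = SSE/T;                        % ML estimator of the error covariance
phi_m_vec = PHI_M(:);                     % stack the columns of PHI_M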

(i) Priors on $\phi = \mathrm{vec}(\Phi) = (\phi_1, \phi_2, \ldots, \phi_n)'$:

Let $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_n)'$ be a vector of 0-1 variables and let $D = \mathrm{diag}(h_1, h_2, \ldots, h_n)$, where

$$h_i = \begin{cases} \tau_{0i}, & \text{if } \gamma_i = 0, \\ \tau_{1i}, & \text{if } \gamma_i = 1, \end{cases}$$

with preselected constants $\tau_{0i} < \tau_{1i}$. The prior for $\phi$ (remember, $m = n = (h + Lp)p$) conditional on
$\gamma$ is

$$(\phi \mid \gamma) \sim N_m(0, DRD),$$

where $R$ is a preselected correlation matrix. Under this prior, each element $\phi_i$ has the mixture distribution

$$(\phi_i \mid \gamma_i) \sim (1 - \gamma_i) N(0, \tau_{0i}^2) + \gamma_i N(0, \tau_{1i}^2).$$
Inside the program:

- tau_0 and tau_1 are $\tau_{0i}$ and $\tau_{1i}$ respectively. These are set to (0.1, 5), as suggested by the authors.
- f_m is the prior mean of $\phi$. This is always zero, but we define a variable for it anyway.
- h_i holds the $h_i$ defined above.
- D is $D = \mathrm{diag}(h_1, h_2, \ldots, h_n)$.
- R is the preselected correlation matrix $R$. This is the identity matrix, as suggested by the authors.
- DRD_j is the prior covariance matrix $DRD$, formed as sketched below.
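Given the current gamma (initialized in (ii) below), these objects can be formed as follows (a sketch; the program's own code may loop instead of vectorizing):

tau_0 = 0.1; tau_1 = 5;                       % (tau_0i, tau_1i), as suggested by the authors
f_m   = zeros(n,1);                           % prior mean of phi (always zero)
h_i   = tau_0*(gamma==0) + tau_1*(gamma==1);  % n x 1 vector of the h_i
D     = diag(h_i);                            % D = diag(h_1,...,h_n)
R     = eye(n);                               % preselected correlation matrix (identity)
DRD_j = D*R*D;                                % prior covariance matrix DRD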

(ii) Priors on $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_n)'$:

We assume that the elements of $\gamma$ are independent Bernoulli($p_i$) random variables, with $p_i \in (0,1)$, so that

$$P(\gamma_i = 1) = p_i, \quad P(\gamma_i = 0) = 1 - p_i, \quad i = 1, 2, \ldots, m \ (= n).$$

Inside the program:

- gamma is the vector $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_n)'$. Starting values: all elements equal to 1.
- p_i is $p_i \in (0,1)$, which is set to 0.5, as in the paper.
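In code, this initialization is simply (a sketch):

gamma = ones(n,1);   % starting values: all elements equal to 1
p_i   = 0.5;         % prior inclusion probability, as in the paper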

(iii) Priors on $\eta = (\psi_{12}, \psi_{13}, \psi_{23}, \ldots, \psi_{p-1,p})'$:

For $j = 2, \ldots, p$, let $\omega_j = (\omega_{1j}, \ldots, \omega_{j-1,j})'$ be a vector of 0-1 variables, and let
$D_j = \mathrm{diag}(h_{1j}, \ldots, h_{j-1,j})$, where

$$h_{ij} = \begin{cases} \kappa_{0ij}, & \text{if } \omega_{ij} = 0, \\ \kappa_{1ij}, & \text{if } \omega_{ij} = 1, \end{cases}$$

with preselected constants $\kappa_{0ij} < \kappa_{1ij}$. Letting $R_j$ be a preselected $(j-1) \times (j-1)$ correlation matrix,
the prior we consider for $\eta_j$, conditional on $\omega_j$, is

$$(\eta_j \mid \omega_j) \overset{ind}{\sim} N_{j-1}(0, D_j R_j D_j), \quad \text{for } j = 2, \ldots, p.$$

Under this prior, each element of $\eta_j$ has the mixture distribution

$$(\psi_{ij} \mid \omega_{ij}) \sim (1 - \omega_{ij}) N(0, \kappa_{0ij}^2) + \omega_{ij} N(0, \kappa_{1ij}^2), \quad \text{for } i = 1, \ldots, j-1.$$

Inside the program:

- kappa_0 and kappa_1 are $\kappa_{0ij}$ and $\kappa_{1ij}$ respectively. These are set to (0.1, 5), as suggested by the authors.
- D_j is $D_j = \mathrm{diag}(h_{1j}, \ldots, h_{j-1,j})$.
- DiRiDi is $D_j R_j D_j$, that is, the prior variance-covariance matrix.
- R_j is a cell array containing the $(j-1) \times (j-1)$ preselected correlation matrices $R_j$. These are identity matrices, as suggested by the authors.
- h is a cell array containing the $h_{ij}$ elements defined above (see the sketch after the note on cell arrays below).
A short note on cell arrays:
While a conventional matrix element can hold only a single number (a scalar), as in an Excel
spreadsheet, MATLAB also lets you define cell arrays. In a cell array, each cell can contain anything from a
simple scalar to a vector or even a matrix. Thus when we define in the program:

for kk_1 = 1:(p-1)
    omega{kk_1} = ones(kk_1,1); % Omega_j
end

this creates a cell array where the first cell (for kk_1 = 1) is a 1 x 1 vector of ones (and thus the
number/scalar 1), the second cell (for kk_1 = 2) is a 2 x 1 vector of ones, the third cell (for kk_1 =
3) is a 3 x 1 vector of ones, and so on.

Standard operators are INVALID with cell arrays. For example:

5*omega or 5*omega(2,1)

will not work. What is valid is

5*omega{2} or 5*cell2mat(omega(2))

The function cell2mat converts a cell to an ordinary matrix. I used this method instead of the
name{cell_number} syntax (as in omega{2} above).

Note:
The same trick is used for all the other matrices whose dimension grows with their index j.
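The same pattern builds the h, D_j, R_j, and DiRiDi cells from the current omega (a sketch; cell kk_1 corresponds to column j = kk_1 + 1 of Psi):

kappa_0 = 0.1; kappa_1 = 5;    % (kappa_0ij, kappa_1ij), as suggested by the authors
for kk_1 = 1:(p-1)
    h{kk_1}      = kappa_0*(omega{kk_1}==0) + kappa_1*(omega{kk_1}==1); % h_ij elements
    D_j{kk_1}    = diag(h{kk_1});                  % D_j = diag(h_1j,...,h_{j-1,j})
    R_j{kk_1}    = eye(kk_1);                      % (j-1) x (j-1) identity correlation matrix
    DiRiDi{kk_1} = D_j{kk_1}*R_j{kk_1}*D_j{kk_1};  % prior covariance D_j R_j D_j
end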
(iv) Priors on $\omega = (\omega_2', \ldots, \omega_p')'$:

We assume that the elements of $\omega$ are independent Bernoulli($q_{ij}$) random variables, with $q_{ij} \in (0,1)$, so that

$$P(\omega_{ij} = 1) = q_{ij}, \quad P(\omega_{ij} = 0) = 1 - q_{ij}, \quad i = 1, \ldots, j-1, \ j = 2, \ldots, p.$$

Inside the program:

- omega is a cell array containing the vectors $\omega_j = (\omega_{1j}, \ldots, \omega_{j-1,j})'$.
- q_ij is $q_{ij} \in (0,1)$.

(v) Priors on $\psi = (\psi_{11}, \ldots, \psi_{pp})'$:

Assume that $\psi_{ii}^2 \overset{ind}{\sim} \mathrm{Gamma}(a_i, b_i)$, where $(a_i, b_i)$ are positive constants. Then, for
$i = 1, \ldots, p$, $\psi_{ii}$ has the prior density

$$[\psi_{ii}] = \frac{2 b_i^{a_i}}{\Gamma(a_i)} \psi_{ii}^{2a_i - 1} \exp(-b_i \psi_{ii}^2), \quad \text{for } \psi_{ii} > 0.$$

Inside the program:

- psi_ii_sq is a $p \times 1$ vector with elements $\psi_{ii}^2$.
- a_i and b_i are $(a_i, b_i)$.
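A sketch of the corresponding variables; the hyperparameter values and the starting point here are illustrative, not the program's documented defaults:

a_i = 0.01; b_i = 0.01;       % illustrative positive constants (a_i, b_i)
psi_ii_sq = ones(p,1);        % p x 1 vector of psi_ii^2, started at one (illustrative)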

We have finished with the prior specification. Now we define some matrices that we will use in
order to update the priors and thus obtain the posteriors. These matrices, logically, all come from the
(conditional) likelihood function(s).

Remember that we defined $\hat{\Sigma} = \frac{1}{T} SSE$ (the ML estimator of the error covariance)? Write

$SSE = S(\hat{\Phi}) = (s_{ij})$. For $j = 2, \ldots, p$, define $s_j = (s_{1j}, \ldots, s_{j-1,j})'$. Let $S_j$ be the upper-left $j \times j$
submatrix of $SSE = S(\hat{\Phi})$.

Inside the program:

- S is a cell array containing $S_j$ for $j = 2, \ldots, p$.
- s is a cell array containing $s_j = (s_{1j}, \ldots, s_{j-1,j})'$ for $j = 2, \ldots, p$, as sketched below.
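These can be carved out of SSE as follows (a sketch; cell kk_2 - 1 holds the objects for column j = kk_2, mirroring the omega indexing):

for kk_2 = 2:p
    S{kk_2-1} = SSE(1:kk_2,1:kk_2);   % upper-left j x j submatrix S_j
    s{kk_2-1} = SSE(1:kk_2-1,kk_2);   % s_j = (s_1j,...,s_{j-1,j})'
end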

---------------------------------------------------------------------------------------------------------------------
Step 5. Sampling from the posterior

1. Draw $(\psi^{(k)} \mid \phi^{(k-1)}, \eta^{(k-1)}, \omega^{(k-1)}; Y)$ from the gamma distribution

$$(\psi_{ii}^2 \mid \phi, \eta, \omega; Y) \sim \mathrm{Gamma}\!\left(a_i + \frac{T}{2}, \ B_i\right),$$

where

$$B_i = \begin{cases} b_1 + \frac{1}{2} s_{11}, & \text{if } i = 1, \\[4pt] b_i + \frac{1}{2}\left\{ s_{ii} - s_i' \left[ S_{i-1} + (D_i R_i D_i)^{-1} \right]^{-1} s_i \right\}, & \text{if } i = 2, \ldots, p. \end{cases}$$
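A sketch of this draw, computing the submatrices of SSE inline for clarity. The Gamma(shape, rate) form above maps to MATLAB's gamrnd(shape, scale) (Statistics Toolbox), so the rate B_i enters as 1/B_i:

for i = 1:p
    if i == 1
        B_i = b_i + 0.5*SSE(1,1);
    else
        si  = SSE(1:i-1,i);                    % s_i
        Si  = SSE(1:i-1,1:i-1);                % S_{i-1}
        B_i = b_i + 0.5*(SSE(i,i) - si'*((Si + inv(DiRiDi{i-1}))\si));
    end
    psi_ii_sq(i) = gamrnd(a_i + 0.5*T, 1/B_i); % rate B_i -> scale 1/B_i
end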

2. Draw $(\eta^{(k)} \mid \psi^{(k)}, \phi^{(k-1)}, \omega^{(k-1)}, \gamma^{(k-1)}; Y)$ from the normal distribution

$$(\eta_j \mid \psi, \phi, \omega, \gamma; Y) \sim N_{j-1}(\mu_j, \Delta_j),$$

where

$$\mu_j = -\psi_{jj} \left[ S_{j-1} + (D_j R_j D_j)^{-1} \right]^{-1} s_j,$$
$$\Delta_j = \left[ S_{j-1} + (D_j R_j D_j)^{-1} \right]^{-1}.$$
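A sketch of this draw; eta{j-1} stores $\eta_j$, and the transposed Cholesky factor turns standard normals into N(mu_j, Delta_j) draws:

for j = 2:p
    sj       = SSE(1:j-1,j);                       % s_j
    Sj       = SSE(1:j-1,1:j-1);                   % S_{j-1}
    Delta_j  = inv(Sj + inv(DiRiDi{j-1}));         % posterior covariance
    mu_j     = -sqrt(psi_ii_sq(j))*(Delta_j*sj);   % posterior mean, psi_jj = sqrt(psi_ii_sq(j))
    eta{j-1} = mu_j + chol(Delta_j)'*randn(j-1,1); % eta_j ~ N(mu_j, Delta_j)
end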

3. Draw $(\omega^{(k)} \mid \psi^{(k)}, \eta^{(k)}, \phi^{(k-1)}, \gamma^{(k-1)}; Y)$ from the Bernoulli distribution

$$\omega_{ij} \sim \mathrm{Bernoulli}\!\left( \frac{u_{ij1}}{u_{ij1} + u_{ij2}} \right),$$

where

$$u_{ij1} = \frac{1}{\kappa_{1ij}} \exp\!\left( -\frac{\psi_{ij}^2}{2\kappa_{1ij}^2} \right) q_{ij},$$
$$u_{ij2} = \frac{1}{\kappa_{0ij}} \exp\!\left( -\frac{\psi_{ij}^2}{2\kappa_{0ij}^2} \right) (1 - q_{ij}).$$
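A sketch of this draw, looping over the elements psi_ij stored in eta{j-1}:

for j = 2:p
    for i = 1:(j-1)
        psi_ij = eta{j-1}(i);
        u_ij1  = (1/kappa_1)*exp(-psi_ij^2/(2*kappa_1^2))*q_ij;
        u_ij2  = (1/kappa_0)*exp(-psi_ij^2/(2*kappa_0^2))*(1-q_ij);
        omega{j-1}(i) = double(rand < u_ij1/(u_ij1 + u_ij2)); % Bernoulli draw
    end
end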

4. Draw $(\phi^{(k)} \mid \psi^{(k)}, \eta^{(k)}, \omega^{(k)}, \gamma^{(k-1)}; Y)$ from the normal distribution, where $\Sigma^{(k)}$ is computed
from $\psi^{(k)}$ and $\eta^{(k)}$:

$$(\phi \mid \psi, \eta, \omega, \gamma; Y) \sim N_m(\mu_\phi, \Delta_\phi),$$

where

$$\mu_\phi = \left\{ (\Psi\Psi') \otimes (X'X) + (DRD)^{-1} \right\}^{-1} \left( \left\{ (\Psi\Psi') \otimes (X'X) \right\} \hat{\phi} + (DRD)^{-1} \phi_0 \right),$$
$$\Delta_\phi = \left\{ (\Psi\Psi') \otimes (X'X) + (DRD)^{-1} \right\}^{-1},$$

where $\phi_0$ is the prior mean of $\phi$ and $\hat{\phi}$ collects the elements of the stacked matrix of MLE
coefficients, that is, $\hat{\phi} = \mathrm{vec}(\hat{\Phi}_M) = \mathrm{vec}\!\left( (X'X)^{-1} X'Y \right)$, or phi_m_vec in the program.
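A sketch of this draw; PSI is rebuilt from the current psi and eta draws, assuming (as the Kronecker term in the formula suggests) that the sampler parameterizes $\Sigma^{-1} = \Psi\Psi'$. The names prec, Delta_p, mu_p are illustrative:

PSI = diag(sqrt(psi_ii_sq));       % diagonal elements psi_ii
for j = 2:p
    PSI(1:j-1,j) = eta{j-1};       % above-diagonal elements from eta_j
end
prec    = kron(PSI*PSI', X'*X);                 % likelihood precision term
Delta_p = inv(prec + inv(DRD_j));               % posterior covariance
mu_p    = Delta_p*(prec*phi_m_vec + DRD_j\f_m); % posterior mean (f_m is the zero prior mean)
phi     = mu_p + chol(Delta_p)'*randn(n,1);     % phi ~ N(mu_p, Delta_p)
PHI     = reshape(phi, L*p, p);                 % unstack into the (Lp) x p matrix PHI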

5. Draw $(\gamma^{(k)} \mid \psi^{(k)}, \eta^{(k)}, \phi^{(k)}; Y)$ from the Bernoulli distribution

$$\gamma_i \sim \mathrm{Bernoulli}\!\left( \frac{u_{i1}}{u_{i1} + u_{i2}} \right),$$

where

$$u_{i1} = \frac{1}{\tau_{1i}} \exp\!\left( -\frac{\phi_i^2}{2\tau_{1i}^2} \right) p_i,$$
$$u_{i2} = \frac{1}{\tau_{0i}} \exp\!\left( -\frac{\phi_i^2}{2\tau_{0i}^2} \right) (1 - p_i).$$
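And a sketch of this final draw, mirroring step 3 element by element:

for i = 1:n
    u_i1 = (1/tau_1)*exp(-phi(i)^2/(2*tau_1^2))*p_i;
    u_i2 = (1/tau_0)*exp(-phi(i)^2/(2*tau_0^2))*(1-p_i);
    gamma(i) = double(rand < u_i1/(u_i1 + u_i2)); % Bernoulli draw
end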

And that's it! Cycling through these conditionals, the program should converge quickly (20,000 iterations), as the
authors claim.
