
A Short Course in Modern Macroeconometric Modelling Topic 1: Overview

Adrian Pagan

July 26, 2004

Contents
1 Introduction
2 Analysis of Miniature Models
  2.1 Solution and Format
  2.2 Estimation
  2.3 The nature of the VARs from miniature models
  2.4 Using Loose Theory in Macroeconometric Models
3 Differences between Mini-T and Base Models
4 References

1 Introduction
Theoretical models have always been constructed to illuminate and foster an understanding of how the interactions between agents and institutions can account for observed phenomena. The art of designing a useful theoretical model is that it should be complex enough to provide a good description of the principal forces at work in producing a particular phenomenon, but not so complex that the explanation becomes clouded by allowing for too many factors. Macroeconomics has always had such miniature general equilibrium models, and they have figured prominently in textbooks. We will refer to them as Mini-T models, the T indicating theoretic. Examples would be the IS-LM description of the Keynesian model, the AD-AS extension of IS-LM, the Mundell-Fleming-Dornbusch (MFD) model and, more recently, the New Keynesian Policy Model (NKPM). Each of these was designed to account for a feature of the policy environment that had become increasingly important - rising prices in the case of AD-AS, and the new world of de-regulated capital markets and flexible exchange rates for MFD. The NKPM reflects the resurgence of the Phillips curve and the emphasis on Taylor rules for the setting of interest rates - see Allsopp and Vines (2000) for a useful description of this model. These Mini-T models omit much, and much is assumed in their construction, but they have served their purpose of directing thought about the macro economy.

[Figure 1: The trade-off between theoretical and empirical coherence. The vertical axis measures "Degree of Theoretical Coherence" and the horizontal axis "Degree of Empirical Coherence"; along the curve, from most theoretical to most empirical, sit Dynamic Stochastic General Equilibrium Models, Incomplete Dynamic Stochastic General Equilibrium Models, Models with Explicit Long Run Equilibrium, Models with Implicit Long Run Equilibrium, SVARs and VARs.]

The figure above will facilitate our discussion throughout this course. It shows the trade-off that always exists between building models that have a strong theoretical perspective and those that are strongly oriented towards fitting data sets. Models on the y-axis can be regarded as the Mini-T models we have just described. Models on the x-axis are generally miniature statistical models chosen so as to produce a close fit to the data.

The first modification of Mini-T models in the direction of the data involved moving away from the old paradigm of deterministic models that had perfect foresight. Instead, an emphasis was placed upon the importance of shocks and expectations for macro-economic outcomes.

Hence Mini-T models were modified to describe agents making choices in the face of these shocks. Their core becomes a set of Euler equations linking the current, past and expected future values of the variable (or variables). A long-run growth path is generally implied by the models, and features have been introduced into them that might allow for departures from this path for extensive periods of time, in particular frictions in production and labour markets and inertia in expenditure decisions. These models are the class of Dynamic Stochastic General Equilibrium (DSGE) models studied in most graduate macro-economic courses today. They provide reasonably flexible tools for theoretical analysis and increasingly incorporate many types of constraints that are viewed as being important to actual outcomes.

Now macroeconomics has the distinguishing feature that the major consumers of its output are policy makers and their advisers. This group generally recognizes the need for some theoretical model to guide their deliberations, although often it may have simply become part of their thought processes rather than being spelled out explicitly. Consequently, manipulation of miniature models is often a primary input into the development of an understanding of the broad outlines of the environment to which a policy response is to be made. But such models are rarely able to capture the complexities of any given situation, e.g. referring to aggregate demand, as in an IS curve, rather than to its components, is unlikely to produce a very convincing analysis for any policy discussion. Moreover, policy makers have increasingly been required to be precise about the arguments in support of a particular policy action (and sometimes the information that is an input into it, such as forecasts), and this points to the need to expand the size of the model while retaining the clarity that a strong theoretical perspective brings to analysis.

Models have emerged, and are emerging, that aim to do this. We will refer to these as base models. They are a heterogeneous group and historically have taken a number of forms, depending on what has been assumed about the shocks and how precisely the long-run equilibrium paths are determined. Thus, in the figure above, the incomplete DSGE models and the hybrid models - which feature either an explicit or implicit long-run equilibrium path - are members of this class. Generally it is not intended that they provide a very close fit to the data, and extra adjustments are needed to turn the base model into an operational model that could be used in a policy environment driven by forecasts. Base models are the core of a policy system and exist to anchor the discussion of policy options in some consistent economic framework rather than allowing it to be sidetracked into a debate about the idiosyncrasies of particular data sets. Base models are generally quite large, but sometimes miniature versions of them have become popular for discussion and analysis. The most popular of these has been the New Keynesian policy model (NKPM), which can be thought of as an extension of the IS-LM, AD-AS Mini-T models. It is sometimes used as a theoretical model but, increasingly, variants are used that aim to match data, so that the versions we will discuss in the lectures are down the curve rather than on the axis.

Because of the close connection between base and miniature models we will spend a large part of these lectures looking at miniature models. Understanding the difficulties in specification, solution, estimation and inference that can arise with these is really central to understanding the problems that can arise in base models. At the end of these lectures we will return to the base models often used in practice and try to tie the discussion together.

2 Analysis of Miniature Models

2.1 Solution and Format


After linearization, modern Mini-T models can be thought of as a set of equations with the stylized structure

$$B_0 y_t = A E_t(y_{t+1}) + B_1 y_{t-1} + F u_t \qquad (1)$$

where $u_t$ are the stochastic variables (shocks) that drive the system. All variables in $y_t$ are measured as deviations from steady state values - in the case of variables such as output these are log deviations from a steady state path and, for variables such as interest rates and inflation, they are level deviations from a constant steady state rate. The model is quantified by setting $B_j$, $F$ to values that have emerged as part of the exercise producing the model. The solution to this model has the general form - see Binder and Pesaran (1995) -

$$y_t = P y_{t-1} + D \sum_{j=0}^{\infty} S^j E_t(u_{t+j}) \qquad (2)$$

where

$$D = (B_0 - A P)^{-1} F$$

and $P$, $S$ are functions of the coefficients $B_j$ and $A$.
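To make the solution step concrete, here is a minimal sketch in Python (the language used for all examples in these notes; function and variable names are ours, not from the lectures). It computes $P$ by fixed-point iteration on the quadratic matrix equation $B_0 P = A P^2 + B_1$ that results from substituting (2) into (1), and then the loading on VAR(1) shocks. It assumes the model is determinate, so that the iteration converges to the unique stable solution.

```python
import numpy as np

def solve_re_model(B0, A, B1, F, rho, tol=1e-10, max_iter=10000):
    """Solve B0 y_t = A E_t(y_{t+1}) + B1 y_{t-1} + F u_t with
    u_t = rho u_{t-1} + eps_t, giving y_t = P y_{t-1} + G u_t."""
    n = B0.shape[0]
    # P solves B0 P = A P^2 + B1: iterate P <- (B0 - A P)^{-1} B1.
    P = np.zeros((n, n))
    for _ in range(max_iter):
        P_next = np.linalg.solve(B0 - A @ P, B1)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    M = np.linalg.inv(B0 - A @ P)
    D = M @ F                     # the D of equation (2)
    # With VAR(1) shocks E_t(u_{t+j}) = rho^j u_t, so the forward sum in
    # (2) collapses to G u_t, where G solves (B0 - A P) G = A G rho + F.
    G = D
    for _ in range(max_iter):
        G_next = M @ (A @ G @ rho + F)
        if np.max(np.abs(G_next - G)) < tol:
            G = G_next
            break
        G = G_next
    return P, G, D
```

Starting the iteration at $P = 0$ selects the stable solution in standard cases; QZ/Schur decompositions are the usual production-grade alternatives.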

Richard Dennis' lectures will be more specific about the range of methods that can be used to solve such models. For now it is important to note that the ultimate dynamic structure for $y_t$ will come from two sources. One of these derives from the theoretical structure - that is, the Euler equations and constraints - and is represented by $P$. The second source of dynamics stems from the nature of the $u_t$. To analyse the latter, we adopt the assumption, common to many Mini-T models, that $u_t$ follows a VAR(1)

$$u_t = \rho u_{t-1} + \varepsilon_t \qquad (3)$$

and so the solution for $y_t$ will be

$$y_t = P y_{t-1} + G u_t \qquad (4)$$

$$y_t = P y_{t-1} + G \rho u_{t-1} + G \varepsilon_t \qquad (5)$$

where $G = D \sum_{j=0}^{\infty} S^j \rho^j$. This solution method requires that one be able to find the $E_t(u_{t+j})$ under a variety of processes, rather than just the VAR(1), and also to be able to estimate $\rho$. For this reason the first step in the lectures will be to look at time series models for a single member of $u_t$.

Specifically, we look at the class of covariance stationary processes and important members of this class such as the AR, MA and ARMA processes. The case where $\rho = 1$ is of particular interest, as it involves a unit root in the $u_t$ process and brings in the concepts of integrated series and the permanent components of such series. A close look at such concepts, and at how they change our approach to estimation and inference, will be needed.

Now, if the rank of $G$ equals $\dim(u_t)$ (which seems likely), then $G^{+} = (G'G)^{-1}G'$ is the generalized inverse of $G$, so that $u_t = G^{+}(y_t - P y_{t-1})$, and this can be used to get

$$y_t = P y_{t-1} + G \rho G^{+}(y_{t-1} - P y_{t-2}) + G \varepsilon_t = (P + G \rho G^{+}) y_{t-1} - G \rho G^{+} P y_{t-2} + G \varepsilon_t \qquad (6)$$

This expression makes clear that the intrinsic dynamics described by the theoretical model (represented by $P$) are augmented by extrinsic dynamics captured by $\rho$, with the consequence that the evolutionary process for $y_t$ changes from a VAR(1) to a VAR(2). Now the system described above is in terms of the full set of variables $y_t$ in the model. We can show that, if we looked only at a single variable $y_{kt}$, the VAR system above would become an ARMA process.
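The VAR(1)-to-VAR(2) point is easy to check numerically. The sketch below simulates a small hypothetical system (all matrices are made up for illustration), forms the generalized inverse $G^{+}$ under the full column rank assumption, and verifies that equation (6) holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: 3 observed variables driven by 2 AR(1) shocks.
P = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])
G = rng.standard_normal((3, 2))
rho = np.diag([0.8, 0.5])

# Generalized (left) inverse, valid when G has full column rank.
G_plus = np.linalg.inv(G.T @ G) @ G.T

# VAR(2) coefficients implied by equation (6).
A1 = P + G @ rho @ G_plus
A2 = -G @ rho @ G_plus @ P

# Simulate the structural form and check the VAR(2) representation.
T = 200
u, y = np.zeros((T, 2)), np.zeros((T, 3))
eps = rng.standard_normal((T, 2))
for t in range(1, T):
    u[t] = rho @ u[t - 1] + eps[t]
    y[t] = P @ y[t - 1] + G @ u[t]
for t in range(2, T):
    assert np.allclose(y[t], A1 @ y[t - 1] + A2 @ y[t - 2] + G @ eps[t])
```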

Since we are often interested in building models in order to discuss items such as business cycles, which are movements in a single variable representing economic activity, we will need to ask how the characteristics of such a series map into a business cycle. This introduces a new topic, and one can make some broad points about the nature of the business cycle. As the lectures move on, however, we will need to ask how the nature of the system above maps into the ARMA process for output, since it is only by doing that that we can begin to answer questions like what drives the business cycle.
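As a preview of that first step, here is a minimal sketch of fitting an ARMA model to a single simulated series with statsmodels; the parameter values are made up for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Simulate an ARMA(1,1): y_t = 0.7 y_{t-1} + e_t + 0.4 e_{t-1}.
T, phi, theta = 500, 0.7, 0.4
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Maximum likelihood fit; order=(p, d, q) with d=0 gives an ARMA(p, q),
# while d=1 would difference the series first (the integrated case).
res = ARIMA(y, order=(1, 0, 1), trend="n").fit()
print(res.params)   # AR and MA coefficients plus the innovation variance
```

Setting phi = 1 in the simulation (a unit root) is exactly the integrated case flagged above, and inference on the fitted coefficients then changes character.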

2.2 Estimation
In order to use a miniature model the parameters underlying its Euler equations need to be estimated. This might be done directly or indirectly. The main direct method has been GMM, and so we will need to review the theory of that estimation method. Generally in macroeconomics it comes down to doing instrumental variables estimation, and so the literature on weak instruments that has emerged over the past decade needs to be considered to see whether it is relevant in this context. The alternative, indirect approach involves working with the solved VAR and performing estimation with FIML, and so we need to look at the relative merits of the two approaches. Although the answers are likely to be context dependent, we can gain a lot of insight into the issues by working with a particular miniature model - a variant of the New Keynesian policy model.
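To fix ideas on why instruments matter, the following sketch uses made-up data (not an equation from the lectures) to contrast OLS with just-identified IV when a regressor is correlated with the error, which is the typical situation once an expected future value in an Euler equation is replaced by its realization.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000

# x is correlated with the error e (endogeneity); the instrument z is
# correlated with x but not with e.
z = rng.standard_normal(T)
e = rng.standard_normal(T)
x = 0.8 * z + 0.5 * e + 0.3 * rng.standard_normal(T)
y = 1.5 * x + e

X, Z = x[:, None], z[:, None]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)   # (Z'X)^{-1} Z'y

print(beta_ols, beta_iv)   # OLS is biased upwards; IV is close to 1.5
```

Shrinking the 0.8 towards zero makes z a weak instrument and the IV estimate becomes erratic, which is the weak-instrument problem referred to above. GMM generalizes the same moment condition $E[z_t e_t] = 0$ to over-identified settings.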

2.3 The nature of the VARs from miniature models


In many miniature models the solution has more structure than given above. Specifically, it is possible to find a set of variables among the $y_t$, $y_{1t}$, which are functions of the remaining variables $y_{2t}$ (and its lags), and it is $y_{2t}$ which follows a VAR. Such a structure means that the dynamics in $y_{1t}$ actually derive from $y_{2t}$, and so there is a common source of dynamics among the $y_{1t}$. This can be seen in miniature models such as the RBC model - see Giannone et al. (2003). It results in reduced rank VAR dynamics. In the econometrics literature such rank deficiency occurred in the common-trend/common-cycle representation of Vahid and Engle (1993), and its presence in Mini-T models has recently been noted by Giannone (2003). This is a different type of rank reduction than would occur if there were fewer shocks in the B-model than variables, i.e. $\dim(u_t) < \dim(y_t)$. Such a static rank reduction affects the covariance matrix of the $y_t$. Nevertheless, in both cases the VAR can be written as depending upon some factors, as in Forni et al. (2003).

A special case of interest is when some of the eigenvalues of $\rho$ are unity, i.e. some of the shocks are permanent. Regardless of the nature of the shocks it is always possible to find the moving average representation for $y_t$, i.e. the form

$$y_t = C(L) \varepsilon_t \qquad (7)$$

can be found, where the elements $C_j$ in the polynomial $C(L) = C_0 + C_1 L + \ldots$ are the $j$-period impulse responses. In miniature models the $C_j$ can generally be found analytically, while simulations of the base model provide numerical solutions. There will be different values for the $C_j$ depending on whether the shocks are strictly permanent or simply pure impulses, i.e. $\rho = I$ or $\rho = 0$. We will refer to these polar cases as the permanent and transitory shock cases. In most instances the shocks $u_t$ are items like productivity, risk premia, money etc., and are defined relative to the model, but they are smaller in number than the $y_t$, i.e. $\dim(u_t) < \dim(y_t)$. Which shocks should be used when attempting to model a given data set, and whether they should be permanent or transitory, is a difficult issue, but one that is necessary to resolve for any empirical work.

When there are permanent shocks the value of $C(1)$ will indicate the long run responses. Any variable whose associated row in $C(1)$ has non-zero elements will be an I(1) variable, and the rank of $C(1)$ will indicate how many co-integrating vectors there are among the I(1) variables. These vectors are found as the $\alpha$ that set $\alpha' C(1) = 0$. As we will discuss later, the $\alpha$ are not unique, and some identifying assumptions need to be placed upon them. Once the $\alpha$ have been found it is possible to re-write the VAR as an ECM. Combinations of permanent and transitory shocks can be handled in a similar way. Some discussion of this is in Levtchenkova et al. (1998).
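A sketch of how the $C_j$ and the $\alpha$ might be computed numerically is given below; the recursion is the standard one for the MA coefficients of a VAR, and the $\alpha$ are read off the left null space of $C(1)$ via an SVD (the function names are ours).

```python
import numpy as np

def ma_coefficients(A_list, G, horizon):
    """The C_j of y_t = C(L) eps_t for the VAR
    y_t = A_1 y_{t-1} + ... + A_p y_{t-p} + G eps_t,
    via the recursion C_0 = G, C_j = A_1 C_{j-1} + ... + A_p C_{j-p}."""
    p = len(A_list)
    C = [G]
    for j in range(1, horizon + 1):
        C.append(sum(A_list[i] @ C[j - 1 - i] for i in range(min(j, p))))
    return C

def cointegrating_vectors(C1, tol=1e-8):
    """A basis for the alpha with alpha' C(1) = 0: the left null space
    of C(1), read off the singular value decomposition."""
    U, s, Vt = np.linalg.svd(C1)
    return U[:, s < tol]
```

In the permanent-shock case the long-run matrix is the limit of the impulse responses, so in practice $C(1)$ would be approximated by running the recursion over a long horizon; and, as the text says, the $\alpha$ so obtained are only identified up to a rotation.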

2.4 Using Loose Theory in Macroeconometric Models


We know that it is possible to start with a miniature theoretical model and to show that it would imply a VAR in its variables. In fact the Euler equations generally have contemporaneous variables entering into them as well as lagged values, i.e. $B_0$ is not the identity matrix, and such systems are called Structural VARs (SVARs). VAR models are on the x-axis, but there has always been interest in trying to move up the curve by placing some restrictions upon the VAR equations that are loosely guided by the ideas from miniature theoretical models, i.e. to work with a SVAR model that is loosely related to theoretical models. This is also true of models that feature permanent shocks; such models are structural vector ECMs (SVECMs). We therefore need to ask how successful these strategies are in incorporating theoretical ideas. As we will see, there are serious questions to be raised about the assumptions used in the transition from VARs to SVARs. These models can be useful, but one needs to be aware of their limitations and to check whether there are problems (where this is possible).
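One common version of those assumptions is a recursive ordering. The sketch below is the standard textbook device (not necessarily how any particular model in the lectures is identified): it recovers a lower-triangular $B_0^{-1}$ from the reduced-form residual covariance by a Cholesky factorization.

```python
import numpy as np

def cholesky_identification(residuals):
    """Reduced-form VAR residuals v_t relate to structural shocks by
    v_t = B0^{-1} eps_t, so Sigma_v = B0^{-1} (B0^{-1})' when the eps_t
    have unit variance.  A recursive scheme takes B0^{-1} to be lower
    triangular, which the Cholesky factor of Sigma_v delivers."""
    Sigma = np.cov(residuals, rowvar=False)       # residuals: T x n
    B0_inv = np.linalg.cholesky(Sigma)            # lower triangular
    eps = residuals @ np.linalg.inv(B0_inv).T     # implied structural shocks
    return B0_inv, eps
```

The identifying assumption here is the ordering of the variables: re-ordering them changes the implied shocks, which is exactly the kind of fragility in the VAR-to-SVAR transition that the lectures will examine.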

3 Differences between Mini-T and Base Models


How do base models differ from the miniature models discussed above? Fundamentally, base models approach the issue of modelling by responding to the question of how one might adapt a miniature theoretical model so that it produces a better match with the data, rather than taking the reverse strategy of asking how one might incorporate some theoretical ideas into a statistical model that closely matches the data, as in a VAR. So the philosophical difference is which axis one starts from, and consequently whether one moves up or down the curve. But there are other differences occasioned by the fact that base models are used in actual policy decisions. This means that they are much larger than miniature theoretical economic and statistical models. This fact tends to have an impact upon the specification, solution and even estimation methods.

An important difference that has many ramifications is that the typical base model in use in the policy process - for example the QPM model at the Bank of Canada, Coletti et al. (1996), and the FPS model at the Reserve Bank of New Zealand, Black et al. (1997) - only allows two types of process for shocks: they are either purely transitory or purely permanent. This restricts the eigenvalues of $\rho$ to be either unity or zero, and contrasts with typical DSGE models where $\rho$ is set to intermediate values, e.g. Smets and Wouters (2002). In the polar cases used in base model simulations one is effectively endowing agents with perfect foresight, as either the new permanent value for the shock is known or it is known that the shock only lasts for one period. That means that the models are solved using perfect foresight algorithms. Because the shock processes are only of two types, Pagan (2003) refers to these base models as incomplete DSGE (IDSGE) models.

There are both advantages and disadvantages to working with IDSGE models. The disadvantage is that one gives up some generality. The main advantage arises in a forecasting context, where often paths for shocks are to be specified based on the priors of the policy makers, and these rarely fit a simple structure like (3). Moreover, not having to specify a process for the shocks simplifies computation of optimal solutions a good deal and produces a neat separation between what the theory can provide in the way of dynamics and what is being imposed as an auxiliary assumption. At the end of the lectures we will consider these differences, and how one might bridge some of the gaps, in much more detail.
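In the linear case, a perfect foresight algorithm amounts to stacking the model equations over time and solving one large linear system, since the entire shock path is treated as known. The sketch below shows this skeleton; actual base models are nonlinear and use Newton-type stacked-time solvers, so this is only illustrative.

```python
import numpy as np

def perfect_foresight_path(B0, A, B1, F, u_path, y0):
    """Solve B0 y_t = A y_{t+1} + B1 y_{t-1} + F u_t, t = 1..T, with the
    whole shock path u_1..u_T known, y_0 given and the terminal condition
    y_{T+1} = 0, by stacking into one block-tridiagonal linear system."""
    T, n = len(u_path), B0.shape[0]
    M = np.zeros((T * n, T * n))
    b = np.zeros(T * n)
    for t in range(T):
        r = slice(t * n, (t + 1) * n)
        M[r, r] = B0
        if t + 1 < T:
            M[r, (t + 1) * n:(t + 2) * n] = -A
        if t > 0:
            M[r, (t - 1) * n:t * n] = -B1
        b[r] = F @ u_path[t]
    b[:n] += B1 @ y0          # known initial condition moves to the RHS
    return np.linalg.solve(M, b).reshape(T, n)
```

A purely transitory shock is then a path with one non-zero entry, while a permanent one is a path held at its new value, with the terminal condition adjusted to the new steady state.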

4 References
Allsopp, C. and D. Vines (2000), "The Assessment: Macroeconomic Policy", Oxford Review of Economic Policy, 16, 1-32.

Binder, M. and M.H. Pesaran (1995), "Multivariate Rational Expectations Models and Macroeconomic Modelling: A Review and Some New Results", in M.H. Pesaran and M. Wickens (eds), Handbook of Applied Econometrics: Macroeconomics, Basil Blackwell, Oxford.

Black, R., V. Cassino, A. Drew, E. Hansen, B. Hunt, D. Rose and A. Scott (1997), "The Forecasting and Policy System: The Core Model", Reserve Bank of New Zealand Research Paper 43.

Coletti, D., B. Hunt, D. Rose and R. Tetlow (1996), "The Bank of Canada's New Quarterly Projection Model, Part 3: The Dynamic Model: QPM", Bank of Canada, Technical Report 75.

Forni, M., M. Lippi and L. Reichlin (2003), "Opening the Black Box: Structural Factor Models versus Structural VARs", ECARES, Universite Libre de Bruxelles, mimeo.

Giannone, D., L. Reichlin and L. Sala (2002), "VARs, Common Factors and the Empirical Validation of Equilibrium Business Cycle Models", Journal of Econometrics (forthcoming).

Levtchenkova, S., A.R. Pagan and J. Robertson (1998), "Shocking Stories", Journal of Economic Surveys, 12, 507-532.

Pagan, A.R. (2003), "Report on Modelling and Forecasting at the Bank of England", Bank of England Quarterly Bulletin, Spring, 1-29.

Smets, F. and R. Wouters (2002), "An Estimated Stochastic Dynamic General Equilibrium Model of the Euro Area", Working Paper #171, European Central Bank.

Vahid, F. and R.F. Engle (1993), "Common Trends and Common Cycles", Journal of Applied Econometrics, 8, 341-360.

