
Journal of International Money and Finance 73 (2017) 252–274


Rethinking monetary policy after the crisis

Frederic S. Mishkin
Graduate School of Business, Columbia University, United States
National Bureau of Economic Research, United States

Article history: Available online 20 February 2017

JEL classification: E58, E52

Abstract: This lecture examines how the recent global financial crisis changes our thinking about how monetary policy
should be conducted. It starts with a discussion of the science and practice of monetary policy before the crisis and then
uses the lessons from the crisis to argue how the practice of monetary policy should be rethought along six dimensions:
flexible inflation targeting, response to asset price bubbles, dichotomy between monetary policy and financial stability
policy, risk management and gradualism, fiscal dominance, and forward guidance.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Before the global financial crisis that started in August 2007, advances in theory and in empirical work in the field of
monetary economics led both academic economists and policy-makers to argue that there was now a well-defined ‘‘science of
monetary policy”. In this lecture, I examine how this science and practice of monetary policy needs to be modified in light
of what we have learned from the recent global financial crisis.

2. The science and practice of monetary policy before the crisis

To discuss the science and practice of monetary policy before the crisis, I first outline nine basic scientific principles,
derived from theory and empirical evidence, which guided thinking within almost all central banks before the crisis, and
then drill down further into the theory of optimal monetary policy.

2.1. Nine basic scientific principles

The nine basic scientific principles of monetary policy are ones that I discussed in a paper that I wrote just before the crisis
began, which was presented at a conference at the Bundesbank in September 2007 (Mishkin, 2009). Here I just list the
principles; more detail on the research behind them can be found in Mishkin (2009).

This paper is based on a lecture that I presented at the 20th International Conference on Macroeconomic Analysis, University of Crete, Rethymno, Crete,
on May 28, 2016. The views expressed here are my own and are not necessarily those of Columbia University or the National Bureau of Economic Research.
Disclosure of my outside compensated activities can be found on my website at http://www0.gsb.columbia.edu/faculty/fmishkin/.
Address: Graduate School of Business, Columbia University, United States.
E-mail address: fsm3@gsb.columbia.edu


1. Inflation is always and everywhere a monetary phenomenon. I interpret this principle, which derives from Milton
Friedman’s (1963) famous quote, as implying not that money growth is the most informative piece of information about
inflation, but rather that the ultimate source of inflation is overly expansionary monetary policy.
2. Price stability has important benefits. This principle, together with the one above it, implies that central banks have the ability
to control inflation and should keep it low and stable.
3. There is no long-run tradeoff between unemployment and inflation. This principle implies that monetary policy should not
try to achieve lower unemployment rates by aiming for a higher inflation rate.
4. Expectations play a crucial role in the macroeconomy. This principle, which came out of the rational expectations revolution
that took root in the 1970s, implies that the management of expectations about future policy is a central element of mon-
etary policy.
5. The Taylor Principle is necessary for price stability. This principle indicates that inflation will be stable only if monetary pol-
icy raises the nominal interest rate by more than the rise in inflation, so that real interest rates rise in response to a rise in inflation.
6. The time-inconsistency problem is relevant to monetary policy. The time-inconsistency problem arises if monetary policy is
conducted on a discretionary, day-by-day basis, so that without a commitment mechanism, monetary policymakers
may find themselves unable to consistently follow an optimal plan over time; specifically, they may find it tempting to
exploit a short-run Phillips curve tradeoff between inflation and employment, which given principle 3 will result only
in higher inflation with no long-run reduction in unemployment.
7. Central bank independence improves macroeconomic performance. Central bank independence can help insulate central
banks from political pressure to pursue overly expansionary monetary policy, pressure that would otherwise exacerbate the time-
inconsistency problem. Indeed, empirical evidence finds that macroeconomic performance improves when central banks
are more independent.
8. Credible commitment to a nominal anchor promotes price and output stability. The inability of monetary policy to boost
employment in the long run, the importance of expectations, the benefits of price stability, and the time-inconsistency
problem are the reasons why a credible commitment to a nominal anchor – i.e. the stabilisation of a nominal variable
such as the inflation rate – is crucial to stabilising long-run inflation expectations, thereby promoting price and output
stability.
9. Financial frictions play an important role in the business cycle. When shocks to the financial system increase information
asymmetry and thereby dramatically increase financial frictions, financial instability results, and with the financial
system no longer able to channel funds to those with productive investment opportunities, the economy experi-
ences a severe economic downturn.
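Principle 5, the Taylor Principle, can be illustrated with a back-of-the-envelope calculation. In the sketch below, the stylised rule i = c + φπ and every coefficient value are my own illustrative assumptions, not anything from the lecture:

```python
# Stylised policy rule i = c + phi * pi (coefficients purely illustrative).
def real_rate(inflation, phi, intercept=1.0):
    nominal = intercept + phi * inflation
    return nominal - inflation  # ex-post real rate r = i - pi

# phi = 1.5 satisfies the Taylor Principle: the real rate rises with inflation.
tightening = real_rate(4.0, 1.5) - real_rate(2.0, 1.5)   # +1.0
# phi = 0.8 violates it: the real rate falls as inflation rises.
loosening = real_rate(4.0, 0.8) - real_rate(2.0, 0.8)    # about -0.4
print(tightening, loosening)
```

The sign of the real-rate response is what matters: only a more-than-one-for-one nominal response makes monetary policy lean against inflation.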

The first eight of these principles are elements of what has been dubbed the ‘‘new neoclassical synthesis” (Goodfriend and
King, 1997), and before the crisis almost all academic economists and central bankers agreed with them. The monetary pol-
icy strategy that follows from the eight principles of the new neoclassical synthesis is referred to in the academic literature
as ‘‘flexible inflation targeting” (Svensson, 1997). It involves a strong, credible commitment by the central bank to stabilising
inflation in the long run, often at an explicit numerical level, but also allows for the central bank to pursue policies aimed at
stabilising output around its natural rate level in the short run.
The ninth principle – that financial frictions play an important role in business cycles – was not as well understood by
academic economists before the crisis, although there were exceptions. However, many officials in central banks did under-
stand this principle, yet it was not explicitly a feature of the models used for policy analysis in central banks.

2.2. Theory of optimal monetary policy

Before the crisis, academic economists and central bankers had developed a theory of optimal monetary policy, which
starts by specifying an objective function that represents economic welfare, that is, the well-being of households in the econ-
omy, and then maximises this objective function, subject to constraints provided by a model of the economy, typically a
dynamic stochastic general equilibrium (DSGE) model. Both the objective function and the model of the economy were
based on the principles of the new neoclassical synthesis.

2.2.1. Objective function

Standard descriptions of the central bank’s objective function have been expressed in terms of two components (e.g.
Svensson, 1997; Clarida et al., 1999; Woodford, 2003). The benefits of price stability (principle 2) are reflected in the first
component, which involves minimising the deviations of inflation from its optimal rate, which most central bankers take
to be around the 2% level. The second component reflects the costs of underutilised resources in the economy and involves
minimising the deviations of real economic activity from its natural rate level, which is the efficient level determined by the
productive potential of the economy. Because expectations about the future play a central role in the determination of infla-
tion and in the transmission mechanism of monetary policy (principle 4), in order to achieve an optimal monetary policy the
intertemporal nature of economic welfare must be taken into account, and so the objective function includes both the pre-
sent state of the economy and the expected path in future periods. Given that inflation is a monetary phenomenon and is
thus viewed as controllable by monetary policy (principle 1), the central bank sets its policy instruments (under normal cir-
cumstances, a short-term interest rate) to maximise the objective function, subject to the constraints.

2.2.2. Constraints: the DSGE model

The constraints, as embodied in macroeconometric models in use at central banks before the crisis, also reflect the prin-
ciples of the new neoclassical synthesis. These models display no long-run tradeoff between unemployment and inflation
(principle 3). Expectations play a central role in household and business behaviour (principle 4) and lead to the existence
of the time-inconsistency problem (principle 6). The models also display the importance of a credible commitment to a
strong nominal anchor in order to produce good monetary policy outcomes (principle 8), and this requires an independent
central bank (principle 7). Because the transmission of monetary policy to the economy operates through the real interest
rate, real interest rates have to rise in order to stabilize inflation (Taylor principle 5).
The approach to analysing optimal monetary policy used by central banks had an additional important feature: it made
use of a linear-quadratic (LQ) framework, in which the equations describing the dynamic behaviour of the economy are linear
– a basic feature of DSGE models – and the objective function specifying the goals of policy is quadratic. For example, the
objective function was characterised as a loss function comprising the squared value of the inflation gap (that is, actual infla-
tion minus desired inflation) and the squared value of the output gap (that is, actual output minus potential output).
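The loss function just described is commonly written, in notation of my own choosing (with π* desired inflation, x_t the output gap, λ the relative weight on output stabilisation, and β a discount factor capturing the intertemporal objective), as:

```latex
L_t = \mathbb{E}_t \sum_{k=0}^{\infty} \beta^{k}
      \left[ \left( \pi_{t+k} - \pi^{*} \right)^{2} + \lambda \, x_{t+k}^{2} \right]
```

Optimal policy minimises this expected discounted sum of squared gaps, subject to the equations of the DSGE model.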
The models also contained another additional feature: a representative-agent framework in which all agents are alike,
thereby precluding the presence of financial frictions as the latter require agents to differ, particularly in the amount of infor-
mation they have. With asymmetric information ruled out, the financial sector has no special role to play in economic fluc-
tuations. Thus, although central bankers were aware of principle 9, i.e. that financial frictions could have an important effect
on economic activity, financial frictions were not a key feature in the macroeconometric models used in central banks and
were not an element of the pre-crisis theory of optimal monetary policy.
Even before the crisis, most central bankers understood that financial disruptions could be very damaging to the econ-
omy, and this explains the extraordinary actions that central banks took during the crisis to shore up financial markets
(Mishkin, 2011). However, the macroeconomic models used for forecasting and policy analysis, whether they were DSGE
models or more traditional macroeconometric models such as FRB/US, which is used at
the Federal Reserve, did not allow for the impact of financial frictions and disruptions on economic activity.
Under the assumptions of the linear-quadratic framework, the optimal policy is certainty equivalent: it can be charac-
terised by a linear time-invariant response to each shock, and the magnitude of these responses does not depend on the vari-
ances or on any other aspect of the probability distribution of the shocks. In such an environment, optimal monetary policy
does not focus on tail risk, which might require risk management. Furthermore, when financial market participants and
wage and price setters are relatively forward-looking, the optimal policy under commitment is characterised by considerable
inertia, which is commonly referred to as gradualism.1
Indeed, in the United States, as well as in many other industrial economies, the actual course of monetary policy before
the crisis was typically very smooth. For example, the Federal Reserve usually adjusted the federal funds rate in increments
of 25 or 50 basis points (that is, ¼ or ½ percentage point) and sharp reversals in the funds rate path were rare. Numerous
empirical studies have characterised monetary policy before the crisis using Taylor-style rules, in which the policy rate
responds to the inflation gap and the output gap; these studies have generally found that the fit of the regression equation
is improved by including a lagged interest rate that reflects the smoothness of the typical adjustment pattern.2
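A partial-adjustment rule of the kind these studies estimate can be sketched in a few lines; the functional form is the standard one with a lagged-rate term, but the coefficient values below are my own illustrative choices, not estimates:

```python
# "Smoothed" Taylor-style rule: the new policy rate is a weighted average of
# the lagged rate and the rule's target rate (rho is the smoothing weight).
def smoothed_rate(i_prev, inflation, gap, rho=0.8, r_star=2.0, pi_star=2.0):
    target = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * gap
    return rho * i_prev + (1.0 - rho) * target

# If inflation jumps from 2% to 4% with a zero output gap, the target jumps
# from 4% to 7%, but the actual rate moves only 60 basis points at first.
rate = smoothed_rate(i_prev=4.0, inflation=4.0, gap=0.0)
print(round(rate, 2))  # 4.6
```

The high weight on the lagged rate is what generates the smooth, gradual adjustment path described in the text.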
Although in many ways central banks conducted monetary policy under a certainty-equivalence strategy, central bankers
were not completely comfortable with this approach to monetary policy. While a linear-quadratic framework may provide
a reasonable approximation of how optimal monetary policy operates under fairly normal circumstances, this approach
is less likely to be adequate for the consideration of monetary policy when there is a risk, however small, of particularly poor
economic performance. First, the dynamic behaviour of the economy may well exhibit nonlinearities, at least in response to
some shocks (Hamilton, 1989; Kim and Nelson, 1999; Kim et al., 2005). Furthermore, the use of a quadratic objective func-
tion does not reflect the extent to which most individuals have a strong preference for minimising the incidence of worst-
case scenarios. Therefore, given the central bank’s ultimate goal of maximising public welfare, there is a case to be made for
monetary policy to reflect the public’s preference for avoiding particularly adverse economic outcomes.
Their discomfort with a certainty-equivalence approach to monetary policy led central bankers to articulate a ‘‘risk manage-
ment” approach to the conduct of monetary policy, even before the crisis. Alan Greenspan indeed described his thinking
about monetary policy as exactly such an approach (Greenspan, 2003), although he was not very explicit about what this
meant. However, it is clear that even before the crisis, central bankers were aware that they had to worry about risks of very
bad economic outcomes. Specifically, they were aware that in some circumstances the shocks hitting the economy might
exhibit excess kurtosis, commonly referred to as ‘‘tail risk”, in which the probability of relatively large disturbances is higher
than would be implied by a Gaussian distribution.

1. The now-classic reference on this approach is Woodford (2003). Also see Goodfriend and King (1997), Rotemberg and Woodford (1997), Clarida et al.
(1999), King and Wolman (1999), Erceg et al. (2000), Benigno and Woodford (2003), Giannoni and Woodford (2005), Levin et al. (2005), and Schmitt-Grohé and
Uribe (2005).
2. See Clarida et al. (1998, 1999), Sack (2000), English et al. (2003), Smets and Wouters (2003), and Levin et al. (2005). Further discussion can be found in
Bernanke (2004).

3. Lessons from the crisis for the science of monetary policy

From my reading of the crisis, there are seven lessons that should change how we think about the science of monetary pol-
icy and monetary policy strategy.

1. Developments in the financial sector have a far greater impact on economic activity than we previously realised.

The global financial crisis of 2007–09 and the accompanying worldwide recession, the most severe since the Great Depression,
demonstrated that financial frictions should be front and centre in macroeconomic analysis: they can
no longer be ignored in the macroeconometric models that central banks use for forecasting and policy analysis, as we
saw was the case before the crisis.

2. The macroeconomy is highly nonlinear.

Because economic downturns typically result in even greater uncertainty about asset values, such episodes may involve
an adverse feedback loop whereby financial disruptions cause investment and consumer spending to decline, which, in turn,
causes economic activity to contract. Such contraction then increases uncertainty about the value of assets, and, as a result,
the financial disruption worsens. In turn, this development causes economic activity to contract further, in a perverse cycle.
The result is that the macroeconomy can at times be highly nonlinear.
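This feedback-loop mechanism can be caricatured in a few lines of code. Everything below — the two-equation structure, the threshold, and every coefficient — is an invented toy for illustration, not a model from the lecture:

```python
# Toy adverse feedback loop: the output gap y decays toward zero on its own,
# but once y falls below a threshold, financial disruption f builds up and
# drags output down further, so large shocks do disproportionate damage.
def simulate(shock, periods=20, threshold=-1.0):
    y, f = -shock, 0.0
    for _ in range(periods):
        f = 0.5 * f + (0.3 * abs(y) if y < threshold else 0.0)  # frictions bite only in bad states
        y = 0.9 * y - 0.1 * f                                   # disruption drags on output
    return y

small, large = simulate(0.5), simulate(3.0)
# The large shock is 6x the small one, but the remaining damage after 20
# periods is far more than 6x larger: the response is nonlinear.
print(abs(large) / abs(small))
```

The point of the threshold is precisely the nonlinearity in the text: the same equations that imply quick recovery from a small shock imply a long, deep slump after a large one.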

3. The zero lower bound is a far more serious problem than we realised.

The constraint that policy interest rates cannot be driven much below zero means that conventional expansionary mon-
etary policy becomes ineffective when a sufficiently negative shock hits the economy, so that a negative policy rate would be
needed to stimulate the economy. This has become known as the zero-lower-bound problem.3 In this situation, central banks
need to resort to nonconventional monetary policy measures such as large-scale asset purchases to stimulate the economy.
Research before the crisis took the view that as long as the inflation objective was around 2%, the zero-lower-bound con-
straint on policy interest rates would bind infrequently, and episodes at the bound were likely to be short-lived (Reifschneider
and Williams, 2000; Coenen et al., 2004).
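The zero-lower-bound constraint can be sketched as a simple truncation of a Taylor-type rule. The rule form and all numbers below are illustrative assumptions of mine, not the lecture's:

```python
# A stylised Taylor-type rule prescribes a desired rate; the actual policy rate
# is floored at zero, so a large enough negative shock leaves policy "stuck".
def policy_rate(inflation, gap, r_star=2.0, pi_star=2.0):
    desired = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * gap
    return max(0.0, desired), desired

# Normal times: inflation at the 2% target, zero gap -> 4% policy rate.
print(policy_rate(2.0, 0.0))   # (4.0, 4.0)
# Deep recession: inflation 1%, output gap -8% -> rule wants -1.5%, rate stuck at 0.
print(policy_rate(1.0, -8.0))  # (0.0, -1.5)
```

The gap between the desired negative rate and the zero floor is the shortfall that nonconventional tools are meant to fill.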
Events since the beginning of the global financial crisis have thoroughly discredited this view. Not only does the zero-
lower-bound problem occur far more frequently than this research suggested, but it can also be long lived. For example,
the Federal Reserve has had to resort to nonconventional monetary policy twice in the last ten years (2003–2004
and starting in 2008) and kept the federal funds rate at the zero lower bound for seven years, until it raised the federal funds
rate target by 25 basis points in December 2015. Indeed, in Europe and Japan, the zero-lower-bound constraint is still
binding, with both the ECB and the Bank of Japan even resorting to a negative interest rate policy of charging banks for keep-
ing deposits at the central bank.
The flaw with this past research is that it was conducted with models that were essentially linear, and yet the global finan-
cial crisis revealed that the economy is likely to be very nonlinear (see Mishkin, 2011). The second reason why the zero-
lower-bound problem is more serious than previously thought is that we now recognize that contractionary shocks from
financial disruptions can be far greater than previously anticipated. Sufficiently large contractionary shocks therefore result
in the zero-lower-bound constraint binding more frequently. The zero lower bound on policy rates has thus become of
much greater relevance to central banks than was anticipated before the recent financial crisis.
Before the global financial crisis, economists believed that even if the zero-lower-bound constraint was reached, monetary
policy would still be effective through the use of nonconventional tools. These nonconventional mon-
etary policy tools—such as large-scale asset purchases to lower risk and term premiums, forward guidance about the future
policy rate so that it would be viewed as staying low for an extended period, and exchange-rate interventions to lower the
value of the domestic currency—would be able to take the place of conventional monetary policy to provide sufficient stim-
ulus to the economy (e.g., see Svensson, 2001; Bernanke, 2004). Although there is research showing that nonconventional
monetary policy does work to stimulate the economy (e.g., see the survey in Williams, 2014), the fact still remains that cen-
tral banks throughout the world have struggled to return their economies to full employment or to get inflation to rise to
their 2% inflation targets.

4. The cost of cleaning up after financial crises is very high.

Besides the obvious cost of a huge loss of aggregate output as a result of the worldwide recession, the global financial
crisis suggests that there are likely to be additional costs that raise the total cost far higher. Here we look at two: (1) financial
crises are typically followed by very slow growth, and (2) the budgetary position of governments may sharply deteriorate.

3. With the introduction of negative interest rates on banks’ deposits held at central banks, and the ability of some central banks to move policy rates below
zero, there has been some slippage in the term ‘‘zero lower bound.” We use this term to identify the unusual situation in which policy rates have been moved to
zero or below.

When economies experience deep recessions, typically they subsequently experience very strong recoveries, often
referred to as V-shaped recoveries. However, as Reinhart and Reinhart (2010) document, this V-shaped pattern is not char-
acteristic of recessions that follow financial crises because the deleveraging process takes a long time, resulting in strong
headwinds for the economy. When analysing 15 severe post-World War II financial crises, as well as the Great Depression,
the 1973 oil shock period and the recent crisis, they find that real GDP growth rates were significantly lower during the dec-
ade following each of these episodes, with the median growth rate about 1 percentage point lower. Furthermore, unemployment
rates stay persistently higher for a decade after crisis episodes, with the median unemployment rate 5 percentage points
higher in advanced economies. Although we have many years to go until a decade has passed following the most recent cri-
sis, it actually looks like it might have worse outcomes than the average crisis episode studied by Reinhart and Reinhart. They
find that 82% of the observations of per capita GDP during the period 2008 to 2010 remain below or equal to the 2007 level,
while the comparable number for the fifteen earlier crisis episodes is 60%. We now recognize that the cumulative output
losses from financial crises are massive, and the current crisis looks like it will be no exception.

5. Price and output stability do not ensure financial stability.

Before the recent financial crisis, the common view, both in academia and in central banks, was that achieving price and
output stability would promote financial stability. This was supported by research (Bernanke et al., 1999; Bernanke and
Gertler, 2001) indicating that monetary policy which optimally stabilises inflation and output is likely to stabilize asset
prices, making asset price bubbles less likely. Indeed, central banks’ success in stabilising inflation and the decreased volatil-
ity of business cycle fluctuations, which became known as the Great Moderation, made policy-makers complacent about the
risks from financial disruptions.
The benign economic environment leading up to 2007, however, surely did not protect the economy from financial insta-
bility. Indeed, it may have promoted it. The low volatility of both inflation and output fluctuations may have lulled market
participants into thinking there was less risk in the economic system than was really the case. Credit risk premiums fell to
very low levels and underwriting standards for loans dropped considerably. Some recent theoretical research even suggests
that benign economic environments may promote excessive risk-taking and may actually make the financial system more
fragile (Gambacorta, 2009). Although price and output stability are surely beneficial, the recent crisis indicates that a policy
focused solely on these objectives may not be enough to produce good economic outcomes.

6. World financial markets have become more interlinked and can have very large impacts on the domestic economy.

The financial crisis that started in August 2007 was truly global in nature. The first disruption to the credit markets at the
beginning of the crisis occurred in Europe, not in the United States. On August 7, 2007, the French bank BNP Paribas sus-
pended redemption of shares held in some of its money market funds. When this was announced, there
was an immediate seizing up of the interbank lending market throughout the world. This was reflected in the so-called
TED spread (the spread between the LIBOR interest rate on three-month Eurodollar deposits and the interest rate on
three-month U.S. Treasury bills). This spread provides an assessment of counterparty risk from one bank lending to another,
reflecting both liquidity and credit risk concerns. It surged from 40 basis points (0.40 percentage points) before August 7 to
240 basis points by August 29, before abating somewhat because of central bank actions.
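The spread arithmetic is simple enough to state in code. Only the 40 and 240 basis-point spread levels come from the text; the underlying rate levels below are invented for illustration:

```python
# TED spread: three-month LIBOR minus the three-month U.S. Treasury bill rate,
# expressed in basis points (1 percentage point = 100 basis points).
def ted_spread_bp(libor_pct, tbill_pct):
    return round((libor_pct - tbill_pct) * 100)

# Hypothetical rate levels consistent with the spreads quoted in the text:
print(ted_spread_bp(5.4, 5.0))   # pre-August 7, 2007: about 40 bp
print(ted_spread_bp(6.4, 4.0))   # late August 2007: about 240 bp
```

A sixfold jump in this spread over three weeks signals a sharp repricing of interbank counterparty risk.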
Although the start of the financial crisis is associated with a development in a European financial institution, the source of
the problems at BNP Paribas was the U.S. credit markets. A boom in U.S. housing prices peaked around 2005. As housing
prices started to decline, mortgage-backed financial securities—in many cases, securities based on subprime residential
mortgages but then divided into more senior claims that were supposedly safe and junior claims that were recognized to
be risky—began to experience huge losses. The resulting problems in U.S. credit markets then spilled over to BNP Paribas,
triggering the disruption in the interbank lending market.
Until September 2008, the financial crisis was mostly confined to the United States, with U.S. credit markets suffering
more disruptions, requiring bailouts of financial institutions such as Bear Stearns, and it was the U.S. economy that contracted
first. However, with the failure of Lehman Brothers in September 2008, the financial crisis became truly global, with financial
markets throughout the world unable to perform their critical function of channeling funds from savers to individuals
and firms with productive investment opportunities. Then not only did the United States experience a severe economic con-
traction, but this now occurred throughout the world.
The global financial crisis thus has demonstrated how interlinked world financial markets have become. The seeds of the
crisis started in U.S. credit markets, then spilled over to a French bank, then bounced back to the United States, and even-
tually produced a severe disruption in credit markets throughout the world. The disruption in credit markets worldwide
then led to the most severe, worldwide economic contraction since the Great Depression.

7. Financial crises often lead to fiscal crises.

As pointed out by Reinhart and Rogoff (2009), in the aftermath of financial crises there is almost always a sharp increase
in government indebtedness. We have seen this exact situation in the aftermath of the current crisis. The massive bailouts of
financial institutions, fiscal stimulus packages, and the sharp economic contractions leading to reductions in tax revenue that
occurred throughout the world have adversely affected the fiscal situation in many countries. Budget deficits of over 10% of
GDP in advanced countries like the United States became common, and even countries such as Ireland and Spain, which
prior to the crisis were held up as paragons of fiscal rectitude because their governments were rapidly reducing their ratios
of government debt to GDP, have found themselves in dire financial straits, with exploding debt-to-GDP ratios. Furthermore,
this rise in indebtedness has the potential to lead to sovereign debt defaults, which have become a huge concern in Europe.
Such defaults could still bring about the demise of the euro, and could even threaten the existence of the European Union
if countries that default on their sovereign debt are forced to leave the EU.

3.1. Implications for the science of monetary policy

How much of the science of monetary policy needs to be altered given the lessons from the financial crisis outlined
above? Pundits, such as Paul Krugman (2009) and The Economist (2009), argued that the financial crisis has
revealed deep flaws in the modern field of macro/monetary economics developed over the last forty or so years and that this
field needs to be completely overhauled.4 I strongly disagree with this assessment. None of the lessons from the financial crisis
in any way undermine or invalidate the nine basic principles of the science of monetary policy developed before the crisis.
Each of the seven lessons from the crisis is completely orthogonal to the theory or empirical work that supports the eight
principles of the new neoclassical synthesis. The lessons in no way weaken the case for any of these principles. The above
conclusion is an extremely important one (and this is why I boldfaced and italicised it to make it stand out). It tells us that
we should not throw out all that we have learned in the field of macro/monetary economics over the last forty years, as some
pundits seem to suggest. Rather, much of the edifice of the science of monetary policy is clearly still as valid today as it was
before the crisis. As we shall see, this has important implications for how we view monetary policy. However, the lesson that
developments in the financial sector can have a large impact on economic activity indicates not only that the ninth principle
about financial frictions is of course valid, but also that it is now even more important than academic economists and central
bankers previously realised.
On the other hand, the lessons from the crisis do undermine two key elements of the pre-crisis theory of optimal mon-
etary policy. The lesson that the macroeconomy is inherently nonlinear undermines the linear-quadratic framework that is a
key element of that policy. The lesson that the developments in the financial sector can have a major impact on economic
activity undermines the representative-agent framework, another key element of the pre-crisis theory of optimal monetary
policy. Doubts about the linear-quadratic and representative-agent frameworks that have arisen because of the financial cri-
sis also have important implications for the conduct of monetary policy.

4. How should we rethink monetary policy?

With an understanding of which areas of the science of monetary policy need to be altered, we can examine what features
of monetary policy should be rethought.

4.1. Flexible inflation targeting

I have referred to the monetary policy strategy that follows from the eight principles of the new neoclassical synthesis as
flexible inflation targeting, for want of a better name. Since, as I have argued here, none of the principles are invalidated by
the events of the recent financial crisis, this approach to monetary policy strategy is still equally valid. The arguments sup-
porting central bank adherence to the principles of the new neoclassical synthesis are still every bit as strong as they were
before the crisis. Therefore, there is still strong support for central banks having a strong, credible commitment to stabilising
inflation in the long run by announcing an explicit, numerical inflation objective, but also having the flexibility to pursue
policies aimed at stabilising output around its natural rate level in the short run.
Although the support for the flexible inflation targeting framework is not weakened by the lessons from the financial cri-
sis, the lessons do suggest that the details of how flexible inflation targeting is conducted, and of what is meant by flexibility,
need to be rethought. Let us first look at two possible basic modifications to the flexible inflation targeting framework: the
choice of the level of the inflation target, and whether some form of history-dependent targeting would produce better eco-
nomic outcomes.

4.1.1. Level of the inflation target

The lesson that the zero-lower-bound problem is more serious than previously thought raises the question of whether the
level of the long-run inflation target should be raised from the typical value of around 2%. With a higher long-run inflation
target, the zero-lower-bound constraint would be less likely to bind, and the real interest rate could be driven down to lower
levels in the face of adverse aggregate demand shocks. Prominent economists, such as Olivier Blanchard, Paul Krugman and
Lawrence Ball, have suggested that the inflation target be raised from the 2% to the 4% level.5 With expectations of inflation

See Lucas (2009) and Cochrane (2009) for spirited replies to both The Economist (2009) and Krugman (2009) articles.
5. E.g., see Blanchard et al. (2010), Krugman (2009), and Ball (2014).

anchored to this long-run target, when the nominal interest rate is lowered to zero, the real interest rate could fall to as
low as −4%, rather than −2% with the 2% inflation target. Conventional monetary policy, which involves setting the nominal
interest rate, would then be able to ease monetary policy to a greater extent than it could with the lower long-run inflation
target. Another way of stating this is to say that the zero lower bound on the policy rate would be less binding with a higher
long-run inflation target.
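The arithmetic here is just the Fisher relation, real rate = nominal rate − expected inflation. A minimal sketch (the function name is mine, for illustration only):

```python
# Fisher relation (approximate): real rate = nominal rate - expected inflation.
def real_rate(nominal_rate: float, expected_inflation: float) -> float:
    return nominal_rate - expected_inflation

# At the zero lower bound the nominal policy rate is stuck at 0, so the
# achievable real-rate floor is minus the (anchored) expected inflation rate:
print(real_rate(0.0, 0.02))  # 2% inflation target: floor of -0.02, i.e. -2%
print(real_rate(0.0, 0.04))  # 4% inflation target: floor of -0.04, i.e. -4%
```

A higher anchored inflation target thus lowers the floor on the real rate, which is exactly the sense in which the zero lower bound becomes less binding.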
Although the logic of this argument for a higher inflation target is correct, I think that the answer to the question, "Should
the long-run inflation target be raised to above 2%?", is no. We have to look not only at the benefits of a higher inflation target,
but also at the costs. If it were no more difficult to stabilize the inflation rate at a 4% level than at a 2% level, then the case for
raising the inflation target to 4% would be much stronger. However, the history of the inflation process suggests that this is
not the case. Inflation rates that accord with the Greenspan definition of price stability,6 i.e., "the state in which expected
changes in the price level do not effectively alter business or household decisions", seem to be below the 3% level. Once inflation
starts to rise above this level, the public is likely to believe that price stability is no longer a credible goal of the central bank, and
then the question arises, "If a 4% level of inflation is OK, then why not 6%, or 8%, and so on?"
This was the experience in the United States from the 1960s to the 1980s. At the beginning of the 1960s, the inflation rate
was below 2% and policymakers believed that they could lower the unemployment rate if they were willing to tolerate inflation rates in the 4–5% range. However, when the inflation rate began to rise above the 3% level, it kept on rising, leading to the so-called Great Inflation period. Getting inflation back down to the 2% level was then very costly. No central banker wants to go
through that again. Indeed, one of the great successes of central banks in the last twenty years is the anchoring of inflation
expectations to around the 2% level. Raising the inflation target to 4% could jeopardize this hard-won success, with the result
that there no longer would be a credible nominal anchor, which is so crucial to the health of the economy.
A second argument against raising the long-run inflation target is that although raising the target might have benefits in
the short-run, the costs of higher inflation in terms of the distortions it produces in the economy are ongoing. Thus, although
they may not be large in any given year, these costs add up, and in present value terms might outweigh the intermittent
benefits obtained from the zero lower bound not being binding in periods such as those we have recently experienced.

4.1.2. History-dependent targeting

A traditional inflation targeting regime treats bygones as bygones and so tries to achieve the inflation target, say 2%, no
matter what has happened in the past. Woodford (2003) has provided a compelling theoretical argument that monetary pol-
icy should, in contrast, be history-dependent, that is, if the inflation target has been undershot in the recent past, monetary
policy should strive to overshoot it in the near future. Researchers such as Svensson (1999), Dittmar et al. (1999, 2000), Vestin
(2000, 2006) and Woodford (2003) have shown that a price-level target, which displays this type of history-dependence, pro-
duces less output variance than an inflation target. The reasoning is straightforward. A negative demand shock that results in
the price level falling below its target path, say a 2% growth path, requires monetary policy to try to raise the price level back
to its 2% target growth path, so that inflation will temporarily rise above 2%. The rise in expected inflation then lowers the
real interest rate, thereby stimulating aggregate demand and economic activity. Hence a history-dependent price-level tar-
get is an automatic stabilizer: a negative demand shock leads to stabilising expectations, which stabilize the economy. The
mechanism is even more effective when the negative demand shock is so large that the zero lower bound on interest rates
becomes binding, as Eggertsson and Woodford (2003) point out.
Another history-dependent policy that is quite similar to a price-level target is a nominal GDP target. Eggertsson and
Woodford (2003, 2004) argue for a target criterion of an output-adjusted price level which is the log of a price index plus
the output gap multiplied by a coefficient (which reflects the relative weight on the output gap versus inflation stabilization).
Because this concept of an ‘‘output-gap adjusted price level” might be hard for the public to understand, Woodford (2012)
suggests that a simpler criterion that would work nearly as well would have the target criterion be a nominal GDP path
which grows at the inflation target (e.g. 2%) plus the growth rate of potential GDP. (If potential GDP growth were estimated to be at a 2% annual rate, this would imply a growth rate of the nominal GDP path at a 4% rate.)
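Woodford's simpler criterion can be sketched as a target path for the level of nominal GDP (an illustrative sketch; the function name and the sum-of-growth-rates approximation are mine, not from the text):

```python
# Target path for nominal GDP: the level grows each year at the inflation
# target plus the estimated growth rate of potential real GDP.
def nominal_gdp_target_path(initial_level, inflation_target, potential_growth, years):
    growth = inflation_target + potential_growth  # e.g. 2% + 2% = 4% per year
    return [initial_level * (1.0 + growth) ** t for t in range(years + 1)]

# With a 2% inflation target and 2% potential growth, each point on the
# path is 4% above the previous one:
path = nominal_gdp_target_path(100.0, 0.02, 0.02, 3)
```

The history dependence is in the level: any shortfall of actual nominal GDP below the path must be made up later, since the path itself keeps growing regardless of past misses.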
There are formidable challenges to the adoption of either a price-level or a nominal-GDP target. First, it is more difficult to
explain to the public and financial market participants that the central bank is aiming to hit a price-level or nominal-GDP
path whose level changes over time. Targeting an inflation rate such as
2% is much more straightforward because this 2% number is kept constant. Second, when inflation temporarily rises above
2%, as the central bank intends, the central bank needs to make sure that the public understands that it is not weakening its
commitment to the long-run 2% inflation target. A nominal GDP target has an additional difficulty because it requires that
the central bank take a stance on the number for the growth rate of potential GDP, a number on which there is a great deal of
uncertainty. This problem would be particularly severe if the central bank ignored what was actually happening to inflation
in estimating potential GDP and the output gap, a mistake that the Federal Reserve made in the 1970s (e.g., see Orphanides).
The challenges described above help explain why central banks have not adopted either a price-level or a nominal-GDP
target. However, there is a way to skin the cat to obtain the benefits of a history-dependent monetary policy with an

6. Greenspan apparently first expressed this definition in the July 1996 FOMC meeting (page 51 of the transcript, which can be found at http://www.federalreserve.gov/monetarypolicy/files/FOMC19960703meeting.pdf). This definition was later made public in numerous speeches.

approach that can be readily explained to the public and the markets. This approach involves indicating that the 2% inflation
target should be for an average over a particular period rather than for a particular future date, such as two years ahead. This
modification is one that would make the inflation target history dependent and yet would be easy to explain. If inflation
had been running at a rate of 1.5% for several years, then the central bank would explain that to meet the 2% inflation target
on average, it would have to shoot for an inflation rate of 2.5% for several years. However, this in no way weakens the commitment to the 2% long-run inflation objective. This policy would be particularly effective when the zero-lower-bound constraint is binding because the higher inflation expectation of 2.5% would lower the real interest rate, thereby providing more
stimulus to the economy. This modification to the inflation target would also have the benefit of encouraging a central bank
to actually pursue more expansionary monetary policy in the face of negative aggregate demand shocks.
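The averaging arithmetic behind this example can be made explicit (a sketch; the function and the fixed-window convention are my own illustration, not part of the proposal):

```python
# Inflation rate needed over the next `horizon` years so that inflation over
# the whole window (past years plus horizon) averages out to the target.
def required_overshoot(target, past_rates, horizon):
    window = len(past_rates) + horizon
    return (target * window - sum(past_rates)) / horizon

# After three years of 1.5% inflation, averaging 2% over a six-year window
# requires roughly 2.5% inflation for the next three years:
rate = required_overshoot(0.02, [0.015, 0.015, 0.015], 3)  # about 0.025
```

The longer the undershoot has lasted relative to the remaining horizon, the larger the required overshoot, which is precisely the history dependence the modification builds in.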
This modification to the inflation target is not a theoretical curiosity. Indeed, it has been adopted by the Reserve Bank of
Australia when, starting in the mid-1990s, it used the following language to describe its inflation target: ‘‘The Governor and
the Treasurer have agreed that the appropriate target for monetary policy in Australia is to achieve an inflation rate of 2–3%,
on average [my italics], over the cycle.” With this type of inflation target, Australia has arguably had the best monetary policy
outcomes of any advanced economy in the world, with an average inflation rate since 1995 of 2.7%, which is very close to the
2.5% midpoint of its inflation target range, while the Australian economy has not had a recession in over twenty-five years.
(Of course, luck and other policies may have played an important role in producing these excellent outcomes.)
The empirical case for the benefits of a history-dependent inflation target in which the central bank aims to overshoot
the 2% inflation target after undershoots has occurred is provided by Curdia (2016). This paper conducts an exercise asking
what would be the evolution of the U.S. economy starting in 2016 if monetary policy were based on optimal control, in which
the policy rate is set to maximize an objective function in which inflation is stabilized around 2% and output is stabilized
around potential output, while avoiding excessive interest rate volatility. Under this optimal control policy, the inflation rate
rises as much as 0.4 percentage points above the 2% target and stays above 2% for five years. The paper thus provides empir-
ical support for a history-dependent monetary policy that overshoots the inflation target of 2% temporarily after the inflation
rate has been below 2% for a number of years. Note that this policy is exactly what would transpire if the central bank com-
mitted to an inflation target that is for an average over a particular period, which could either be over the cycle as has been
adopted by the Reserve Bank of Australia, or alternatively could be for a particular period, say a moving average over ten years.
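The objective function described, stabilizing inflation around 2% and output around potential while penalizing interest rate volatility, can take the standard quadratic form (a generic textbook sketch with illustrative weights $\lambda$ and $\nu$, not necessarily the exact specification in Curdia (2016)):

```latex
\min_{\{i_{t+j}\}} \; E_t \sum_{j=0}^{\infty} \beta^{j}
\Big[ (\pi_{t+j} - \pi^{*})^{2}
    + \lambda \,(y_{t+j} - y^{*}_{t+j})^{2}
    + \nu \,(i_{t+j} - i_{t+j-1})^{2} \Big],
\qquad \pi^{*} = 2\%
```

where $\pi$ is inflation, $y$ output, $y^{*}$ potential output, $i$ the policy rate, and $\beta$ a discount factor. The optimal path temporarily holds $\pi$ above $\pi^{*}$ after a sustained undershoot, which is the overshooting result cited above.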

4.2. How should monetary policy respond to asset-price bubbles?

One active debate in central banks before the crisis focused on how central banks should respond to potential asset price
bubbles. Because asset prices are a central element in the transmission mechanisms of monetary policy, the theory of opti-
mal monetary policy requires that monetary policy respond to asset prices in order to obtain good outcomes in terms of
inflation and output. Hence, the issue of how monetary policy might respond to asset price movements is not whether it
should respond at all, but whether it should respond at a level over and above that called for in terms of the objectives of
stabilising inflation and employment. Another way of defining the issue is whether monetary policy should try to pop possibly-developing asset price bubbles, or slow their growth, in order to minimise damage to the economy when these bubbles burst. Alternatively, rather than responding directly to possible asset price bubbles, should the monetary authorities
respond to asset price declines only after a bubble bursts, to stabilize both output and inflation? These opposing positions
have been characterised as leaning against asset price bubbles versus cleaning up after the bubble bursts, and so the debate
over what to do about asset price bubbles has been labelled the ‘‘lean versus clean” debate.
Even before the crisis, there was no question that asset price bubbles have negative effects on the economy. As Dupor
(2005) emphasised, the departure of asset prices from fundamentals can lead to inappropriate investments that decrease
the efficiency of the economy. Furthermore, throughout history the bursting of bubbles has been followed by sharp declines
in economic activity, as Kindleberger’s (1978) famous book demonstrated.
The clear-cut dangers of asset price bubbles led some economists – both inside and outside central banks, for example
Cecchetti et al. (2000), Borio and Lowe (2002), Borio et al. (2003), and White (2004) – to argue that central banks should
at times ‘‘lean against the wind” by raising interest rates to stop bubbles from getting out of hand. They argued that raising
interest rates to slow a bubble’s growth would produce better outcomes because it would either prevent the bubble or would
result in a less severe bursting of the bubble, with far less damage to the economy.
The opposing view to the ‘‘leaning against the wind” view that asset prices should have a special role in the conduct of
monetary policy, over and above that implied by their foreseeable effect on inflation and employment, is often referred to as
the ‘‘Greenspan doctrine”, because Alan Greenspan, when Chairman of the Federal Reserve Board, strenuously argued that monetary policy should not try to lean against asset price bubbles, but rather should just clean up after they burst (Greenspan, 2002).7
There were several elements to this argument.
First, bubbles are hard to detect. In order to justify leaning against a bubble, a central bank must assume that it can iden-
tify a bubble in progress. That assumption was viewed as highly dubious because it is hard to believe that the central bank
has such an informational advantage over private markets. If the central bank has no informational advantage, and if it

7. I was also a proponent of this view (Mishkin, 2001, 2007).

knows that a bubble has developed, the market will almost surely know this too, and the bubble will burst. Thus, any bubble
that can be identified with certainty by the central bank would be unlikely ever to develop much further.
A second objection to leaning against bubbles was that raising interest rates may be very ineffective in restraining the
bubble, given that market participants expect such high rates of return from buying bubble-driven assets.8
A third objection was that there are many asset prices, and at any one time a bubble may be present in only a fraction of
assets. Monetary policy actions are a very blunt instrument in such a case, as such actions are likely to affect asset prices in
general, rather than solely those in a bubble.
Fourth, although some theoretical models suggested that raising interest rates could diminish the acceleration of asset
prices, others suggested that raising interest rates could cause a bubble to burst more severely, thus doing even more dam-
age to the economy (Bernanke et al., 1999; Greenspan, 2002; Gruen et al., 2005; Kohn, 2006). This view was supported by
historical examples, such as the monetary tightening that occurred in 1928 and 1929 in the United States and in 1989 in
Japan, suggesting that raising interest rates may cause a bubble to burst more severely, thereby increasing the damage to
the economy.9 Another way of saying this is that bubbles are departures from normal behaviour, and it is unrealistic to expect
that the usual tools of monetary policy will be effective in abnormal conditions. Attempts to prick bubbles were thus viewed as
possibly violating the Hippocratic oath of ‘‘do no harm”.
Fifth and particularly important was the view discussed above that cleaning up after a bubble bursts is relatively easy
because the monetary authorities have the tools to keep the harmful effects of a bursting bubble at a manageable level.
Taking all these objections together leads to the conclusion that the cost of leaning against asset price bubbles was likely
to be high, while the cost of bursting bubbles could be kept low. Rather than advocating leaning against bubbles, the view
supported an approach in which central banks just clean up after the bubble. This approach was fully consistent with mon-
etary policy focusing on stabilising inflation and employment without a special focus on asset price bubbles.
The Greenspan doctrine, which was strongly supported by Federal Reserve officials, held great sway in the central bank-
ing world before the crisis. However, the lesson from the crisis that the cost of cleaning up after financial crises is very high undermines one of the key linchpins of the argument for the Greenspan doctrine: that the cost of cleaning up after an asset-price bubble would be low. The lean versus clean debate initially focused largely on whether monetary policy should
react to potential asset-price bubbles. However, given the interaction between the housing-price bubble and credit markets
in the run up to the global financial crisis, there is now a recognition that we need to distinguish between two different types
of asset-price bubbles.

4.2.1. Two types of asset-price bubbles

As pointed out in Mishkin (2010a), not all asset price bubbles are alike. Financial history and the financial crisis of 2007–2009 indicate that one type of bubble, which is best referred to as a credit-driven bubble, can be highly dangerous. With this
type of bubble, there is the following typical chain of events: Because of either exuberant expectations about economic pro-
spects or structural changes in financial markets, a credit boom begins, increasing the demand for some assets and thereby
raising their prices. The rise in asset values, in turn, encourages further lending against these assets, increasing demand, and
hence their prices, even more. This feedback loop can generate a bubble, and the bubble can cause credit standards to ease as
lenders become less concerned about the ability of the borrowers to repay loans and instead rely on further appreciation of
the asset to shield themselves from losses.
At some point, however, the bubble bursts. The collapse in asset prices then leads to a reversal of the feedback loop in
which loans go sour, lenders cut back on credit supply, the demand for the assets declines further, and prices drop even
more. The resulting loan losses and declines in asset prices erode the balance sheets at financial institutions, further dimin-
ishing credit and investment across a broad range of assets. The decline in lending depresses business and household spend-
ing, which weakens economic activity and increases macroeconomic risk in credit markets. In the extreme, the interaction
between asset prices and the health of financial institutions following the collapse of an asset price bubble can endanger the
operation of the financial system as a whole.
However, there is a second type of bubble that is far less dangerous, which can be referred to as an irrational exuberance
bubble. This type of bubble is driven solely by overly optimistic expectations and poses much less risk to the financial system
than credit-driven bubbles. For example, the bubble in technology stocks in the late 1990s was not fueled by a feedback loop
between bank lending and rising equity values and so the bursting of the tech-stock bubble was not accompanied by a
marked deterioration in bank balance sheets. The bursting of the tech-stock bubble thus did not have a very severe impact
on the economy and the recession that followed was quite mild.

4.2.2. The case for leaning versus cleaning

The recent crisis has clearly demonstrated that the bursting of credit-driven bubbles can not only be extremely costly, but also very hard to clean up after. Furthermore, bubbles of this type can occur even if there is price and output stability in the
period leading up to them. Indeed, price and output stability might actually encourage credit-driven bubbles because it leads

8. For example, see the discussion in Greenspan (2002).
9. For example, see Gruen et al. (2005), Hamilton (1989), Cargill et al. (1995), Jinushi et al. (2000) and Posen (2003).

market participants to underestimate the amount of risk in the economy. The case for leaning against potential bubbles
rather than cleaning up afterwards has therefore become much stronger.
However, the distinction between the two types of bubbles, one which (credit-driven) is much more costly than the other,
suggests that the lean versus clean debate may have been miscast, as White (2009) indicates. Rather than leaning against potential asset-price bubbles in general, which would include both credit-driven and irrational exuberance bubbles, there is a much stronger case for leaning against credit bubbles, that is, leaning against credit-driven bubbles but not against irrational exuberance bubbles. As White (2009) and Mishkin (2010b) have pointed out, it is much easier to identify credit
bubbles than it is to identify asset-price bubbles. Financial regulators and central banks often have information that lenders
have weakened their underwriting standards, that risk premiums appear to be inordinately low or that credit extension is
rising at abnormally high rates. The argument that it is hard to identify asset-price bubbles is therefore not a valid argument
against leaning against credit bubbles.

4.2.3. Macroprudential policies

Although there is a strong case to lean against credit bubbles, what policies will be most effective? First, it is important to recognize that the key principle for designing effective policies to lean against credit bubbles is that they should fix market failures. Credit extension necessarily involves risk taking. It is only when this risk taking is excessive because of market failures
that credit bubbles are likely to develop. Recognizing that market failures are the problem, it is natural to look to prudential
regulatory measures to constrain credit bubbles.
Some of these regulatory measures are simply the usual elements of a well-functioning prudential regulatory and super-
visory system. These elements include adequate disclosure and capital requirements, liquidity requirements, prompt correc-
tive action, careful monitoring of an institution’s risk-management procedures, close supervision of financial institutions to
enforce compliance with regulations, and sufficient resources and accountability for supervisors.
The standard measures mentioned above focus on promoting the safety and soundness of individual firms and fall into the
category of what is referred to as microprudential supervision. However, even if individual firms are operating prudently,
there still is a danger of excessive risk-taking because of the interactions between financial firms that promote externalities.
An alternative regulatory approach, which deals with these interactions, focuses on what is happening in credit markets in
the aggregate, referred to as macroprudential regulation and supervision.
Macroprudential regulations can be used to dampen the interaction between asset price bubbles and credit provision. For
example, research has shown that the rise in asset values that accompanies a boom results in higher capital buffers at finan-
cial institutions, supporting further lending in the context of an unchanging benchmark for capital adequacy; in the bust, the
value of this capital can drop precipitously, possibly even necessitating a cut in lending.10 It is important for research to con-
tinue to analyze the role of bank capital requirements in promoting financial stability, including whether capital requirements
should be adjusted over the business cycle. Other macroprudential policies to constrain credit bubbles include dynamic provi-
sioning by banks, lower ceilings on loan-to-value ratios or higher haircut requirements for repo lending during credit expan-
sions, and Pigouvian-type taxes on certain liabilities of financial institutions.11
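One widely discussed time-varying tool of this kind is the countercyclical capital buffer. The sketch below follows the spirit of the Basel III guidance, under which the buffer add-on ramps from 0 to 2.5% of risk-weighted assets as the credit-to-GDP gap rises from roughly 2 to 10 percentage points (the function itself is my illustration, not from the text):

```python
# Countercyclical capital buffer add-on as a function of the credit-to-GDP
# gap (the ratio of credit to GDP minus its trend, in percentage points).
# Zero in normal times; ramps up linearly during credit booms; capped.
def ccyb_addon(credit_gap_pp, low=2.0, high=10.0, max_buffer=0.025):
    if credit_gap_pp <= low:
        return 0.0
    if credit_gap_pp >= high:
        return max_buffer
    return max_buffer * (credit_gap_pp - low) / (high - low)
```

A gap of 6 percentage points, halfway up the ramp, would call for a 1.25% buffer; in the bust the gap closes and the requirement is released, leaning against the feedback loop between asset prices and credit described above.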
Some policies to address the risks to financial stability from asset price bubbles could be made a standard part of the regulatory system and would be operational at all times, whether a bubble was in progress or not. However, because specific or
new types of market failures might be driving a particular credit bubble, there is a case for discretionary prudential policies
to limit the market failures in such a case. For example, during certain periods risks across institutions might become highly
correlated, and discretionary policy to respond to these higher-stress environments could help reduce systemic risk.

4.2.4. Monetary policy

The fact that the low interest rate policies of the Federal Reserve from 2002 to 2005 were associated with excessive risk
taking suggests to many that overly easy monetary policy might promote financial instability. Using aggregate data, Taylor
(2007) has argued that excessively low policy rates led to the housing bubble, while Bernanke (2010), Bean et al. (2010), and
Turner (2010) have argued otherwise. Although it is far from clear that the Federal Reserve is to blame for the housing bub-
ble, the explosion of microeconomic research, both theoretical and empirical, suggests that monetary policy can play a role in creating credit bubbles. Borio and Zhu (2008) have called this mechanism the ‘‘risk taking channel of
monetary policy”.
The literature provides two basic reasons why low interest rates might promote excessive risk taking. First, as Rajan
(2005, 2006) points out, low interest rates can increase the incentives for asset managers in financial institutions to search
for yield and hence increase risk taking. These incentives could come from contractual arrangements which compensate
asset managers for returns above a minimum level, often zero, and with low nominal interest rates only high risk invest-
ments will lead to high compensation. They also could come from fixed rate commitments, such as those provided by insur-
ance companies, forcing the firm to seek out higher yielding, riskier investments. Or they could arise from behavioral
considerations such as money illusion, in which asset managers believe that low nominal rates indicate that real returns are low, encouraging them to purchase riskier assets to obtain a higher target return.

10. For example, see Kashyap and Stein (1994) and Adrian and Shin (2009).
11. For example, see Bank of England (2009) and French et al. (2010).

A second mechanism for how low interest rates could promote risk taking operates through income and valuation effects.
If financial firms borrow short and lend long, as is often the case, low interest rates increase net interest margins and raise the value of these firms, increasing their capacity to take on leverage and risk (Adrian and Shin, 2009, 2010;
Adrian et al., 2010). In addition, low interest rates can boost collateral values, again enabling increased lending. This mech-
anism is closely related to the financial accelerator of Bernanke and Gertler (1999) and Bernanke et al. (1999), except that it
derives from financial frictions for lenders rather than borrowers.
Monetary policy can also encourage risk taking in two other ways. Although desirable from a viewpoint of establishing
credibility and a strong nominal anchor, which helps stabilize the economy, more predictable monetary policy can reduce
uncertainty and contribute to asset managers underestimating risk (Gambacorta, 2009). Monetary policy which cleans up
after financial disruptions by lowering interest rates, which has been named the ‘‘Greenspan put” because this was the actual
and stated policy of the Federal Reserve when Alan Greenspan headed the Fed, can lead to a form of moral hazard in which
financial institutions expect monetary policy to help them recover from bad investments (e.g., see Tirole and Farhi, 2009;
Keister, 2010; Wilson and Wu, 2010). The Greenspan put can also increase systemic risk because it is only exercised when
many financial firms are in trouble simultaneously and so they may be encouraged to pursue similar investment strategies,
thereby increasing the correlation of returns.
Micro empirical analysis provides a fair amount of support for the risk-taking channel of monetary policy. Jimenez et al.
(2008), using Spanish credit registry data, find that low nominal interest rates, although they decrease the probability of
defaults in the short term, lead to riskier lending and more defaults in the medium term. Ioannidou et al. (2009) examine
a quasi-controlled experiment in Bolivia and find that lower U.S. federal funds rates increase lending to low-quality borrowers, which ends up with a higher rate of default and yet at lower interest rate spreads. Delis and Kouretas (2010), using data
from euro area banks, find a negative relationship between the level of interest rates and the riskiness of bank lending.
Adrian and Shin (2010) discuss and provide evidence for the risk taking channel of monetary policy using more aggregate
data. They find that reductions in the federal funds rate, increase term spreads and hence the net interest margin for financial
intermediaries. The higher net interest margin, which makes financial intermediaries more profitable, is then associated with
higher asset growth, and higher asset growth, which they interpret as a shift in credit supply, predicts higher real GDP
Given the support for the risk-taking channel, does this mean that monetary policy should be used to lean against credit
bubbles? Besides some of the previously listed objections, there is the additional objections that if monetary policy is used to
lean against credit bubbles, there is a violation of the Tinbergen (1939) principle because one instrument is being asked to do
two jobs: (1) stabilize the financial sector and (2) stabilize the economy.12 Because there is another instrument to stabilize the
financial sector, macroprudential supervision, wouldn’t it be better to use macroprudential supervision to deal with financial
stability, leaving monetary policy to focus on price and output stability?
This argument would be quite strong if macroprudential policies were able to do the job. However, there are doubts on
this score. Prudential supervision is often subject to more political pressure than is monetary policy because it affects the
bottom line of financial institutions more directly. Thus financial institutions will have greater incentives to lobby politicians to discourage
macroprudential policies that would rein in credit bubbles. After all, during a credit bubble financial institutions will be mak-
ing the most money and so have greater incentives and more resources to lobby politicians to prevent restrictive macropru-
dential policies. Indeed, because of political constraints, it is not even clear that central banks have the tools to adequately
implement macroprudential policies (Fischer, 2015).
The possibility that macroprudential policies may be circumvented, and so might not be able to constrain credit bubbles,
suggests that monetary policy may have to be used as well.13 But this raises another objection to using monetary policy to lean
against credit bubbles: it may not work. I am sympathetic to the view discussed earlier that tightening monetary policy may be
ineffective in restraining a particular asset-price bubble because market participants expect such high rates of return from purchasing
bubble-driven assets. On the other hand, the evidence on the risk-taking channel of monetary policy suggests that there is a
stronger case that raising interest rates would help restrain lending growth and excessive risk taking. Furthermore, the theo-
retical analysis discussed immediately above suggests that if the public believes that the central bank will raise interest rates
when a credit bubble looks like it is forming, then expectations in credit markets will work to make this policy more effective.
The expectation that rates will go up with increased risk taking will make this kind of activity less profitable and thus make it
less likely that it will occur. Moreover, expectations that rates will rise with increased risk-taking mean that interest rates
will not have to be raised as much to have their intended effect.
Nonetheless, using monetary policy to lean against credit bubbles is not a monetary policy strategy that can be taken
lightly. Doing so could at times result in a weaker economy than the monetary authorities would desire or inflation that falls
below its target. This suggests that there is a monetary policy tradeoff between having the inflation forecast at the target and
the pursuit of financial stability. Also, having monetary policy focus on financial stability might lead to confusion about the
central bank's commitment to the inflation target, with potentially adverse effects on economic outcomes.

12 Stabilizing the financial sector is not a completely separate objective from stabilising the economy because financial instability leads to instability in
economic activity and inflation. However, because the dynamics of financial instability are so different from the dynamics of inflation and economic activity, for
purposes of the Tinbergen principle, promoting financial stability can be viewed as a separate policy objective from stabilising the economy.
13 However, as pointed out in Boivin et al. (2010), whether monetary policy will be effective in countering financial imbalances depends on the nature of the
shocks. They conduct simulations showing that where financial imbalances reflect specific market failures and regulatory policies can be directed at such
failures, monetary policy is less likely to be effective. Monetary policy is likely to be more effective when financial imbalances arise from economy-wide factors.
Another danger from having monetary policy as a tool to promote financial stability is that it might lead to decisions to
tighten monetary policy when it is not needed to constrain credit bubbles. A situation of low interest rates does not neces-
sarily indicate that monetary policy is promoting excessive risk taking. One lesson from the analysis here is that policymak-
ers, and especially monetary policymakers, will want tools to assess whether credit bubbles are developing. Research is
underway (e.g., see Borio and Lowe, 2002; Adrian and Shin, 2010) to find measures that will signal if credit bubbles are likely
to be forming. High credit growth, increasing leverage, low risk spreads, surging asset prices and surveys that assess whether
credit underwriting standards are being eased are pieces of data that can help central banks decide if there is imminent danger
of credit bubbles. Monitoring of credit market conditions will become an essential activity of central banks, and research on
the best ways of doing so will have a high priority in the future.
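As an illustration of how such indicator monitoring might work, the sketch below flags quarters when the credit-to-GDP ratio runs well above its recent trend, in the spirit of the Borio and Lowe (2002) credit-gap measure. The data series, the trailing-mean trend, the eight-quarter window and the 4-percentage-point threshold are all illustrative assumptions, not values from the paper (Borio and Lowe use a one-sided HP-filter trend).

```python
# Illustrative early-warning sketch in the spirit of Borio and Lowe (2002):
# flag credit-bubble risk when the credit-to-GDP ratio rises well above its
# recent trend. The window, threshold, and data are made up for illustration.

def credit_gap_warnings(credit_to_gdp, window=8, threshold=4.0):
    """Return indices of quarters where the credit-to-GDP ratio (in %)
    exceeds its trailing mean over `window` quarters by more than
    `threshold` percentage points."""
    warnings = []
    for t in range(window, len(credit_to_gdp)):
        trend = sum(credit_to_gdp[t - window:t]) / window
        if credit_to_gdp[t] - trend > threshold:
            warnings.append(t)
    return warnings

# Quarterly credit-to-GDP ratio (%): stable, then a rapid credit expansion
series = [100, 100, 101, 101, 102, 102, 103, 103, 106, 110, 115, 121]
print(credit_gap_warnings(series))  # [8, 9, 10, 11]: the boom quarters
```

A real monitoring exercise would combine several such indicators, of the kind listed above (leverage, risk spreads, underwriting surveys), rather than rely on a single mechanical threshold.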

4.3. Dichotomy between monetary policy and financial stability policy

Before the crisis, the general equilibrium modelling frameworks at central banks did not incorporate financial frictions as
a major source of business cycle fluctuations, which naturally led to a dichotomy between monetary policy and financial sta-
bility policy in which these two types of policies were conducted separately. Monetary policy instruments would focus on
minimising inflation and output gaps. It would then be up to prudential regulation and supervision to prevent excessive risk-
taking that could promote financial instability.
However, as the discussion about how monetary policy should react to asset-price bubbles indicates, monetary policy and
financial stability policy are intrinsically linked to each other, and so the dichotomy between them is a false one. Monetary
policy can affect financial stability, while macro-prudential policies to promote financial stability can have an impact on
monetary policy. If macro-prudential policies are implemented to restrain a credit bubble, they will slow credit growth
and will slow the growth of aggregate demand. In this case, monetary policy may need to be easier in order to offset weaker
aggregate demand.
Alternatively, if policy rates are kept low to stimulate the economy, as is true currently, there is a greater risk that a credit
bubble might occur. This may require tighter macro-prudential policies to ensure that a credit bubble does not develop.
Coordination of monetary and macro-prudential policies becomes of greater value when all three objectives of price stability,
output stability and financial stability are to be pursued.
I have argued elsewhere (Mishkin, 2009 and in French et al., 2010) that the recent financial crisis provides strong support
for a systemic regulator and that central banks are the natural choice for this role. The benefits of coordination between mon-
etary policy and macro-prudential policy provide another reason for having central banks take on the systemic regulator
role. Coordination of monetary policy and macropudential policy is more likely to be effective if one government agency
is in charge of both. As anyone who has had the pleasure of experiencing the turf battles between different government
agencies knows, coordination of policies is extremely difficult when different entities control these policies.

4.4. Risk management and gradualism

As discussed earlier, a key element of the analysis of optimal monetary policy before the crisis was the linear-quadratic
framework in which financial frictions do not play a prominent role. Although the linear-quadratic framework might be rea-
sonable under normal circumstances, we have learned that financial disruptions can produce large deviations from these
assumptions, indicating that the linear-quadratic framework may provide misleading answers for monetary policy strategy
when financial crises occur.
The important role of nonlinearities in the economy arising from financial disruption suggests that policy-makers should
not focus only on modal outcomes, as they would in the certainty-equivalent world implied by the linear-quadratic
framework, but should also tailor their policies to cope with uncertainty and with the possible existence of tail
risks, in which there is a low probability of extremely adverse outcomes. I have argued elsewhere (Mishkin, 2010b) that
the importance of financial frictions and nonlinearities in the economy provides a rationale for a particular form of risk man-
agement approach to monetary policy.
What would this risk management approach look like? The first element of this approach is that monetary policy would
act pre-emptively when financial disruptions occur. Specifically, monetary policy would focus on what I have referred to as
macroeconomic risk (Mishkin, 2010b) – that is, an increase in the probability that a financial disruption will cause significant
deterioration in the real economy through the adverse feedback loop described earlier, in which the financial disruption
causes a worsening of conditions in the credit markets, which causes the economy to deteriorate further, causing a further
worsening of conditions in the credit markets, and so on. Monetary policy would aim at reducing macroeconomic risk by
cutting interest rates to offset the negative effects of financial turmoil on aggregate economic activity. In so doing, monetary
policy could reduce the likelihood of a financial disruption setting off an adverse feedback loop. The resulting reduction in
uncertainty could then make it easier for the markets to collect the information that facilitates price discovery, thus hasten-
ing the return of normal market functioning.
To achieve normal market functioning most effectively, monetary policy would be timely, decisive, and flexible. First,
timely action, which is pre-emptive, is particularly valuable when an episode of financial instability becomes sufficiently
severe to threaten the core macroeconomic objectives of the central bank. In such circumstances, waiting too long to ease policy
could result in further deterioration of the macroeconomy and might well increase the overall amount of easing that would
eventually be required to restore the economy to health. When financial markets are working well, monetary policy can
respond primarily to the incoming flow of economic data about production, employment, and inflation. In the event of a
financial disruption, however, pre-emptive policy would focus on indicators of market liquidity, credit spreads, and other
financial market measures that can provide information about sharp changes in the magnitude of tail risk to the macroecon-
omy. Indeed, even if economic indicators were strong, monetary policy would act to offset the negative impact of the finan-
cial disruption.
Second, policy-makers would be prepared for decisive action in response to financial disruptions. In such circumstances,
the most likely outcome (the modal forecast) for the economy may be fairly benign, but there may also be a significant risk of
more severe adverse outcomes. In this situation the central bank can take out insurance by easing the stance of policy further
than if the distribution of probable outcomes were perceived as fairly symmetric around the modal forecast. Moreover, in
such circumstances, the monetary policy authorities can argue that these policy actions do not imply a deterioration of
the central bank’s assessment of the most likely outcome for the economy, but rather constitute an appropriate form of risk
management that reduces the risk of particularly adverse outcomes.
Third, policy flexibility is especially valuable throughout the evolution of a financial market disruption. During the onset of
the episode, this flexibility may be evident from the decisive easing of policy that is intended to forestall the contractionary
effects of the disruption and provide insurance against the downside risks to the macroeconomy. However, it is important to
recognize that in some instances financial markets can also turn around quickly, thereby reducing the drag on the economy
as well as the degree of tail risk. Therefore, the central bank would monitor credit spreads and other incoming data for signs
of financial market recovery and, if necessary, take back some of the insurance; thus, at each stage of the episode, the appro-
priate monetary policy may exhibit much less smoothing than would be typical in other circumstances.
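The ‘‘insurance” logic of decisive action can be illustrated with a toy expected-loss calculation. The scenario gaps, their probabilities, and the assumption that a one-point rate cut closes the output gap one-for-one are made up for illustration; the point is that with a quadratic loss the optimal cut responds to the probability-weighted mean outcome, so even a small tail risk justifies extra easing when the modal forecast is benign.

```python
# Toy risk-management calculation (illustrative, not from the paper).
# The central bank picks a rate cut c to minimize E[(gap + c)^2], where the
# output gap under each scenario is assumed to be shifted one-for-one by
# the cut. Closed form: c* = -E[gap].

def optimal_cut(scenario_gaps, probs):
    expected_gap = sum(p * g for p, g in zip(probs, scenario_gaps))
    cut = -expected_gap
    return cut + 0.0  # normalizes IEEE -0.0 to 0.0 for display

# Symmetric risks around a benign modal forecast (gap = 0): no extra easing
symmetric = optimal_cut([-1.0, 0.0, 1.0], [0.25, 0.5, 0.25])

# Same modal forecast, but a 10% chance of a severe financial disruption:
# a positive "insurance" cut is optimal even though the mode is unchanged
with_tail_risk = optimal_cut([-6.0, 0.0, 1.0], [0.10, 0.65, 0.25])

print(round(symmetric, 2), round(with_tail_risk, 2))  # 0.0 0.35
```

Taking back the insurance when the tail risk dissipates corresponds, in this sketch, to the tail scenario's probability falling back toward its symmetric level, which drives the optimal cut back to zero.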
The risk management approach outlined here is one that abandons the prescription of the linear-quadratic framework
that the optimal monetary policy would involve gradual changes. Instead, with this approach aggressive actions by central
banks to minimise macroeconomic risk would result in pre-emptive, large changes in monetary policy. This was an impor-
tant feature of the conduct of conventional monetary policy by the Federal Reserve during the crisis. In September 2007, just
after the initial disruption to financial markets in August, the Federal Reserve lowered the federal funds rate target by 50
basis points (0.5 percentage point) even though the economy was displaying substantial positive momentum, with real
GDP growth quite strong in the third quarter. The Federal Reserve was clearly not reacting to current economic conditions,
but rather to the downside risks to the economy from the financial disruption. Subsequently, the Federal Reserve very
rapidly brought the federal funds rate target from its level of 5¼% before the crisis, in September 2007, to 2% in April
2008. Then, after the Lehman Brothers collapse in September 2008, the Federal Reserve began another round of rapid interest
rate cuts, with the federal funds rate target lowered by 75 basis points in December 2008, bringing it down to the zero lower
bound. Clearly, the Federal Reserve had abandoned gradualism.14
One danger from aggressive, pre-emptive actions that are taken as part of the risk management approach is that they
might create the perception that the monetary policy authorities are too focused on stabilising economic activity and not
enough on price stability. If this perception occurs, the pre-emptive actions might lead to an increase in inflation expecta-
tions. The flexibility to act pre-emptively against a financial disruption presupposes that inflation expectations are well
anchored and unlikely to rise during a period of temporary monetary easing. To work effectively, the risk management
approach outlined here thus requires a commitment to a strong nominal anchor. A risk management approach therefore pro-
vides an additional rationale for a flexible inflation targeting framework, and, as I have argued elsewhere (Mishkin, 2008a), a
strong nominal anchor can be especially valuable in periods of financial market stress, when prompt and decisive policy
action may be required as part of a risk management approach in order to forestall an adverse feedback loop.

4.5. International monetary policy coordination

There is a long literature on international policy coordination (see the survey in Frankel, 2016). The conclusions on how
beneficial this coordination is are mixed. The prevailing view in central banks before the global financial crisis was that
under normal conditions, the benefits of international policy coordination were small. Hence, monetary policy should focus
on domestic considerations; that is, monetary policy that seeks to stabilize both domestic inflation and output would produce
the most desirable economic outcomes. This, of course, did not mean that international conditions were irrelevant to policy
decisions: clearly what is happening to exchange rates and foreign demand for domestic goods and services impacts the
domestic inflation rate and aggregate output. However, central banks typically did not worry about international developments
over and above their effects on domestic inflation and output. The result was that international policy coordination
between central banks was quite rare.

14 One period before the crisis when the Federal Reserve abandoned gradualism was during the LTCM (Long-Term Capital Management) episode, when it
lowered the federal funds rate target by 75 basis points within a period of a month and a half in the autumn of 1998. This action fits into the risk management
approach described here. However, once the shock dissipated, the Federal Reserve did not take away the insurance provided by the funds rate cuts, as the risk
management approach outlined here suggests would have been appropriate. I consider this to be one of the serious monetary policy mistakes made by the
Federal Reserve under Greenspan. Not only did inflation subsequently rise above the desired level, but the actions also indicated that the Federal Reserve would
react asymmetrically to shocks, lowering interest rates in the event of a financial disruption, but not raising them upon reversal of the adverse shock. This
helped contribute to the belief in the ‘‘Greenspan put” that will be discussed below.
Central bankers were, however, aware that disruption to financial markets in other countries could create financial insta-
bility in their own countries. The lesson from the crisis that international linkages in financial markets have become stronger
and that financial disruptions in other countries can have very large impacts on the domestic economy suggests that mon-
etary policy coordination when financial markets are threatened is far more necessary.
During the global financial crisis there were several instances of major monetary policy coordination. During the crisis
there was a major demand for dollar liquidity by banks in countries outside the United States. Central banks in these coun-
tries could not produce dollar liquidity on their own because they only have control of liquidity denominated in domestic
currency. This shortage of dollar liquidity outside of the United States had the potential to further disrupt financial markets
in these countries, which then could have bounced back to further disrupt U.S. financial markets. To prevent this from hap-
pening, the Federal Reserve and central banks in other countries arranged swap lines (Federal Reserve loans of dollar depos-
its to these central banks in exchange for deposits denominated in their currencies). On December 12, 2007, the Federal Reserve
announced a swap line with the European Central Bank and the Swiss National Bank in the amount of $24 billion. Then in the
aftermath of the Lehman Brothers collapse, on September 18, 2008, the Fed announced a $180 billion expansion of swap
lines not only to the ECB and the Swiss National Bank, but also to the Bank of Canada, the Bank of England, and the Bank
of Japan. On September 29, 2008, the Fed announced another massive expansion of the swap lines, to the tune of $330 billion,
increasing the total swap lines available to $620 billion. On October 13, 2008, the Fed then announced that the swap lines
with the ECB, the Bank of England and the Swiss National Bank would no longer be limited to a fixed amount; rather, they
would enable these central banks to provide dollar funding to banks in ‘‘quantities sufficient to meet demand”. By the
time the crisis was over, an additional nine central banks participated in swap lines with the Federal Reserve: the Reserve
Bank of Australia, the Banco Central do Brasil, Danmarks Nationalbank, the Bank of Korea, the Banco de Mexico, the Reserve
Bank of New Zealand, Norges Bank, the Monetary Authority of Singapore, and the Sveriges Riksbank. Remarkably, these swap
facilities were not only extended to central banks in advanced economies, but also to central banks in emerging market
countries such as Mexico and Brazil. These swap lines were crucial to the recovery of financial markets in these countries,
and importantly helped emerging market countries to escape from serious adverse effects from the global financial crisis.
Although international coordination of liquidity provision during a financial crisis has historical precedents before the
recent crisis, there was an extraordinary act of international monetary policy coordination on October 8, 2008. On that date,
the Federal Reserve, the Bank of Canada, the Bank of England, the European Central Bank, the Sveriges Riksbank and the
Swiss National Bank announced a coordinated reduction of the policy interest rate in their respective countries. There are
two ways to understand why it made sense to pursue this unprecedented monetary policy coordination. First, increased
international financial linkages, which made the financial crisis a global one, led to a large common shock to these countries.
Expansionary monetary policy was therefore warranted in all these countries and having it coordinated had the potential to
make these interest rate cuts even more effective by showing that all these central banks understood the need to continue
pursuing expansionary policy to stimulate the economy. Furthermore, as emphasized in Mishkin (2008b), monetary policy
needs to be preemptive in order to lower risk premiums and thereby help contain the adverse feedback loop inherent in
financial crises, in which the rise in risk premiums causes the economy to contract, which raises risk premiums, which
causes the economy to contract, and so on. Coordinated interest rate cuts promote market expectations that these central
banks would stay ahead of the curve and so help stabilize financial markets.
Central banks have viewed these instances of international monetary policy coordination to be a great success (e.g., see
Bernanke, 2008). Hence, the global financial crisis has convinced central bankers that there is an increased need for interna-
tional monetary policy coordination to stabilize financial markets when they experience financial disruptions. However, the
case for international policy coordination during more normal times when financial markets are performing well is much
less clear.

4.6. Fiscal dominance and monetary policy

The key fact driven home by the recent financial crisis, that financial crises are often followed by fiscal crises, indicates that
the view that ‘‘Inflation is always and everywhere a monetary phenomenon” requires modification. Before the crisis, central
banks, at least in advanced countries, could take the view that governments would pursue a long-run budget balance so that
the ratio of government debt to GDP would be at sustainable levels. In the aftermath of the crisis, we have seen a huge
explosion in government debt, either because of decreased revenue and increased government spending to stimulate the
economy, as in the United States, or because of bailouts of the financial sector, as in Ireland and Spain. This has raised
the prospect that governments may no longer be able or willing to pay for their spending with future taxes. This means
that the government's intertemporal budget constraint will have to be satisfied either by issuing monetary liabilities or, alterna-
tively, by a default on the government debt.
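The logic of this constraint can be stated compactly. In a standard textbook formulation (the notation is mine, not from the paper), with a constant real discount factor \(\beta\), real primary surpluses \(s\), money stock \(M\) and price level \(P\), the real value of outstanding nominal debt \(B\) must equal the expected present value of surpluses plus seigniorage:

```latex
\frac{B_{t-1}}{P_t} \;=\; \sum_{j=0}^{\infty} \beta^{j}\,
  E_t\!\left[\, s_{t+j} \;+\; \frac{M_{t+j} - M_{t+j-1}}{P_{t+j}} \,\right]
```

Fiscal dominance is the case in which the path of surpluses \(s_{t+j}\) is fixed too low: the equality must then be restored either through the seigniorage term (monetization, and hence inflation) or by writing down the left-hand side through default.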
This situation in which government budget deficits are out of control is described as fiscal dominance because the mon-
etary authorities no longer will be able to pursue monetary policies that will keep inflation under control. If a default occurs,
the resulting collapse in the value of the domestic currency leads to high inflation, and this is the experience we have seen in
many emerging market countries, Argentina in 2002 being one recent prominent example. Even when countries are in a currency
union where they do not have their own currency, default is likely to lead to an expulsion from the currency union, and
the subsequent depreciation of the newly created domestic currency will then result in high inflation. Indeed, this is the
prospect that currently faces Greece, where a disorderly default would result in an exit from the Eurozone with not only high
inflation, but also a total collapse of the banking system.
If default does not occur, fiscal dominance still results in high inflation even if the central bank does not want to pursue
inflationary policies and has a strong commitment to an inflation target. It is still true that inflation will have a monetary
element because high-powered money will increase, so in that sense the famous adage still holds; this is the situation that
Sargent and Wallace (1981) described in their celebrated paper as ‘‘unpleasant monetarist arithmetic.” Fiscal dominance will at
some point in the future force the central bank to monetize the debt, so even tight monetary policy in the present will not
prevent inflation. Indeed, as Sargent and Wallace (1981) point out, tight monetary policy might result in inflation being
even higher.
To see how this would play out in the current context, we need to recognize that fiscal dominance puts a central bank
between a rock and a hard place. If the central bank does not monetize the debt, then interest rates on the government debt
will rise sharply, causing the economy to contract. Indeed, under fiscal dominance, the lack of monetization may result in the
government defaulting on its debt, which would lead to a severe financial disruption and thereby an even more severe economic
contraction. Hence, the central bank will in effect have little choice and will be forced to purchase the government debt and
monetize it, eventually leading to a surge in inflation.
We already are seeing the beginning of this scenario in Europe. The threat of defaults on sovereign debt in countries such
as Ireland, Portugal, Spain and Italy led the ECB to purchase individual countries’ sovereign debt, with the eventual
announcement in September 2012 that if necessary it will engage in what it has called Outright Monetary Transactions
(OMT). These OMT transactions involve purchases of sovereign debt in the secondary markets of these countries subject
to their governments accepting a program of conditionality from the European Financial Stability Facility/European Stability Mechanism.
The ECB describes these transactions as monetary in nature because they ‘‘aim at safeguarding an appropriate monetary
policy transmission,” with the reasoning that they are ‘‘monetary” because low ECB policy rates are not translating into low
interest rates in these countries. Nonetheless, these transactions are in effect monetization of individual countries’ govern-
ment debt (even if they are sterilized for the Eurosystem as a whole). The ECB’s purchase of individual countries’ sovereign
debt arises from the difficult position it faces. If the ECB does not do what ECB President Mario Draghi has described as ‘‘doing
whatever it takes” to lower interest rates in these countries, the alternative is deep recessions in these countries or outright
defaults on their debt that would create another ‘‘Lehman moment” in which the resulting financial shock would send
the Eurozone over the cliff.
It is true that the ECB’s bond purchasing programs will not result in inflation if the sovereigns whose debt is being pur-
chased get their fiscal house in order, and so fiscal dominance is avoided. However, this is a big if. Indeed, there is a danger
that Europe may find itself with what I will refer to as the ‘‘Argentina problem.” Argentina has had a long history of fiscal
imbalances that have led to high inflation, and this continues to this day. The problem in Argentina is that its provinces over-
spend and are always bailed out by the central government. The result is a permanent fiscal imbalance for the central gov-
ernment, which then results in monetization of the debt by the central bank and high inflation. Europe could be facing the
same problem. With bailouts of sovereigns in the Eurozone, the incentives to keep fiscal policy sustainable in individual
countries have been weakened, leading to a serious moral hazard problem. Budget rules have been proposed to eliminate
this moral hazard, but as the violation of the Stability and Growth Pact rules by Germany and France a number of years
ago illustrates, these budget rules are very hard to enforce. However, we have seen success in some countries in this respect,
with Chile being a notable example.
Thus, the Eurozone has the possibility of becoming more like Argentina (which of course is why Germans are horrified),
with fiscal dominance a real possibility, and high inflation the result. This possibility is a very real one despite what the
Maastricht Treaty specifies about the role of the ECB and what policymakers in the ECB want.
Although the United States is not in nearly as dire a situation because the no-bailout policy for state and local govern-
ments that has evolved over many years avoids the ‘‘Argentina problem,” the possibility of fiscal dominance is real. The
U.S. government is fully capable of avoiding fiscal dominance and achieving long-run fiscal sustainability by reining in
spending on entitlements (Medicare/Medicaid and Social Security) while increasing tax revenue (but not necessarily tax
rates). Indeed, one such plan was proposed by the Simpson-Bowles Commission appointed by President Obama. However,
when the Commission's recommendations were announced, President Obama did not embrace them, nor did the Republican
Party, which refused to consider any increase in tax revenue. The fact that in the 2016 election candidates for President from
either party have not described serious plans to rein in entitlements is, to say the least, very discouraging.
There has been a great deal of attention paid to the Federal Reserve’s quantitative easing policies as a potential threat to
price stability in the United States. The concern is that the expansion of the Federal Reserve’s balance sheet, as a result of
quantitative easing, will unhinge inflation expectations and thus create inflation in the near future. However, the far greater
threat is on the fiscal front. If U.S. government finances are not put on a sustainable path, we could see the scenario I have
outlined above, where markets lose confidence in U.S. government debt, so that bond prices fall and interest rates shoot up, and
then the public might expect the Federal Reserve to be forced to monetize this debt. What would then unhinge inflation
expectations would be the fear of fiscal dominance, which could then drive up inflation very quickly.
The bottom line is that no matter how strong the commitment of a central bank to an inflation target, fiscal dominance
can override it. Without long-run fiscal sustainability, no central bank will be able to keep inflation low and stable. This is
why central bankers must lobby both in public and in private to encourage their governments to put fiscal policy on a sus-
tainable path.

4.7. Forward guidance

During normal times, the monetary authorities conduct monetary policy using conventional tools, principally by conducting
open market operations in short-term government debt in order to set a short-term policy rate, such as the federal funds
rate in the United States. However, with the zero-lower-bound constraint becoming binding in so many advanced economies
in recent years, central banks have had to adopt nonconventional monetary policy tools, including: (1) liquidity provision in
which central banks expand lending to both banks and other financial institutions; (2) asset purchases of both government
securities and private assets to lower borrowing costs for households; (3) quantitative easing, in which central banks greatly
expand their balance sheets; and (4) forward guidance, in which central banks manage expectations by announcing a path
for future policy rates. Because of space considerations, I am not going to discuss the first three policy tools, but rather will
focus on recent research that I have been engaged in on forward guidance (Feroli et al., 2016).

4.7.1. Theory and evidence

As discussed earlier, optimal monetary policy involves a central bank's commitment to a target criterion that trades off
deviations of inflation from its target level against the output gap, the deviation of output from potential. Optimizing
this target criterion then results in the setting of the policy instrument, such as the federal funds rate, which reacts to the
current and expected future states of the economy. Forward guidance that helps the public understand how the central bank
sets its policy instrument, that is, its reaction function, can improve monetary policy performance because it leads the public
to form the right expectations about future policy.
To see why, consider a negative shock to aggregate demand when both the inflation gap and output gap are at zero. The
result would be that both the inflation and output gaps would turn negative in the future and an optimal monetary policy
reaction function would indicate that the federal funds rate path would be lowered. If the Federal Reserve’s reaction function
is well understood by the public, then without the Fed taking any actions, expectations of the future federal funds rate would
decline, which would result in lower longer-term interest rates and stimulate the economy. The result would then be an
immediate offset to the negative aggregate demand shock which would help stabilize the economy.
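The channel running from expected future policy rates to longer-term rates can be summarized by the expectations hypothesis of the term structure; this equation is a standard illustration in generic notation, not drawn from Feroli et al. (2016):

```latex
% n-period rate = average expected short rate over n periods + term premium
i_t^{(n)} = \frac{1}{n}\sum_{k=0}^{n-1} E_t\, i_{t+k} + tp_t^{(n)}
```

When the reaction function is well understood, the negative demand shock immediately lowers each expected short rate $E_t\, i_{t+k}$, so the long rate $i_t^{(n)}$ falls before the central bank changes the current policy rate at all.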
Another way of stating this result is that successful central bank communication about the monetary policy reaction function would enable the markets to do a lot of the work for the central bank. If the monetary policy reaction to shocks is predictable, expectation dynamics work to tighten or loosen financial conditions appropriately when there are shocks to the economy.
One way to provide information about the monetary policy reaction function is for the central bank to conduct data-based
forward guidance, that is, provide information on the future path of the policy rate conditional on the data that is expected
over the policy horizon. This means not only providing information on the policy path given the central bank’s forecast, but
also to indicate how that path changes if and when the central bank’s forecast changes.
The second type of forward guidance is time-based forward guidance in which a central bank commits to set the policy
rate at specific levels at specific calendar dates. An extreme version of time-based forward guidance would be a central bank
committing not to raise interest rates from their current level for several years. Such a commitment would ignore incoming
information, which is why this type of forward guidance is described as time-based.
There is an important subtle issue about the benefits of a central bank communicating a predictable policy reaction func-
tion. At first glance, the analysis seems to provide a very strong argument for a central bank adopting an instrument rule like
the Taylor rule. After all, a Taylor rule is a very simple way of specifying a predictable monetary policy reaction function.
However, the theory of optimal monetary policy suggests that the policy reaction function changes over time, either as mon-
etary policymakers learn more about how the economy works or when the structure of the economy changes. Furthermore,
the policy reaction function might need to be modified when there are unforeseen contingencies that were previously not
part of the reaction function, but now need to be introduced into the reaction function. Judgement should also certainly be a
feature of optimal policy as demonstrated by Svensson (2005) and should also be part of a monetary policy reaction function.
A Taylor rule, which does not change over time, can therefore be far from an optimal policy.
Unlike a Taylor rule, data-dependent forward guidance can be consistent with optimal monetary policy, but this requires
that it changes if the optimal monetary policy reaction function changes. This requires that projections of the future policy
path not only must be altered when forecasts of the economy change, but also when the central bank has reasons to expect
that the model of the economy is changing. Data-dependent forward guidance thus requires substantial communication to
explain not only the past policy reaction function, but also any reasons for changes in the reaction function. Explaining how
and why the policy reaction function might be changing, a requirement of data-dependent forward guidance, is by no means
an easy task. As a result, it might be hard to credibly communicate data-based forward guidance.
Consider what optimal, data-dependent forward guidance might have looked like when the global financial crisis started
in August of 2007. At the time, inflation was rising and the economy was still growing rapidly in the third quarter. The Fed-
eral Reserve dramatically deviated from its previous reaction function, which was not too far off from a Taylor rule, by
aggressively cutting the federal funds rate even before the economy and inflation had turned down. If the Fed had been pro-
viding forward guidance, it would have needed to explain that the disruption to financial markets required a change in the
policy reaction function, with much easier monetary policy in the future in response to financial shocks than had been antic-
ipated earlier. If this communication led to the markets understanding that there had been a shift in the policy reaction func-
tion, longer-term interest rates would have fallen more rapidly in response to news that the financial disruption was getting
worse. This would have helped effective monetary policy be even more expansionary than it otherwise would have been,
helping offset some of the negative shocks to the economy from the ongoing financial crisis.
The central argument of Feroli et al. (2016) is that Federal Reserve communication in recent years has relied too heavily
on time-based forward guidance. There are three main disadvantages to time-based forward guidance. The first is that time-
based forward guidance can lead to bad expectations dynamics by market participants, which can in turn reduce the sensi-
tivity of interest rates to macroeconomic news. In many circumstances, this is the opposite of what the central bank is trying
to accomplish. Second, time-based forward guidance may lower uncertainty, thereby encouraging leverage which might promote financial instability. Third, time-based forward guidance can constrain central bankers in a sub-optimal fashion, which can then lead to market confusion and a reduction in the credibility of the central bank. We look at each of these in turn.

Expectation dynamics

The benefits of forward guidance in setting the correct expectations dynamics occur only if the
forward guidance is data-dependent, and not if it is time-dependent. Forward guidance that is data-dependent is conditional
on the state of the economy. As the state of the economy changes, the projected path of the policy rate should change as well.
For example, if there is a strong employment report in which there is higher job growth and stronger growth of wages so that
the Fed and the market’s forecast of real GDP growth and inflation rises, then the projected policy path and longer-term
interest rates should shift upwards in order to stabilize output and inflation.
If instead the forward guidance is time-dependent (the Fed says that the federal funds rate will be set to particular values at particular dates), then when the inflation and output forecasts rise, there is no change in the policy path. Now the inflation shock does not lead to an automatic effective tightening of monetary policy.
Indeed, time-dependent forward guidance can lead to expectation dynamics that make things even worse. Again consider
the situation in which the positive employment report leads to expectations that inflation will be higher than previously
expected. With time-dependent forward guidance, the projected policy path does not change, but expected inflation rises.
This means that the expected path of future real interest rates, policy interest rates minus expected inflation, now declines.
The effect of the positive employment report shock is then an effective easing of monetary policy, the opposite of what would be an optimal monetary policy response.
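The perverse easing works through the Fisher relation; the following stylized numbers are hypothetical, chosen only to make the arithmetic concrete:

```latex
% Ex ante real rate = nominal policy rate - expected inflation
r_t = i_t - E_t\,\pi_{t+1}
```

If time-based guidance holds the expected nominal path fixed at, say, $i = 0\%$ while expected inflation rises from $1\%$ to $2\%$, the expected real rate falls from $-1\%$ to $-2\%$: an effective easing at exactly the moment optimal policy calls for tightening.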
This undesirable feature of time-dependent forward guidance is exactly the same problem created by the zero lower
bound for the policy rate, as discussed in Eggertsson and Woodford (2003). They point out that when the policy rate is at the zero lower bound, a negative aggregate demand shock leads to
a decline in expected inflation and therefore a rise in real interest rates, which further weakens aggregate demand. Negative
aggregate demand shocks when the zero lower bound is binding therefore can lead to prolonged economic downturns. Time-
dependent forward guidance creates a similar problem because, just as occurs when the policy rate is at the zero lower
bound, a negative aggregate demand shock leaves the projected future path of the policy rate unchanged, so that real interest
rates rise, thereby propagating the negative aggregate demand shock further.
Another way of stating the above argument is that data-dependent forward guidance leads to beneficial expectation
dynamics, while time-dependent forward guidance leads to perverse expectation dynamics. Does empirical evidence sup-
port the theory that time-based forward guidance leads to bad expectation dynamics because it leads to interest rates
becoming insensitive to macroeconomic news? Feroli et al. (2016) find that the answer is yes.
Using the methodology developed by Swanson and Williams (2014), they evaluate how responsive interest rates were to economic news when the Federal Reserve used time-based forward guidance, data-based forward guidance, or no forward guidance. Chart 3.3 from Feroli et al. (2016), reproduced as Fig. 1 below, shows that time-based forward guidance is associated with lower sensitivity of interest rates to macroeconomic news at all of the maturities examined.
Further, the result in Fig. 1 is not driven by the zero-lower bound constraint during the post Great-Recession period. Even
excluding the zero lower bound period, the sensitivity of interest rates to macroeconomic news is lower during periods in
which FOMC communication on forward guidance is more strongly time-dependent.

Uncertainty and leverage

The knowledge that monetary policy actions are certain at a given date may lull the markets
into thinking there is less uncertainty in the economy than is actually the case. The result may be an underassessment of risk,
leading to excessive risk taking. Indeed, the almost total predictability of FOMC actions from 2004 to 2006 was associated
with very low risk premiums in credit markets. The predictability of monetary policy in this period may therefore have con-
tributed to the excessive risk taking that ultimately helped trigger the global financial crisis. Empirical evidence in Feroli
et al. (2016) supports this conjecture. Time-based forward guidance is associated with lower volatility of interest rates,
and this lower volatility is associated with higher leverage of hedge fund clients of a large prime brokerage.

Time-based forward guidance can box in monetary policy and weaken central bank credibility

Another disadvantage of
time-based forward guidance is that it effectively boxes in the central bank, when new data suggests a need to revise the
policy path. One side of the box is that there may be a tendency to stick to a previously announced path. A particularly cogent
example is the period from 2004 to 2006 when the FOMC announced that ‘‘policy accommodation can be removed at a pace that is likely to be measured,” and then raised the federal funds rate at seventeen consecutive FOMC meetings by exactly 25 basis points. Because policy actions at each meeting were not reacting to current data, these actions were almost surely not consistent with an optimal reaction function. Indeed, monetary policy during this period has been subject to severe criticism. In 2007 and 2008 inflation overshot any reasonable estimate of the Fed’s desired inflation objective. Some critics (e.g., Taylor, 2007) have even argued that monetary policy during this period was the primary cause of the housing bubble, whose collapse helped bring on the most severe financial crisis since the Great Depression.

Fig. 1. Sensitivity of interest rates to economic news and forward guidance, 2001–2015. Source: Feroli et al. (2016).
The other side of the box occurs if a central bank decides to deviate from a previously announced policy path. In this case,
markets may take the view that the central bank has flip-flopped and broken its word, which damages the central bank’s
credibility. Feroli et al. (2016) examine three recent episodes of time-based forward guidance in which the Federal Reserve backtracked on its previously announced policy path: the June 2013 taper tantrum, when markets reacted negatively to unexpected news that the Federal Reserve would curtail purchases of long-term securities; and the September 2013 and September 2015 FOMC meetings, for which market expectations of monetary policy tightening at the September meeting had been set up earlier in the year, but the tightening did not then occur. In these three instances, the Federal Reserve received particularly low scores for communication in a survey of primary dealers conducted by the Federal Reserve Bank of New York.

Time-based forward guidance is easier to explain

Time-based forward guidance, however, does have one potential advantage over data-based guidance. Data-based guidance can be very hard to explain because it is not always easy to
describe the monetary policy reaction function, and this is particularly true when the monetary policy authorities are not
responding directly to quantifiable economic data, but rather to judgement about less quantifiable factors that could have
an important impact on the economy. The possible lack of clarity of data-based forward guidance may sometimes make
it ineffective, either because the market does not understand it or does not find it credible. Time-based forward guidance,
on the other hand, is easy to explain and is much clearer. Also its simplicity makes it more credible because it is easier to
assess whether it is being carried out or not. Time-based guidance is not only more easily understood, but also for that reason
more powerful than data-based guidance. For example, strong time-based forward guidance in both August 2011 and October 2015 shifted market expectations of future interest rates dramatically.

Summary: Lessons about the effectiveness of forward guidance

The analysis above can be summarized by the following six lessons about the effectiveness of forward guidance.

1. Data-based forward guidance has desirable expectation dynamics which allow markets to do a lot of the work for central banks.
2. Time-based forward guidance has undesirable expectation dynamics which can amplify negative shocks.
3. Empirical evidence shows a weaker response of interest rates to macroeconomic news when there is time-based forward guidance.
4. Empirical evidence finds that time-based forward guidance results in lower uncertainty, and although at times this might
be desirable when the economy requires more stimulus, it does lead to higher leverage which could make the financial
system less stable.
5. Time-based forward guidance has sometimes put the Federal Reserve in a box leading either to inappropriate monetary
policy (2004–2006) or a view that the Fed has flip-flopped, leading to confusion and weakening of its credibility (June
2013, September 2013, September 2015).
6. Time-based forward guidance, however, does have the advantage that it can be more powerful because it is easily understood.

4.7.2. Recommendations to improve forward guidance

The lessons above provide guidance as to how Fed communication about forward guidance can be improved. Feroli et al.
(2016) discuss four possible suggestions to improve communication about forward guidance.

1. Time-based forward guidance should be used only in very unusual circumstances: (1) when the zero-lower-bound on monetary policy is binding and more expansionary monetary policy is required, and (2) when all other efforts to communicate the central bank’s reaction function to markets have been unsuccessful. However, time-based forward guidance should not be used merely because market forecasts of economic outcomes differ from the Federal Reserve’s forecasts.

The lessons above suggest that time-based forward guidance has several undesirable attributes. Not only does it lead to
undesirable expectation dynamics, but it puts the monetary policy authorities in a box, in which they either stick to the time-
based forward guidance and pursue inappropriate policies, or alternatively deviate from this forward guidance, which can
cause confusion and weaken the Fed’s credibility. These undesirable characteristics of time-based forward guidance might lead to the conclusion that time-based forward guidance should never be used.
However, in unusual circumstances, such as when monetary policy is constrained by the zero-lower-bound and it needs to be far
more expansionary, time-based forward guidance might be the most effective monetary policy tool available to stimulate
the economy. Other policy tools may have undesirable consequences: e.g., quantitative easing expands the Fed’s balance sheet in a way that could lead to problems in the future (see Greenlaw et al., 2013), while data-based forward guidance may be less
effective and/or less credible because it is less easily understood. In situations like this, it may be better to pursue time-based
forward guidance than doing nothing at all. In this light, time-based guidance may have been called for and appears to have been used effectively during the 2009–2013 period when the zero-lower bound was binding and yet slack in the economy was
very large and the inflation rate was way too low. The Federal Reserve needed to stimulate the economy and the time-based
forward guidance used at the time was employed effectively alongside quantitative easing to lower long-term interest rates
and stimulate the economy.
Is time-based forward guidance ever justified when the zero lower bound is not binding? This is a more controversial
question. Time-based forward guidance should not be used just because the central bank’s forecasts of the economy disagree
with the market’s forecasts. However, time-based forward guidance away from the zero lower bound could be justified when
the market’s perception of the central bank’s reaction function is incorrect, and all other efforts by the central bank to com-
municate its reaction function have failed. There are dangers in following this approach, because it may be hard to distin-
guish whether the market disagreement with the central bank on the future policy path is the result of differences in
forecasts on economic outcomes or the difference in views on the central bank’s reaction function. In view of this uncertainty,
central banks should exercise extreme caution before using time-based forward guidance for this purpose.

2. Data-based forward guidance in which there is a projected path of policy rates may be too hard to explain and make credible, so
it might be better not to do this type of forward guidance at all and instead revert to a weaker form of forward guidance.

Data-based forward guidance with a projected path of policy rates is a set of guidelines provided by the central bank that
explains what interest rates would be expected to prevail given different possible future economic circumstances. Such data-
based forward guidance creates desirable expectation dynamics that encourages markets to do some heavy lifting for the
central bank, for example by immediately easing financial conditions when the economy is hit by negative shocks. However,
this desirable feature of data-based forward guidance depends on two big ifs: It only produces desirable expectation dynam-
ics if it is clearly understood by markets and if it is credible. Another way of saying this is that data-based forward guidance is
darn hard to do.
Not only is data-dependent forward guidance hard to do, but, as we have seen, once there is a projected path of policy rates, the markets and media may not get this even if the central bank clearly states that the actual path depends on the data outcomes. Trying to make forward guidance data-dependent may therefore not work, because it may always be interpreted as time-dependent. Pursuing forward guidance, even when data-dependence is intended, may then lead to the undesirable expectation dynamics associated with time-dependence.
In addition, because data-based forward guidance in which there is a projected policy path is hard to explain, it is not
clear that this form of data-based forward guidance will provide more information on the monetary policy reaction function
than no forward guidance when the zero-lower-bound is not binding. Another way of saying this is that actions may speak
louder than words so data-based forward guidance using a projected policy path might not be more effective than no guid-
ance at all. Fig. 1 provides some support for this view because it shows that longer-term interest rate reactions to data are
just as strong when there is data-based forward guidance as when there is no forward guidance at all.
Given these problems, Feroli et al. (2016) suggest that it might be better not to provide forward guidance on the future
policy path at all. Indeed some central banks, such as the Bank of Canada, have argued that forward guidance of this type
should be abandoned in normal times. Governor Carney (2012) stated that ‘‘Overall research has not generally found that
publishing a path leads to better outcomes”; while Governor Poloz (2014) stated that ‘‘Essentially, the net effect of dropping
forward guidance is to shift some of the policy uncertainty from the central bank’s plate back onto the market’s plate, a more
desirable situation in normal times.”
The Bank of Canada has avoided providing explicit forward guidance on the future path of the policy rate, with one notable exception: in April 2009 it committed to keep its policy rate at 0.25% for a period of time. However, this period was one
where the zero-lower-bound was binding and the Canadian economy was weak and inflation too low, so the use of time-
based forward guidance can be justified.
However, there is a weaker form of data-based forward guidance, which has been used by the Federal Reserve in its reg-
ular post-meeting statements since their inception in 1999. This does not involve a projected path of policy rates but does
specify a ‘‘balance of risks” that is tied to specific economic outcomes and serves as an implicit ‘‘policy bias.” The advantage
of this approach is that it provides some forward guidance in the near term, but has less risk of a market misinterpretation
that it is a time-based commitment. Because it does not directly discuss the future policy path, this weak form of forward
guidance may convey less information about the reaction of future policy to incoming economic data. This approach could be
improved by reverting to the more explicit policy bias or directive tilt formulation that was initially introduced in 1999 but
soon dropped in favor of the less explicit balance of risks formulation. The balance of risks refers to economic conditions, and
on some occasions these risks have been conflicting. For example, in the late summer and fall of 2007 as the financial crisis
was growing, the Committee saw both upside risks to inflation and downside risks to growth. While it did give some indi-
cation of how these risks were balanced, a clearer signal to the markets would have been to say more explicitly how it saw
the current policy bias, or which direction it expected to see policy move if a change were to occur at the next meeting.

3. Make forward guidance more data-dependent by emphasizing the uncertainty around the policy path and how the path would
change with economic outcomes.

Despite the difficulties of doing data-dependent forward guidance with a future policy rate path successfully, there are
two arguments for central banks to stick with this form of forward guidance, but make it more effective.
First, with the publication of the projected policy rate path, a central bank is stuck with this form of forward guidance.
One problem with increases in transparency is that they can never be taken back. Once the increase in transparency occurs,
going back on it is likely to be viewed by the public and the politicians as an attempt to hide something. This would be par-
ticularly true in the current political environment in which the Federal Reserve is continually under attack. One example is
the taping of the FOMC meetings and publication of transcripts five years later. Mishkin (2004) has argued that transparency
can go too far and that publication of the transcripts has been detrimental to good policymaking. Not being able to take back
transparency means that the policy rate projections are here to stay and the Fed cannot avoid forward guidance because
these projections will be interpreted as such.
Second, data-based forward guidance with a projected policy path can provide more information about the policy reac-
tion function than no forward guidance at all. Clearly, when the zero-lower-bound is binding so there is no available action
on the policy rate, there is nothing to be gleaned from a central bank’s policy actions about its reaction function. However,
even when the zero-lower-bound is not binding, the information about the policy reaction function can only be obtained
over time as more data on policy actions become available. Furthermore, there are times when either unforeseen circum-
stances or learning about how the economy works requires a change in the reaction function. Deriving the policy reaction
function from past data would then be misleading about the current policy reaction function. Data-based forward guidance
using a policy path rate, in contrast to no guidance, can provide information on changes in the policy reaction function
because of unforeseen events or changes in a central bank’s view of how the economy works.
But, as discussed, data-based forward guidance which provides a projected policy path is hard to do and may lead to
interpretation as time-based. Feroli et al. (2016) make two recommendations as to how forward guidance can be improved
to avoid these problems.
First, any discussion of data-based forward guidance requires that the public and markets understand that there is
tremendous uncertainty about the outcomes of the actual policy path because of uncertainty about future economic data.
One excellent approach to doing so is that used by the central bank of Norway, the Norges Bank. The Norges Bank does provide a baseline projected policy path, but it also provides a fan chart showing the confidence intervals around the baseline
policy path. However, the governance structure of the Federal Reserve System makes providing such a fan chart very difficult. There are up to nineteen participants (seven governors and twelve Federal Reserve Bank presidents) in the FOMC meetings that make policy decisions. It would be extremely difficult to derive a probability distribution for the path of future policy
rates from these participants.
Nevertheless, even if a fan chart for the future path of the policy rate is impossible to produce, Federal Reserve officials
could provide far more communication on how uncertain the future policy path actually is. Indeed, as is true for any prob-
ability distribution, Federal Reserve officials could emphasize that the probability that the actual policy path will match the
median of the FOMC participants’ policy path is necessarily near zero. Fed communication by individual FOMC participants,
particularly the Chair, should provide far more information on the uncertainty about where future policy rates might be.
Indeed, one possibility is that individual FOMC participants could provide information about how uncertain they are about
their views of where policy rates should be in the future.
Second, information about how the policy path might change if data comes in differently than expected would provide far
more information about the policy reaction function than is currently provided. The Norges Bank does this by providing sev-
eral scenarios as to how the policy path would change when economic outcomes change. Again, because the FOMC cannot
speak with one voice, it might be up to FOMC participants to describe these different scenarios. Alternatively, the FOMC
might delegate to the Chair to provide information on how the committee’s view of the future policy path might change
under different scenarios for data outcomes.

5. Concluding remarks

The global financial crisis has led both academic economists and central bankers to rethink how monetary policy should
be conducted. In this lecture I have argued that rethinking is needed in seven areas: how inflation targeting should be con-
ducted, how monetary policy should respond to asset price bubbles, whether the dichotomy between monetary policy and
financial stability policy still holds, how risk management should be pursued, how fiscal dominance can impact monetary
policy, whether there should be more international monetary policy coordination to cope with financial disruptions, and
how forward guidance on the path of future policy rates can be improved.


References

Adrian, T., Shin, H.S., 2009. Money, liquidity and monetary policy. Am. Econ. Rev. 99 (2), 600–605.
Adrian, T., Shin, H.S., 2010. Financial Intermediation and Monetary Economics. Federal Reserve Bank of New York Staff Report, No 398 (revised May).
Adrian, T., Moench, E., Shin, H.S., 2010. Macro Risk Premiums and Intermediary Balance Sheet Quantities. Federal Reserve Bank of New York Staff Report, No
Bank of England, 2009. The Role of Macroprudential Policy. Discussion Paper, November.
Bean, C., Paustian, M., Penalver, A., Taylor, T., 2010. Monetary Policy After the Fall. Federal Reserve Bank of Kansas City, Jackson Hole Symposium.
Benigno, P., Woodford, M., 2003. Optimal monetary and fiscal policy: a linear-quadratic approach. In: Gertler, M., Rogoff, K. (Eds.), NBER Macroeconomics
Annual 2003. MIT Press, Cambridge, Mass, pp. 271–332.
Bernanke, B.S., 2004. ‘‘Gradualism”, Speech Delivered at an Economics Luncheon Co-Sponsored by the Federal Reserve Bank of San Francisco (Seattle
Branch) and the University of Washington, Held in Seattle, 20 May.
Bernanke, B.S., 2008. Policy coordination among central banks. In: Speech Given at the Fifth European Central Bank Central Bank Conference: The Euro at
Ten: Lessons and Challenges, Frankfurt, Germany, November 14, 2008 <https://www.federalreserve.gov/newsevents/speech/bernanke20081214a.htm>.
Bernanke, B.S., 2010. Monetary policy and the housing bubble. In: Speech Given at the Annual Meeting of the American Economic Association, Atlanta,
Georgia, 3 January 2010 <http://www.federalreserve.gov>.
Bernanke, B.S., Gertler, M., 1999. Monetary policy and asset price volatility. In: Federal Reserve Bank of Kansas City conference New Challenges for Monetary
Policy. Kansas City, pp. 77–128.
Bernanke, B.S., Gertler, M., 2001. Should central banks respond to movements in asset prices? Am. Econ. Rev., 91(May), 253–257 (Papers and Proceedings).
Bernanke, B.S., Gertler, M., Gilchrist, S., 1999. The financial accelerator in a quantitative business cycle framework. In: Taylor, J.B., Woodford, M. (Eds.),
Handbook of Macroeconomics, vol. 1(Part 3). North-Holland, Amsterdam, pp. 1341–1393.
Blanchard, O., Dell’Ariccia, G., Mauro, P., 2010. Rethinking Macroeconomic Policy. IMF Staff Position Note, 12 February, SPN/10/03.
Boivin, J., Lane, T., Meh, C., 2010. Should monetary policy be used to counteract financial imbalances? Bank Canada Rev. (Summer), 23–36
Borio, C., English, W.B., Filardo, A.J., 2003. A Tale of Two Perspectives: Old or New Challenges for Monetary Policy? BIS Working Paper, No 127, Bank for
International Settlements, Basel, February.
Borio, C., Lowe, P., 2002. Asset Prices, Financial and Monetary Stability: Exploring the Nexus. BIS Working Paper, No 114, Bank for International Settlements,
Basel, July.
Borio, C., Zhu, H., 2008. Capital Regulation, Risk-Taking and Monetary Policy: A Missing Link in the Transmission Mechanism? BIS Working Paper, No 268, Bank for International Settlements, Basel.
Cargill, T.F., Hutchison, M.M., Ito, T., 1995. Lessons from financial crisis: the Japanese case. In: Proceedings, Federal Reserve Bank of Chicago, May, pp. 101–
Carney, M., 2012. Remarks to the CFA Society Toronto, Toronto, Ontario, December 11th <http://www.bis.org/review/r121212e.pdf>.
Cecchetti, S., Genberg, H., Lipsky, J., Wadhwani, S., 2000. Asset Prices and Central Bank Policy. Geneva Reports on the World Economy, No 2. Centre for
Economic Policy Research, London, July.
Clarida, R., Galí, J., Gertler, M., 1998. Monetary policy rules and macroeconomic stability: evidence and some theory. Quart. J. Econ. 115 (February), 147–180.
Clarida, R., Galí, J., Gertler, M., 1999. The science of monetary policy: a new Keynesian perspective. J. Econ. Lit. 37 (December), 1661–1707.
Cochrane, J.H., 2009. How Did Paul Krugman Get It So Wrong? University of Chicago manuscript, 16 September <http://faculty.chicagobooth.edu/
Coenen, G., Orphanides, A., Wieland, V., 2004. Price stability and monetary policy effectiveness when nominal interest rates are bounded at zero. Adv.
Macroecon. 4 (1).
Curdia, V., 2016. Is There a Case for Inflation Overshooting? Federal Reserve Bank of San Francisco, Economic Letter, 2016–04, February 16.
Delis, M.D., Kouretas, G., 2010. Interest Rates and Bank Risk-Taking. Munich Personal RePEc Archive, MRPA Paper No 20132, January.
Ditmar, R., Gavin, W.T., Kydland, F.E., 1999. The inflation-output variability tradeoff and price-level targets. Federal Reserve Bank St. Louis Rev., 23–31.
Ditmar, R., Gavin, W.T., Kydland, F.E., 2000. What do new-Keynesian Phillips curves imply for price-level targeting? Federal Reserve Bank St. Louis Rev., 21–
Dupor, B., 2005. Stabilizing non-fundamental asset price movements under discretion and limited information. J. Monet. Econ. 52 (May), 727–747.
The Economist, 2009. The Other-Worldly Philosophers, 16 July, available at <http://www.economist.com/node/14030288>.
Eggertsson, G.B., Woodford, M., 2003. The zero bound on interest rates and optimal monetary policy. Brook. Pap. Econ. Activity 1, 139–211.
Eggertsson, G.B., Woodford, M., 2004. Policy options in a liquidity trap. Am. Econ. Rev. 94 (2), 76–79.
English, W.B., Nelson, W.R., Sack, B.P., 2003. Interpreting the significance of the lagged interest rate in estimated monetary policy rules. Contrib. Macroecon. 3 (1), 5.
Erceg, C.J., Henderson, D.W., Levin, A.T., 2000. Optimal monetary policy with staggered wage and price contracts. J. Monet. Econ. 46 (October), 281–313.
Feroli, M., Greenlaw, D., Hooper, P., Mishkin, F.S., Sufi, A., 2016. Language After Liftoff: Fed Communication Away from the Zero Lower Bound. U.S. Monetary
Policy Forum, February 26, 2016 <https://research.chicagobooth.edu/igm/usmpf/2016.aspx>.
Fischer, S., 2015. Macroprudential Policy in the U.S. Economy. In: Speech Given at the ‘‘Macroprudential Monetary Policy” 59th Economic Conference,
Federal Reserve Bank of Boston, Boston Mass., October 2, 2015 <http://www.federalreserve.gov/newsevents/speech/fischer20151002a.htm>.
Frankel, J.A., 2016. International Coordination. NBER Working Paper 21878 (January).
French, K.R. et al, 2010. The Squam Lake Report: Fixing the Financial System. Princeton University Press, Princeton, NJ.
Friedman, M., 1963. Inflation: Causes and Consequences. Asia Publishing House, New York.
Gambacorta, L., 2009. Monetary policy and the risk-taking channel. BIS Quart. Rev. (December), 43–53.
Giannoni, M.P., Woodford, M., 2005. Optimal inflation-targeting rules. In: Bernanke, B.S., Woodford, M. (Eds.), Inflation Targeting. University of Chicago
Press, Chicago, pp. 93–172.
Goodfriend, M., King, R.G., 1997. The new neoclassical synthesis and the role of monetary policy. In: Bernanke, B.S., Rotemberg, J.J. (Eds.), NBER
Macroeconomics Annual. MIT Press, Cambridge, Mass., pp. 231–283.
Greenlaw, D., Hamilton, J.D., Hooper, P., Mishkin, F.S., 2013. Crunch Time: Fiscal Crises and the Role of Monetary Policy. U.S. Monetary Policy Forum (Chicago:
Chicago Booth Initiative on Global Markets, 2013), pp. 5–60.
Greenspan, A., 2002. Opening Remarks. In: Federal Reserve Bank of Kansas City Economic Symposium Rethinking Stabilization Policy, pp. 1–10.
Greenspan, A., 2003. Opening Remarks. In: Federal Reserve Bank of Kansas City Economic Symposium Monetary Policy and Uncertainty: Adapting to a
Changing Economy, pp. 1–7.
Gruen, D., Plumb, M., Stone, A., 2005. How should monetary policy respond to asset price bubbles? Int. J. Central Bank. 1 (December), 1–31.
Hamilton, J.D., 1989. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57 (March), 357–384.
Ioannidou, V., Ongena, S., Peydró, J.-L., 2009. Monetary Policy, Risk-Taking and Pricing: Evidence from a Quasi-Natural Experiment. European Banking Centre
Discussion Paper, No 2009-04S.
Jiménez, G., Ongena, S., Peydró, J.-L., Saurina, J., 2008. Hazardous Times for Monetary Policy: What Do Twenty-Three Million Bank Loans Say About the
Effects of Monetary Policy on Credit Risk-Taking? Working Paper No 0833, Bank of Spain.
Jinushi, T., Kuroki, Y., Miyao, R., 2000. Monetary policy in Japan since the late 1980s: delayed policy actions and some explanations. In: Mikitani, R., Posen, A.
S. (Eds.), Japan’s Financial Crisis and Its Parallels to U.S. Experience. Institute for International Economics, pp. 115–148.
Kashyap, A.K., Stein, J.C., 1994. Monetary policy and bank lending. In: Mankiw, N.G. (Ed.), Monetary Policy, National Bureau of Economic Research, Studies in
Business Cycles, vol. 29. University of Chicago Press, Chicago, pp. 221–256.
Keister, T., 2010. Bailouts and Financial Fragility. Federal Reserve Bank of New York unpublished manuscript.
Kim, C.-J., Morley, J., Piger, J., 2005. Nonlinearity and the permanent effects of recessions. J. Appl. Economet. 20 (2), 291–309.
Kim, C.-J., Nelson, C., 1999. Has the U.S. economy become more stable? A Bayesian approach based on a Markov-switching model of the business cycle. Rev.
Econ. Stat. 81 (November), 608–616.
Kindleberger, C.P., 1978. Manias, Panics, and Crashes: A History of Financial Crises. Basic Books, New York.
King, R.G., Wolman, A.L., 1999. What should the monetary authority do when prices are sticky? In: Taylor, J. (Ed.), Monetary Policy Rules. University of
Chicago Press, Chicago, pp. 349–398.
Kohn, D., 2006. Monetary policy and asset prices. In: Speech Delivered at ‘‘Monetary Policy: A Journey from Theory to Practice”, a European Central Bank
Colloquium Held in Honour of Otmar Issing, Frankfurt, 16 March.
Krugman, P., 2009. How Did Economists Get it So Wrong? New York Times Magazine, 2 September <http://www.nytimes.com>.
Levin, A.T., Onatski, A., Williams, J.C., Williams, N., 2005. Monetary policy under uncertainty in micro-founded macroeconometric models. In: Gertler, M.,
Rogoff, K. (Eds.), NBER Macroeconomics Annual 2005. MIT Press, Cambridge, Mass., pp. 229–287.
Lucas, R.E., 2009. In Defense of the Dismal Science. The Economist, 6 August <http://www.economist.com/node/14165405>.
Mishkin, F.S., 2001. The transmission mechanism and the role of asset prices in monetary policy. In: Aspects of the Transmission Mechanism of Monetary
Policy, Focus on Austria 3–4/2001. Oesterreichische Nationalbank, Vienna, pp. 58–71.
Mishkin, F.S., 2004. Can central bank transparency go too far? In: The Future of Inflation Targeting. Reserve Bank of Australia, Sydney, pp. 48–65.
Mishkin, F.S., 2007. Housing and the monetary transmission mechanism. In: Finance and Economics Discussion Series, No. 2007–40, Board of Governors of
the Federal Reserve System, Washington, September <www.federalreserve.gov>.
Mishkin, F.S., 2008a. Whither federal reserve communication. In: Speech Delivered at the Peterson Institute for International Economics, Washington, DC,
28 July <www.federalreserve.gov>.
Mishkin, F.S., 2008b. Monetary policy flexibility, risk management, and financial disruptions. In: Speech Given at the Federal Reserve Bank of New York, New
York, NY, 11 January <http://www.federalreserve.gov/newsevents/speech/mishkin20080111a.htm>.
Mishkin, F.S., 2009. Will monetary policy become more of a science? In: Deutsche Bundesbank (Ed.), Monetary Policy Over Fifty Years: Experiences and
Lessons. Routledge, London, pp. 81–107.
Mishkin, F.S., 2010a. The Economics of Money, Banking, and Financial Markets, 11th ed. Addison-Wesley, Boston.
Mishkin, F.S., 2010b. Monetary policy flexibility, risk management, and financial disruptions. J. Asian Econ. 23, 242–246.
Mishkin, F.S., 2011. Over the cliff: from the subprime to the global financial crisis. J. Econ. Perspect. 25 (1), 49–70.
Orphanides, A., 2003. The quest for prosperity without inflation. J. Monet. Econ. 50 (April), 633–663.
Poloz, S., 2014. Integrating Uncertainty and Monetary Policy-Making: A Practitioner’s Perspective. Bank of Canada Discussion Paper, October.
Posen, A.S., 2003. It takes more than a bubble to become Japan. In: Richards, A., Robinson, T. (Eds.), Asset Prices and Monetary Policy. Reserve Bank of
Australia, Sydney, pp. 203–249.
Rajan, R.G., 2005. Has Financial Development Made the World Riskier? In: The Greenspan Era: Lessons for the Future. Federal Reserve Bank of Kansas City,
Kansas City, pp. 313–369.
Rajan, R., 2006. Has finance made the world riskier? Euro. Finan. Manage. 12 (4), 499–533.
Reifschneider, D., Williams, J.C., 2000. Three lessons for monetary policy in a low-inflation era. J. Money, Credit Bank. 32 (November (Part 2)), 936–966.
Reinhart, C.M., Reinhart, V.R., 2010. After the fall. In: Macroeconomic Challenges: The Decade Ahead. Federal Reserve Bank of Kansas City Economic
Symposium <http://www.kansascityfed.org>.
Reinhart, C.M., Rogoff, K.S., 2009. This Time Is Different: Eight Centuries of Financial Folly. Princeton University Press, Princeton, NJ.
Rotemberg, J., Woodford, M., 1997. An optimization-based econometric framework for the evaluation of monetary policy. In: Bernanke, B.S., Rotemberg, J.J.
(Eds.), NBER Macroeconomics Annual 1997. MIT Press, Cambridge, Mass, pp. 297–346.
Sack, B., 2000. Does the fed act gradually? A VAR analysis. J. Monet. Econ. 46 (August), 229–256.
Sargent, T.J., Wallace, N., 1981. Some unpleasant monetarist arithmetic. Federal Reserve Bank Minneapolis Quart. Rev. 5 (3), 1–17 (Fall).
Schmitt-Grohé, S., Uribe, M., 2005. Optimal fiscal and monetary policy in a medium-scale macroeconomic model. In: Gertler, M., Rogoff, K. (Eds.), NBER
Macroeconomics Annual 2005. MIT Press, Cambridge, Mass., pp. 383–425.
Smets, F., Wouters, R., 2003. An estimated dynamic stochastic general equilibrium model of the euro area. J. Euro. Econ. Assoc. 1 (September), 1123–1175.
Svensson, L.E.O., 1997. Optimal inflation targets, ‘conservative’ central banks, and linear inflation contracts. Am. Econ. Rev. 87 (March), 98–114.
Svensson, L.E.O., 1999. Price-level targeting versus inflation targeting: a free lunch. J. Money, Credit Bank. 31, 277–295.
Svensson, L.E.O., 2001. The zero bound in an open economy: a foolproof way of escaping from a liquidity trap. Monet. Econ. Stud. 19 (S1), 277–312.
Svensson, L.E.O., 2005. Monetary policy with judgment: Forecast targeting. Int. J. Cent. Bank. 1 (1), 1–54.
Swanson, E.T., Williams, J.C., 2014. Measuring the effect of the zero lower bound on medium- and longer-term interest rates. Am. Econ. Rev. 104 (10), 3154–
Taylor, J., 2007. Housing and monetary policy. In: Housing, Housing Finance and Monetary Policy. Federal Reserve Bank of Kansas City, Kansas City, pp. 463–
Tinbergen, J., 1939. Business Cycles in the United States of America: 1919–1932, Statistical Testing of Business Cycle Theories, vol. 2. League of Nations.
Tirole, J., Farhi, E., 2009. Collective Moral Hazard, Maturity Mismatch and Systemic Bailouts. NBER Working Paper Series, No 15138.
Turner, P., 2010. Central Banks and the Financial Crisis. BIS Papers, No 51. Bank for International Settlements, Basel, pp. 21–25.
Vestin, D., 2000. Price Level Targeting Versus Inflation Targeting in a Forward Looking Model. mimeo. IIES, Stockholm University, May.
Vestin, D., 2006. Price-level versus inflation targeting. J. Monet. Econ. 53 (7), 1361–1376.
White, W., 2004. Making macroprudential concerns operational. In: Speech Delivered at a Financial Stability Symposium organised by the Netherlands
Bank, Amsterdam, 26 October, available at <http://www.bis.org>.
White, W.R., 2009. Should Monetary Policy ‘Lean or Clean’? Federal Reserve Bank of Dallas Working Paper, No 34, August.
Williams, J.C., 2014. Monetary Policy at the Zero Lower Bound: Putting Theory into Practice. Hutchins Center on Fiscal and Monetary Policy at Brookings,
January 16.
Wilson, L., Wu, Y.W., 2010. Common (stock) sense about risk-shifting and bank bailouts. Fin. Markets Portfolio Mgmt. 24 (1), 3–29.
Woodford, M., 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press, Princeton.
Woodford, M., 2012. Methods of policy accommodation at the interest-rate lower bound. In: The Changing Policy Landscape. Kansas City, Federal Reserve
Bank of Kansas City, pp. 185–288.