
CAPITAL MARKETS RESEARCH

JUNE 2012

MODELING
METHODOLOGY

Public Firm Expected Default Frequency (EDF™)


Credit Measures: Methodology, Performance, and
Model Extensions

Authors
Zhao Sun, PhD, CFA
David Munves, CFA
David T. Hamilton, PhD

Contact Us
Americas
+1.212.553.1658
clientservices@moodys.com
Europe
+44.20.7772.5454
clientservices.emea@moodys.com
Asia (Excluding Japan)
+852.2916.1121
clientservices.asia@moodys.com
Japan
+81.3.5408.4100
clientservices.japan@moodys.com

Summary
Moody's Analytics' public firm Expected Default Frequency (EDF™) metrics are forward-looking
probabilities of default, available on a daily basis for over 30,000 corporate and financial entities. The
public firm EDF model belongs to a class of credit risk models referred to as structural models. Their basic
assumption is that there is a causal, economically motivated reason why firms default. Default is highly
likely to occur when the market value of a firm's assets is insufficient to cover its liabilities at some future
date; in other words, when it is insolvent. This balance sheet approach to measuring risk means that the
EDF model shares common ground with fundamental credit analysis. However, a big advantage of EDF
measures over the traditional approach is that they utilize market information, so they provide both
timely warning of changes in credit risk and an up-to-date view of a firm's value. In this paper we present
the methodology behind the EDF model on both an intuitive and a rigorous quantitative basis. We also
review the historical performance of the model, and discuss recently developed EDF extensions for
applications such as calculating regulatory capital and economic stress testing.
The Median EDF for North American Firms that Defaulted 2008-2010


Table of Contents

1 Introduction
2 Measuring Default Risk
   2.1 Assets, Liabilities, and Negative Net Worth
   2.2 The Public EDF Model in Action
3 Public EDF Model Mechanics
   3.1 The Theoretical Underpinnings of the EDF Model
   3.2 Estimating Asset Value and Asset Volatility
      3.2.1 The Economic Intuition of the Asset Value and Asset Volatility Equations
      3.2.2 Estimating Empirical Asset Volatility
      3.2.3 Modeled Asset Volatility
   3.3 Calculating the Default Point
   3.4 Moving from DD to EDF
4 The Term Structure of Default Risk
   4.1 Building EDF Term Structure Using DD Migration
   4.2 Extrapolating EDF Term Structure Beyond Five Years
5 Public EDF Model Performance
   5.1 Rank Order Power
   5.2 Level Validation
   5.3 Early Warning Power
6 Conclusion
Appendix A Extensions to the Public EDF Model
   A.1 Credit Default Swap Implied (CDS-I) EDF Measures
   A.2 Through-the-Cycle EDF (TTC EDF) Measures
   A.3 Stressed EDF Measures
Appendix B Measuring Rank Order Power Using the Cumulative Accuracy Profile and Accuracy Ratio


1 Introduction

Measuring a firm's probability of default is one of the central problems of credit risk analysis. Probabilities of default (PDs) are fundamental
inputs into many types of credit risk management processes at the single name and portfolio level, as well as in the pricing and hedging of
credit risk. Moody's Analytics' public firm Expected Default Frequency (EDF™) model has been the industry-leading PD model since it was
introduced in the early 1990s (see box below). Since that time, the model has undergone considerable development, and continues to
provide unequalled coverage of public firms globally on a daily basis.
The EDF model is premised on the assumption that there is a causal, economically motivated reason why firms default. It estimates firms'
probabilities of default by directly modeling the evolution of the economic variables driving defaults. The two main factors determining a
firm's default risk in the EDF model are its financial risk and its business risk. The model therefore takes essentially the same approach to
credit risk as fundamental credit analysis. However, what distinguishes the EDF model is its market-based valuation of the economic drivers.
Financial markets are valuation engines that attempt to estimate the values and risks of companies as efficiently as possible. The
incorporation of market information into the EDF model gives it a high degree of early warning power for predicting defaults relative to
fundamental, financial statements-based analysis.
In this modeling methodology we first describe the theory behind the public EDF model, as well as its principal calculations. In Section 2 we
outline the theoretical basis of the EDF model, without resorting to a quantitative explanation. Readers with some familiarity with structural
credit risk models may choose to skip this section and head straight to Section 3, where we detail the mathematical structure, modifications,
and the theoretical and practical refinements employed by the EDF model relative to the basic structural model. The discussion up to this
point focuses on the construction of one-year EDFs. In Section 4 we describe how we build the term structure of EDFs to extend the default
prediction horizon from one year to ten years. In Section 5 we evaluate the historical performance of the EDF model, presenting both the
validation methodology and the performance statistics for a representative sample. Section 6 concludes.

The Public EDF Model: History and Facts


Public firm Expected Default Frequency (EDF) metrics are industry-leading probability of default estimates for publicly traded companies.
Moody's Analytics, a sister company to Moody's Investors Service, produces EDF measures for over 30,000 entities on a daily basis. The
firms are located in 98 countries, and run the gamut of corporate and financial sectors. We also produce term structures of EDF credit
measures ranging from one year to ten years. EDFs were first produced by KMV in the early 1990s. KMV was acquired by Moody's
Corporation in 2002, and was subsequently renamed Moody's KMV.
The EDF model[1] belongs to the class of structural credit risk models, which was pioneered in the 1970s by Robert Merton.[2] As we explain in
Sections 3.2 to 3.4, the model is a substantial modification of Merton's original approach: it incorporates more realistic assumptions and
enriches the model with empirical observations that better reflect real-world default dynamics.
EDF metrics range from 1 bp to 35%. They are cardinal measures of credit risk, i.e., they provide absolute default probability estimates rather
than just a rank ordering of risk. Thus, a 1% EDF has the same meaning at different stages of the economic cycle. More specifically, one out
of a hundred such entities (with the same one-year EDF) can be expected to default within the next 12 months. In contrast, the realized
default rate associated with a given level on a relative rank-order scale, for example, a Baa2 rating from Moody's Investors Service, will
exhibit significant variation over time.
While the core methodology of the EDF model has remained consistent over the years,[3] the model has undergone considerable
improvement over several model version updates. The current version is 8.0. Moreover, we have developed model extensions to meet risk
managers' evolving needs. Over the past two years we have introduced three new EDF variants, namely CDS-implied EDFs, Through-the-Cycle
EDFs, and Stressed EDFs. Brief descriptions of the methodologies behind these metrics are provided in Appendix A. Moody's Analytics
also produces EDFs for private firms, i.e., for entities without traded equity. These are delivered via the RiskCalc platform.[4] We deliver public
EDFs in a variety of ways. Perhaps best known is our web platform, CreditEdge. EDFs are also available through a daily Data File Service
(DFS), an XML interface, and soon, via an Excel add-in.
________________________________

[1] Moody's Analytics also produces private firm EDF models, which are not touched upon in this paper. In the paper we use the two terms "the public firm EDF
model" and "the EDF model" interchangeably.

[2] See Merton (1974).

[3] See Crosbie and Bohn (2003) and Dwyer and Qu (2007).

[4] For further information about private firm EDF metrics, please see Korablev et al. (2012).


2 Measuring Default Risk


2.1 Assets, Liabilities, and Negative Net Worth
A company usually defaults when its assets are worth less than its liabilities. This rule has guided credit professionals down the years, and it
provides the key conceptual underpinning for the public firm Expected Default Frequency (EDF) model. Indeed, the model equates the
probability that a firm's assets will be worth less than its liabilities with its probability of default. But this straightforward balance sheet-based
approach hides a number of uncertainties and challenges.

Defining Default
Central to the construction of default prediction models is the
question of what constitutes a default. Theoretical credit risk
models rarely distinguish between different types of defaults,
as if they were all homogeneous in nature. However, in practice,
identifying events of default can be challenging, especially in
jurisdictions where public disclosure of such events is neither
required nor practiced. Moreover, defaults vary in severity
across different types of credit events, from a delayed
payment to a filing for bankruptcy. For the purpose of
calibrating a credit risk model, the range of credit events
covered by a default definition directly affects the estimation
of the model's parameters.
In the public firm EDF model, four broad types of credit
events are considered defaults:

- Missed payment
- Bankruptcy, administration, receivership, or legal equivalent
- Distressed debt restructuring
- Government bailouts enacted to prevent a credit event

In the EDF model, common equity is assumed to be the only
loss-absorbing layer of a firm's capital structure before
default is triggered (see Section 3.2). This means any broken
promises on obligations senior to common equity are also
considered defaults. This default threshold is different from
that pertaining to CDS contracts, which are often written on
senior debt and are more limited in scope. Also, for banks,
the loss buffer used in the EDF model is much thinner than
that represented by a bank's total regulatory capital, which
can include preferred shares, hybrid securities, and some types
of subordinated debt, in addition to common equity.
Firms that would have defaulted were it not for government
aid received are considered defaults in the EDF model. But
deciding which firms fall into this category can be tricky. For
example, during the financial crisis many US banks were
required to accept bailout funds, even though they were not
considered to be at immediate risk of default. We do not
consider such banks to have defaulted.

First, how do we decide what a company's assets are worth? "Look at the
balance sheet" is the obvious answer. Analyzing financial statements is
central to traditional credit analysis, but it also has significant limitations.
Assets are carried on a firm's balance sheet at their historic acquisition
costs, i.e., their book values, less accumulated depreciation. At the time of
an asset's purchase, its cost certainly bears a relationship to its hoped-for
ability to generate cash flow in the future; a company will pay more for an
asset if it thinks it will make a lot of money. But while an asset's potential
to generate cash usually changes in subsequent years, its balance sheet
value does not. US automakers underwent a 40-year decline, starting in
the 1970s. Their car manufacturing plants were clearly generating less
cash than they used to, yet nowhere was this reflected in the companies'
balance sheets: the factories' stated valuations remained serenely
unchanged. Assets can also produce a lot more cash than suggested by
their historic costs. The Coca-Cola Company's secret formula for Coke is
an oft-cited example of this.
Markets provide a better alternative for estimating the value of a firm.
Whatever the asset, markets are forward looking, efficient at assimilating
information, and encapsulate the collective wisdom of many investors.
The problem is that a company's assets do not trade on any market. But
in many cases, its shares do. Conceptually, a firm's equity and its assets
are valued in the same way: its asset value is the present value of future
cash flows generated by the assets, while its equity value is the present
value of the portion of total cash flows received by equity holders, i.e.,
after the company's debts are serviced. In this way there is an indirect link
between a company's equity value, which we can measure, and its asset
value, which we cannot.
At least at the theoretical level, using the market value of a company's
assets is an advance over its book values, but it only solves part of the
problem. To assess the probability of default for a firm at some point in
the future, we need to derive the probability that the market value of its
assets will be worth less than its liabilities. This is a function of two
factors. The first is the difference, in dollar terms, between the market
value of a company's assets and the book value of its liabilities, as we
discussed above. The second is the volatility of the firm's assets. Greater
volatility translates into higher default risk simply because there is a
greater chance that a big downward move in the market value of assets
will put it below the book value of liabilities. Asset volatility reflects a
firm's business risk, which is determined by factors such as its industry and
its investment decisions, and is largely independent of financial decisions.
All else being equal, companies with more volatile assets will have greater
probabilities of default. Thus, firms in industries with steady operating
environments, such as utilities or defense, tend to exhibit lower default
risk compared to firms in volatile industries, such as technology. For the
same reasons that a firm's asset value is not directly observable, we
cannot measure its asset volatility. Hence, it must also be estimated by
the EDF model.


By now readers will recognize that the public EDF model incorporates the two classic drivers of fundamental credit analysis: financial risk, or
leverage (in terms of the difference between the market value of assets and the book value of liabilities), and business risk, which is measured
by asset volatility (the higher the volatility, the higher the business risk). In addition, the EDF model framework allows for a clear
understanding of the consequences of changes in these two primary drivers on a firm's default risk.


The Corporate Finance Intuition Behind Structural Credit Risk Models


Structural credit risk models such as the public firm EDF model and fundamental credit analysis share many of the same key approaches to
measuring credit risk. Both can be traced back to the analysis of company balance sheets, and they share a common economic intuition in
determining a firm's creditworthiness. The key difference between the two lies in their valuation principles: market value-based for the
former versus accounting value-based for the latter.
Fundamentally oriented credit analysts rely on a company's financial reports, and use them to derive a number of key statistics, typically
measuring things like profitability and financial leverage, to predict the likelihood that cash flow generated from the firm's assets, either
through regular operations or asset sales, will be sufficient to cover its debt obligations. When the assets' cash flows and interest expenses
are converted to their present values, this is essentially a comparison of the firm's asset values with its debt values. The fact that financial
reporting is subject to accounting rules increases the objectivity and comparability of financial ratios. The drawback of this approach is
that historical values are recorded and carried over from one period to the next. These historical values often deviate substantially from their
current or expected values, so any projections derived from historical figures may serve as a poor guide to the company's future prospects.
The fact that company financial reports are updated quarterly at best does not help in providing timely, up-to-date credit assessments.
Structural credit models rely on the same economic intuition embedded in a company's balance sheet: as the equity share of the company's
total assets (i.e., its net worth) becomes thinner, the likelihood of default on its debt increases. Instead of the accounting value-based
balance sheets used in fundamental analysis, the model essentially reconstructs the firm's balance sheet based on its market values, as
described by the right hand side of Figure 1. This approach enjoys a number of advantages. First, a firm's future prospects with respect to its
growth, its profitability, and its ability to service its debt are all encapsulated in its market values. The credit analyst is no longer bound by
backward-looking accounting statements that extrapolate the company's historical growth into the future. Second, the constant fluctuation
of the firm's market value with the arrival of new information in the market allows the analyst to frequently update his assessment of the
firm's creditworthiness. Third, the structural model directly estimates the economic variables driving a firm's default process, such as its
asset value and asset volatility. As a result, the model's output is easy to interpret and diagnose. It also allows for the analysis of pro-forma
results or hypothetical scenarios. In contrast, fundamental credit analysis cannot measure these economic drivers directly; the accounting
value-based financial ratios are at best viewed as their proxies. The challenge with the structural credit risk model is that these economic
drivers are not directly observable. A large part of this methodology paper concerns the estimation of these quantities.
Figure 1 Credit Risk from a Balance Sheet Perspective (left panel: Accounting Value-Based Balance Sheet; right panel: Market Value-Based Balance Sheet)

2.2 The Public EDF Model in Action


We use two companies, Johnson & Johnson and RadioShack, to illustrate how the public EDF model works. Figure 2 shows their EDF
measures and drivers. J&J's market value of assets is around six times greater than the book value of its liabilities used in the model,
compared to a ratio of less than two for RadioShack. Within the framework of the EDF model, the portion of a firm's liabilities that is
relevant for triggering a default is referred to as the default point, consistent with the assumption that default is likely to occur when the
market value of assets falls below it. Another important concept used in the EDF model is market leverage, which is the ratio of the default
point to the market value of assets. J&J also has a lower level of asset volatility than RadioShack. This reflects J&J's strong market positions
in non-cyclical areas such as pharmaceuticals and healthcare, which contrast strongly with RadioShack's exposure to the fiercely competitive
and highly cyclical consumer electronics sector.


Figure 2 EDFs and Key Inputs for Johnson & Johnson and RadioShack (as of April 2012)

Metric/Input                    J&J       RadioShack
1-year EDF Measure              0.01%     3.58%
Default Point (DP)              $39bn     $1,042m
Market Value of Assets (MVA)    $236bn    $1,834m
Market Leverage (DP/MVA)        17%       57%
Asset Volatility                11%       24%

Figure 3 shows a stylized version of the mechanics of the EDF model as applied to J&J. As of April 2012, we seek to estimate J&J's
probability of default one year from now. As noted, we require three pieces of information to do this, two of which (the estimated market
value of J&J's assets and its default point) are shown in the figure. The third piece of information we need is asset volatility. In order to
estimate this, we must make an assumption about the future distribution of J&J's possible asset values. To keep matters simple, we assume
that J&J's future market value of assets is normally distributed.
Figure 3 Key Drivers of Johnson & Johnsons EDF (as of April 2012)

Given the assumption of a normal distribution, the probability of default is calculated as the area under the standard normal distribution
(denoted by Φ) below the default point, shown in red. More precisely,

Probability of Default = Pr(MVA < DP)

As Figure 3 shows, it would take a six standard deviation move in J&J's market value of assets for it to pierce the default point over the next
year. The EDF model uses the term Distance to Default (DD) for standard deviations, so J&J has a DD of six. In this case the probability of
default over the next twelve months under the normal distribution assumption is essentially zero ((1 − 99.9999998027%)/2, to be exact).
The firm's EDF credit measure of 0.01% is much higher (in fact, 100,000 times higher) than this probability, for two reasons. First, the actual
distribution of Distances to Default has a long and fat tail to the right relative to the normal distribution. This means that the actual
frequency of default is greater than suggested by a normal distribution, especially for high credit quality firms. This partly reflects the fact
that movements of asset values within sectors, and thus defaults, are positively correlated. The EDF model adjusts for the non-normal
distribution of DDs by means of an empirical mapping (i.e., one based on historical data) between DDs and actual default rates obtained
from our proprietary default database. The second reason is related to the first one: since defaults are so rare for high-quality firms, we are
unable to calibrate an exact empirical relationship between DDs and expected default rates beyond a certain DD level. So we bound the
lower level of EDFs at 0.01%. We discuss these issues further in Section 3.4.
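The mechanics of such an empirical DD-to-EDF mapping can be sketched as a calibrated lookup table with interpolation. The grid values below are purely illustrative assumptions, not Moody's Analytics' proprietary calibration; the sketch only shows how a mapping with the 1 bp floor and 35% cap would operate.

```python
import bisect

# Hypothetical DD-to-default-rate grid (illustrative numbers only, NOT the
# proprietary calibration): observed default frequencies fall as DD rises,
# floored at 1 bp (0.0001) and capped at 35% (0.35).
DD_GRID  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]
EDF_GRID = [0.35, 0.12, 0.036, 0.012, 0.004, 0.001, 0.0004, 0.0001]

def dd_to_edf(dd):
    """Map a distance to default to an EDF by linear interpolation on the
    empirical grid, enforcing the cap and floor at the grid's ends."""
    if dd <= DD_GRID[0]:
        return EDF_GRID[0]                     # 35% cap
    if dd >= DD_GRID[-1]:
        return EDF_GRID[-1]                    # 1 bp floor
    i = bisect.bisect_right(DD_GRID, dd) - 1
    w = (dd - DD_GRID[i]) / (DD_GRID[i + 1] - DD_GRID[i])
    return EDF_GRID[i] + w * (EDF_GRID[i + 1] - EDF_GRID[i])
```

In practice such an interpolation might well be done in log space, since default rates vary by orders of magnitude across DD levels.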
The situation is radically different for RadioShack (Figure 4). Here we can see the effect of a nasty combination of a narrow gap between the
market value of assets and the default point, and relatively high asset volatility. It will only take a two standard deviation move for RadioShack's
market value of assets to fall below its default point. This DD of 2 equates to a 2% probability of default, assuming normally distributed
future asset values. But the firm's actual EDF measure is 3.58%, reflecting the model's adjustment for the non-normal relationship between
DD and the probability of default.
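The normal-distribution arithmetic in these two examples is easy to reproduce. A minimal sketch, using Python's error function to build the standard normal CDF:

```python
import math

def normal_pd(dd):
    """Probability of default under the normality assumption: the area under
    the standard normal distribution below -DD standard deviations."""
    return 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))

print(normal_pd(6.0))   # J&J's DD of 6: ~1e-9, essentially zero
print(normal_pd(2.0))   # RadioShack's DD of 2: ~2.3%, close to the 2% cited
```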


Figure 4 Key Drivers of RadioShacks EDF (as of April 2012)

Experienced credit professionals who encounter the public EDF model for the first time often question the lack of familiar inputs, such as
those related to cash flow. Despite the absence of these factors, the model has proven to be very effective at identifying firms at risk of
default, and doing so in a timely manner (see Section 5 for details). There are at least two reasons for this. First, because the EDF model
utilizes market information, it reflects investors' expectations of the future value of a firm. Implicit in that expectation are all the factors
that affect it. Second, the structure of the EDF model transforms information from the equity market (which does not care about credit risk
per se) into credit-relevant signals. In this way, the two driving factors of the public EDF model, market leverage and asset volatility,
summarize almost all the necessary information required to estimate a firm's probability of default.
The discussion so far provides the basis for an economic understanding of how the EDF model works. This serves as a common platform for
various implementations of structural credit risk models. Over the years researchers at KMV and Moody's Analytics have combined their
modeling skills, numerical analysis techniques, and expertise in empirical estimation to transform the economic intuition into a powerful
quantitative tool of practical value. In the next section we move from the general model description we have provided so far to a more
quantitatively based and rigorous review of the model and its workings.

3 Public EDF Model Mechanics


3.1 The Theoretical Underpinnings of the EDF Model
The public firm EDF model represents a significant extension and advancement of the basic structural credit risk model.[5] The assumptions of
the basic structural model make it both too unrealistic and too problematic to use in practice. However, it is still worth starting our
description of the EDF model with a discussion of the basic Black-Scholes-Merton structural model, for two reasons. It provides us with a
good underpinning of the basic theory behind the EDF model, and it helps us to understand how much of an advance the EDF model
represents over the standard approach.
The basic structural credit risk model assumes that the market value of a firm's assets evolves according to the following stochastic process:[6]

dA_t = μ A_t dt + σ_A A_t dW_t    (1)

where μ is the expected growth rate of the firm's asset value, σ_A is the asset volatility, and W stands for a standard Brownian motion. For
simplicity, we assume a time horizon of one year. It follows from the geometric Brownian motion assumption that the annual log asset
value is normally distributed:
[5] The basic structural credit risk modeling approach is attributed to Black and Scholes (1973) and Merton (1974).

[6] Geometric Brownian motion, depicted in Equation (1), is used to describe how changes in asset value, dA_t, evolve over time. The drift term, μ A_t dt, describes
the expected growth rate of asset value over a short time interval dt; the volatility term, σ_A A_t dW_t, describes the uncertainty associated with the path
traveled by asset value: the instantaneous volatility of the asset return over the short interval is σ_A √dt.


ln A_T ~ N( ln A_0 + (μ − σ_A²/2)T , σ_A² T )    (2)
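Equation (2) can be verified by simulation: under geometric Brownian motion the terminal log asset value can be sampled in a single step, and its sample mean and variance should match ln A_0 + (μ − σ_A²/2)T and σ_A²T. A small sketch with illustrative parameter values (A_0 = 100, μ = 8%, σ_A = 20% are assumptions, not figures from the paper):

```python
import math
import random

def simulate_log_asset_values(A0, mu, sigma_A, T=1.0, n_paths=200_000, seed=42):
    """Draw terminal log asset values under geometric Brownian motion.
    ln A_T = ln A0 + (mu - sigma_A**2/2)*T + sigma_A*sqrt(T)*Z with Z ~ N(0, 1),
    so the terminal value can be sampled directly, with no path discretization."""
    rng = random.Random(seed)
    loc = math.log(A0) + (mu - 0.5 * sigma_A ** 2) * T
    scale = sigma_A * math.sqrt(T)
    return [loc + scale * rng.gauss(0.0, 1.0) for _ in range(n_paths)]

samples = simulate_log_asset_values(A0=100.0, mu=0.08, sigma_A=0.20)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean should land near ln(100) + 0.06, and var near 0.04, per equation (2)
```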

This is the origin of the distributional assumptions we made in Figure 3 and Figure 4. It also implies that if we know the values of the
necessary inputs, calculating a probability of default is straightforward. The probability that a normally distributed variable (y) falls below a
given value, call it x, is exactly equal to Φ([x − E(y)] / σ(y)), where Φ is the cumulative standard normal distribution. In this case, the relevant
value of x is the log of the default point, ln X (which we know from the firm's book value of liabilities), and we are interested in the likelihood
that the firm's log asset value falls below it, Pr(ln A_T < ln X). Plugging these quantities into the cumulative normal distribution gives

probability of default = Φ( [ln X − ln A_0 − (μ − σ_A²/2)T] / (σ_A √T) )    (3)

The negative of the quantity inside the brackets is what we previously defined as the distance to default (DD). Using this definition we can
simplify the notation considerably:

probability of default = Φ(−DD)    (4)

For ease of exposition, under some innocuous assumptions we can ignore the second term of the numerator of the DD definition to arrive at
a simpler expression for DD:

DD = (ln A_0 − ln X) / (σ_A √T)    (5)
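Equations (3)-(5) translate directly into code. The sketch below computes both the full DD of equation (4) and the simplified version of equation (5); for typical drift and volatility values the two differ only modestly, which is why dropping the drift term is fairly innocuous. The parameter values in the example are illustrative, not taken from the paper.

```python
import math

def dd_full(A0, X, sigma_A, mu, T=1.0):
    """Distance to default including the drift/convexity term, per the
    numerator of equation (3): ln(A0/X) + (mu - sigma_A**2/2)*T."""
    return (math.log(A0 / X) + (mu - 0.5 * sigma_A ** 2) * T) / (sigma_A * math.sqrt(T))

def dd_simple(A0, X, sigma_A, T=1.0):
    """Simplified distance to default, per equation (5): drift term dropped."""
    return (math.log(A0) - math.log(X)) / (sigma_A * math.sqrt(T))

def normal_pd(dd):
    """Equation (4): probability of default = Phi(-DD)."""
    return 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))

# Illustrative firm: asset value twice the default point, 25% asset vol, 8% drift
print(dd_simple(2.0, 1.0, 0.25))        # ~2.77 standard deviations
print(dd_full(2.0, 1.0, 0.25, 0.08))    # ~2.97 with the drift term included
```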

It is clear from this expression that the distance to default, which, as previously noted, is essentially a count of standard deviations,
encapsulates the aforementioned three key pieces of information: the market value of assets, the default point, and the asset volatility.
Using the terminology of fundamental credit analysis, the numerator of the DD equation captures the firm's financial leverage, and the
denominator captures its business risk. All three quantities were shown in Figure 3 and Figure 4. Intuitively, the distance to default of a firm
is the difference between its expected asset value at the horizon date and the default point, standardized by its business risk. DD summarizes all
this information into a single statistic providing a rank ordering of default risk. The probability of default in the basic structural model is
calculated as the area under the normal distribution below the default point.
In practice, we need to estimate the three primary inputs to the DD calculation. One of the key insights of the structural credit risk modeling
approach is how a company's unobservable asset value and asset volatility can be inferred from observable market information. Specifically,
a firm's asset value and asset volatility can be derived from its equity market returns. Essentially, equity holders possess a walk-away option:
if the value of a firm is lower than the value of its liabilities, shareholders can and will let creditors assume ownership of the firm. That
shareholders have limited liability is a key consideration in this regard.
Owning a home with a mortgage loan provides a useful analogy to help explain the theory underlying the public EDF model. Even though it
is common to say that you own your house, this is not really the case, since you do not possess its title.[7] The title is held by the lending
bank, and you only gain the title to the house (and thus truly own it) by paying off the mortgage in full. The further the market value of the
house is above the mortgage balance, the less likely you are to default on your mortgage.
Owning the equity of a company is analogous to holding a call option on the company's assets, where the required debt payment at the
horizon date serves as the option's strike price. In this framework the value of equity matches the value of the call option. In the simple case
where a firm is financed purely by equity and a zero-coupon bond with a face value X, the Black-Scholes option pricing formula for a call
option explicitly connects the observable equity value to the unobservable asset value and asset volatility. Denoting the value of equity by E,
the risk-free interest rate by r, and the time horizon by T, the Black-Scholes formula for the value of equity is

E = A_0 Φ(d_1) − X e^(−rT) Φ(d_2)    (6)

where

d_1 = [ln(A_0/X) + (r + σ_A²/2)T] / (σ_A √T),    d_2 = d_1 − σ_A √T    (7)

This formula makes it clear that the value of a firm's equity (E) is connected to its asset value (directly) and asset volatility (indirectly, through
the normal distribution). Unlike a typical option valuation exercise, in which the value of the underlying asset is known, in the
EDF model the value of the option, i.e., equity value, is observable, and we seek to uncover the value and volatility of the underlying asset.
Thus, we solve backwards from the option price and volatility to get the implied firm asset value and volatility.
[7] Some readers might be unfamiliar with the term "title." This is the legal document proving ownership of an asset such as a house or a car.


In addition to relating firm equity value to asset value in the foregoing equation (6), one can also utilize Itô's lemma to relate the firm's
equity volatility, σ_E, to its asset volatility, σ_A:

σ_E E = Φ(d_1) σ_A A_0    (8)

The two functional relationships (6) and (8) enable one to recover, in principle, the unobservable values of A_0 and σ_A from the values of E and
σ_E, which are observable from a public firm's equity prices. The estimates of A_0 and σ_A are, in turn, fed into the firm's distance to default
formula (5), which, in the basic structural model, leads to probability of default estimates using the normal distribution assumption for DD.
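In principle, recovering A_0 and σ_A from E and σ_E is a two-equation root-finding problem. The sketch below is a minimal, illustrative solver, not the production EDF algorithm (which departs from this textbook approach, as later sections describe): it bisects equation (6) for the asset value at a trial σ_A, then updates σ_A from equation (8), and repeats until the pair is self-consistent. All function names and input values are ours.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_equity(A, sigma_A, X, r, T):
    """Equity as a Black-Scholes call on the assets, equation (6).
    Returns the equity value and Phi(d1), the option delta."""
    d1 = (math.log(A / X) + (r + 0.5 * sigma_A ** 2) * T) / (sigma_A * math.sqrt(T))
    d2 = d1 - sigma_A * math.sqrt(T)
    return A * norm_cdf(d1) - X * math.exp(-r * T) * norm_cdf(d2), norm_cdf(d1)

def implied_asset_value_vol(E, sigma_E, X, r, T=1.0, n_iter=500):
    """Solve equations (6) and (8) for the unobservable asset value and
    volatility, given observable equity value E and equity volatility sigma_E."""
    sigma_A = sigma_E * E / (E + X)          # crude de-levered starting guess
    A = E + X
    for _ in range(n_iter):
        # Equity value is increasing in A, call(E) < E, and call(E + X) >= E
        # for r >= 0, so bisection on [E, E + X] pins down A at this sigma_A.
        lo, hi = E, E + X
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            val, _ = bs_equity(mid, sigma_A, X, r, T)
            if val < E:
                lo = mid
            else:
                hi = mid
        A = 0.5 * (lo + hi)
        # Update sigma_A from equation (8): sigma_E * E = Phi(d1) * sigma_A * A
        _, phi_d1 = bs_equity(A, sigma_A, X, r, T)
        new_sigma_A = sigma_E * E / (A * phi_d1)
        if abs(new_sigma_A - sigma_A) < 1e-12:
            sigma_A = new_sigma_A
            break
        sigma_A = new_sigma_A
    return A, sigma_A
```

Feeding the recovered A_0 and σ_A (together with the default point) into formula (5) then yields the basic model's distance to default.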

Debunking the Myth that EDFs Just Summarize Stock Market Information
Over the years we have sometimes heard the view that EDFs merely summarize stock price changes. This belief is likely motivated by two
observations: equity prices are an important input into the EDF model, and very often firms in financial distress also experience substantial
declines in their stock market values. This belief can, however, be refuted on both theoretical and empirical grounds.
Our description of the basic structural model showed that the EDF model, even before adding more realistic assumptions, introduces a
structure that transforms equity information into credit-relevant variables, i.e., market leverage and asset volatility. More specifically, the
EDF model incorporates information about firms' capital structures. The level of debt not only directly affects the level of the default point, but
also impacts how equity values and equity volatilities are translated into asset values and asset volatilities (per equations (6) and (8)). Inferring
the latter two quantities entails a deleveraging process, which usually weakens the correlation between equity values and default
probabilities, especially for firms of medium credit quality. To put it differently, it is not equity value that is driving a firm's credit quality.
Rather, it is a firm's market value-based net worth, a notion captured by DD, that determines its creditworthiness.
Empirically, we find that EDF measures provide much greater default prediction power than equity returns.8 Figure 5 below reports the average (and median) EDF and realized default rate for portfolios sorted by trailing six-month cumulative stock returns for North American corporates. The conventional wisdom suggests that a large decline in a firm's market capitalization signals high default risk. Hence, it implies a negative relationship between cumulative equity returns and the subsequent default rate. As shown in Figure 5, the hypothesized negative relationship does not hold. Firms with extremely poor equity performance do indeed exhibit higher EDFs and higher default rates, but firms with the best equity returns also exhibit higher default risk than firms with average equity returns. The time series averages of mean and median EDFs exhibit a similar pattern to realized default rates in their relation to past stock performance. This observation, at a minimum, suggests that equity returns do not produce consistent default signals compared with EDFs. Their default prediction power is strong when an entity's financial distress is severe enough to manifest itself in the stock market, but it becomes much weaker for firms of intermediate credit quality.
Figure 5  EDFs and Default Rates of Portfolios Sorted on Six-Month Stock Returns, North American Corporates, 2001-2009

Portfolio         Average Return %   Average EDF %   Median EDF %   Default Rate %
Lowest Return          -0.51             14.50           11.33            9.24
2                      -0.26              5.16            2.04            2.07
3                      -0.15              3.18            0.80            1.10
4                      -0.07              2.25            0.42            0.66
5                      -0.01              1.85            0.28            0.51
6                       0.06              1.68            0.24            0.40
7                       0.13              1.56            0.22            0.41
8                       0.21              1.66            0.26            0.39
9                       0.36              2.09            0.38            0.51
Highest Return          0.87              3.81            0.92            0.77

The structural modeling approach offers several analytical benefits. In this framework defaults are explicitly modeled as the result of interactions among economic drivers, rather than as unpredictable random events, as posited by reduced-form credit models. The causes of firms' financial distress (and soundness) can be traced back to their operating or financing fundamentals. This transparent causal relationship makes structural model-based credit assessments economically intuitive and easy to interpret.
More importantly, this modeling approach builds a testing ground in which risk managers can produce alternative credit opinions by filtering/stressing a specific set of economic inputs in a transparent and logically coherent fashion. In contrast, econometric approaches to
8  See Sun (2010) for an extensive study of the predictive power of EDFs versus equity returns.


default modeling are often built through a data mining exercise to find significant explanatory variables; that is, they find the set of variables that helps predict default, often without a priori economic justification. Additionally, the structural approach is particularly well suited to analyzing the credit exposure of a large portfolio and to determining the contribution of individual exposures to overall risk.
In the following subsections we describe how the EDF model estimates the three primary inputs to the DD calculation, as well as how we calculate a probability of default. As we noted at the outset of this section, although the foundations of Moody's Analytics EDF model are built upon the basic structural modeling framework, it is a significant extension of and improvement over the simple model described above. This reflects both its theoretical conception and over 20 years of development.
Amongst other things, the public EDF model introduces more realistic features into the theoretical model itself, which provides a better approximation of real-world firm capital structures and firm behavior. It supplements the theoretical assumptions with practical extensions that better reflect real-world aspects of credit risk, and uses estimation procedures that produce significantly improved estimates of credit risk over the basic structural model implementation. A summary of the enhancements over the basic model is provided in Figure 6.
Figure 6  Advances over Basic Structural Credit Models Made by the Public EDF Model

Theoretical Modifications (Basic Structural Model → Public EDF Model)
- No cash payouts → Cash payouts: coupons and dividends (common and preferred)
- Default occurs only at the horizon date → Default can occur at any time
- Two classes of liabilities (short-term liabilities and common stock) → Five classes of liabilities (short-term and long-term liabilities, common stock, preferred stock, and convertible stock)

Empirical Advances (Basic Structural Model → Public EDF Model)
- Default point is total debt → Default point is empirically determined
- Gaussian relationship between probability of default (PD) and distance to default (DD) → DD-to-EDF mapping empirically determined from calibration to historical data
- Estimation method of asset values and asset volatilities is not specified → Proprietary numerical routine to estimate asset value and asset volatility

3.2 Estimating Asset Value and Asset Volatility


Equations (6) and (8), for convenience referred to as the asset value equation and the asset volatility equation, respectively, show how asset values and asset volatilities are related to equity values and equity volatilities in the basic structural model. A crucial assumption is that the firm's liabilities consist only of a zero-coupon bond maturing at time T, which is also the only time default may occur. In the public firm EDF model we derive two similar equations after incorporating additional realistic assumptions (to be described shortly) and use them as the basis for empirical (i.e., data-based) estimations of asset values and asset volatilities. Before we describe how we numerically estimate asset value and asset volatility, it is worth discussing the analytical insights, as well as additional model realism, offered by our formulation of the asset value and asset volatility equations.

3.2.1  The Economic Intuition of the Asset Value and Asset Volatility Equations

The EDF model's asset volatility equation is the same as equation (8). A quick analysis of the equation reveals that equity volatility is proportional to asset volatility, with the proportionality factor equal to the product of the leverage ratio A/E and the hedge ratio ∂E/∂A. The observation that the proportionality factor is generally greater than one means that the equity volatility of a firm is larger than its asset volatility.9 This observation makes economic sense, since equity holders receive cash flows only after the company's debt is serviced, so their cash flows exhibit greater volatility than those for the company as a whole (i.e., for its assets). To put it differently, equity volatility is a function of both a firm's underlying business risk, attributed to its investment decisions, and its financial leverage, the result of its financing decisions. Estimating a firm's asset volatility is the process of removing the amplification effect of financial leverage from stock return volatility, in order to isolate the business risk caused purely by its investment decisions.
Additionally, the proportionality factor is not constant, suggesting a non-constant relationship between equity volatility and asset volatility. This is due to the fact that the leverage ratio A/E, as well as the hedge ratio ∂E/∂A, varies with market valuations of assets and equity. In general, asset volatility should exhibit slow variation over time (in the basic structural model it is actually assumed to be constant), reflecting the slow change in a firm's risk profile associated with its underlying business. However, a firm's equity volatility can exhibit much larger time variation due to market-perceived changes in financial leverage and the corresponding change in the hedge ratio. Figure 7 illustrates the difference between asset volatility (as estimated by the EDF model) and equity volatility with respect to both their levels and time variation for Johnson & Johnson and RadioShack. Multiplying the equity volatility by a constant factor to approximate asset volatility, as is
9  Mathematically, this is due to the observation that in equation (8) the hedge ratio, ∂E/∂A, is close to one (an increase in the total value of the firm mostly accrues to equity holders for a reasonably healthy firm), while the asset-to-equity ratio A/E is larger than one, with its exact value depending on the degree of leverage. This means not only that equity volatility is larger than asset volatility in absolute terms, but also that a change in asset volatility will be translated into a much larger change in equity volatility.


done in some other, simpler implementations of the structural model, would likely produce asset volatility estimates that are too volatile in
comparison to empirical observations, resulting in poor forward-looking default probability estimates.
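The leverage dependence of the proportionality factor is easy to verify numerically with the basic model's equations (6) and (8). In this hypothetical example, asset value and asset volatility are held fixed while the face value of debt varies; all parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def ncdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def equity_value_and_vol(A, D, sigma_A, r=0.03, T=1.0):
    """Equations (6) and (8): equity value and equity volatility implied by
    asset value A, face value of debt D, and asset volatility sigma_A."""
    d1 = (log(A / D) + (r + 0.5 * sigma_A ** 2) * T) / (sigma_A * sqrt(T))
    d2 = d1 - sigma_A * sqrt(T)
    E = A * ncdf(d1) - D * exp(-r * T) * ncdf(d2)   # equation (6)
    sigma_E = (A / E) * ncdf(d1) * sigma_A          # equation (8), dE/dA = N(d1)
    return E, sigma_E

# Same business risk (sigma_A = 25%), increasing leverage:
for D in (20.0, 50.0, 80.0):
    E, sigma_E = equity_value_and_vol(100.0, D, 0.25)
    print(f"D={D:5.1f}  E={E:6.2f}  sigma_E={sigma_E:.3f}  factor={sigma_E / 0.25:.2f}")
```

The amplification factor rises sharply with leverage, which is why a fixed equity-to-asset volatility scalar cannot reproduce the stable asset volatility profile shown in Figure 7.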
Figure 7 Asset Volatility and 90-day Equity Volatility for Johnson & Johnson and RadioShack

The asset value equation actually used in the EDF model, the counterpart to equation (6), is much more complex, since it relaxes a number of restrictive assumptions made in the basic structural model. First, the EDF model allows default to occur at any time when a firm's asset value falls below its default point. In the basic structural model, default can only occur at the maturity date of the zero-coupon bond. This assumption allows a firm's asset value to decline below its default point and then recover to a level above the default point before the debt's maturity date. In practice, companies tend to default when their asset values near or fall below their default points, regardless of whether that time is before the maturity of their debt. The EDF model incorporates this realistic behavior by allowing default to occur the first time a firm's asset value crosses its default point. While this assumption substantially increases the complexity of the theoretical relationship relating equity and asset values, the improvement in the model's ability to identify default risk more than justifies it.
Second, the public EDF model allows for multiple classes of liabilities in a firm's capital structure, namely 1) short-term liabilities, 2) long-term liabilities, 3) convertible securities, 4) preferred stock, and 5) common stock. This is in contrast to the basic structural model's assumption that the right side of a firm's balance sheet consists of equity and one class of debt. In particular, introducing additional realism into firms' capital structures entails the incorporation of special characteristics of hybrid securities, such as convertible bonds and convertible preferred stock. Such convertible securities can be converted into equity at specified conversion rates, often at the discretion of the security holders. Since convertible security holders will elect to exercise their conversion options only when the common share price is sufficiently high, the firm is effectively selling a portion of the upside return that otherwise belongs to the holders of common stock.10 This means the equity value of a firm partly financed by convertible securities would be lower than that of an otherwise identical firm financed entirely by pure debt and common stock. Conversely, in the process of making an inference of asset value based on equity value, ignoring the conversion feature of convertible securities results in an underestimation of the market value of assets and an overstatement of the probability of default (holding other factors constant).
A third extension of the EDF model involves the explicit consideration of cash leakages over time, in the form of dividends on stock, coupons on bonds, and interest expense on loans, in modeling the evolution of asset values. In the basic structural model cash leakage is assumed to be zero. In reality, cash leakages impact both default probabilities and the value of debt. For example, consider two firms with identical assets and debt, but one pays a larger dividend. Obviously, the firm paying the larger dividend has a higher default probability, since less cash is available to service debt.
The asset value and asset volatility of a firm are jointly determined by the system of two equations (6) and (8). Since analytical solutions to the system of equations do not exist, we employ an empirical procedure to estimate the two quantities (see the sidebar "Practical Difficulties Solving for Asset Value and Asset Volatility"). In the rest of this subsection we focus on the estimation of asset volatility since, first, its estimation is intrinsically difficult, requiring the estimation of a series of asset returns; and second, DDs are very sensitive to asset volatility: a small change in a firm's asset volatility can lead to a large change in its DD (and therefore in its probability of default). Once asset volatility is estimated, one can calculate asset value by simply plugging the value of asset volatility into the asset value equation.

3.2.2  Estimating Empirical Asset Volatility

To produce stable and informative estimates of a company's asset volatility we developed a numerical procedure that incorporates both firm-specific asset return information and information from comparable firms. The volatility estimate based exclusively on a firm's own
10  Bank capital securities form a limited exception to this, in that some can be converted if required by the regulators.


asset return information is termed empirical volatility, and the volatility estimate derived from the information of comparable firms is termed modeled volatility. We combine the two to produce the asset volatility measure used in the EDF model.
We employ an iterative procedure to estimate empirical volatility. Specifically, for an initial guess of asset volatility, one can construct a time series of asset values using the asset value equation. This time series of asset values is used to produce another estimate of asset volatility. We then compare the two asset volatility estimates to see if the difference is within a small tolerance level. If not, the new volatility estimate is used to generate another time series of asset values. We continue the iterative process until the volatility estimates converge. The asset value (hence, asset return) series so generated forms the basis of empirical volatility.
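The iteration can be sketched as follows. This is a simplified illustration only: it uses the basic model's zero-coupon equity equation as a stand-in for the EDF model's richer asset value equation, assumes a one-year horizon, a 3% flat rate, and 252 trading days per year, and applies none of the data adjustments described next.

```python
from math import log, sqrt, exp, erf
from statistics import pstdev

def ncdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def equity_value(A, D, r, T, sigma_A):
    """Basic-model equity value (a call option on assets); stands in for the
    EDF model's more complex asset value equation."""
    d1 = (log(A / D) + (r + 0.5 * sigma_A ** 2) * T) / (sigma_A * sqrt(T))
    return A * ncdf(d1) - D * exp(-r * T) * ncdf(d1 - sigma_A * sqrt(T))

def implied_asset(E, D, r, T, sigma_A):
    """Invert the equity equation for asset value by bisection."""
    lo, hi = max(E, 1e-9), E + D
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if equity_value(mid, D, r, T, sigma_A) < E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def annualized_vol(series, periods_per_year=252):
    """Annualized volatility of a price/value series from its log returns."""
    rets = [log(b / a) for a, b in zip(series, series[1:])]
    return pstdev(rets) * sqrt(periods_per_year)

def empirical_asset_vol(equity_series, D, r=0.03, T=1.0, tol=1e-6):
    """Iterate: back out an asset value series from the equity series under
    the current volatility guess, re-estimate volatility, repeat until the
    successive estimates converge."""
    sigma_A = annualized_vol(equity_series)   # initial guess: equity volatility
    for _ in range(200):
        assets = [implied_asset(E, D, r, T, sigma_A) for E in equity_series]
        new_sigma = annualized_vol(assets)
        if abs(new_sigma - sigma_A) < tol:
            return new_sigma, assets
        sigma_A = new_sigma
    return sigma_A, assets
```

Because each implied asset value equals the equity value plus the market value of debt, the asset return series is smoother than the equity return series, and the converged asset volatility sits below the equity volatility.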
Before calculating empirical volatility we make a number of adjustments, as needed, to the asset return series described above to ensure that the resulting asset volatility estimate is robust and forward-looking. In addition to standard data cleanup, such as trimming outliers, the asset return series are adjusted for large corporate transactions, such as mergers or acquisitions, which often result in a permanent change in the volatility of the firm's assets. Generally, post-event asset returns are more informative than those from prior to it, so if there has been a large change in firm size or capital structure, then empirical volatility is computed by placing a larger weight on the post-event asset returns. Also, the later portion of an asset return series receives more weight than the earlier portion in the calculation of empirical volatility. The purpose is to increase the forward-looking nature of empirical volatility estimates: if there are slow movements in asset volatility, the weighting scheme will produce volatility estimates that reflect the current state of the firm's business risk, rather than its past conditions.

Practical Difficulties Solving for Asset Value and Asset Volatility


Since asset value and asset volatility are jointly determined by the system of equations (6) and (8), estimating the two quantities entails
solving this system of two equations with two unknowns. Although the solution would seem to be a matter of simple algebra, it turns out
that solving this system of equations directly is a nontrivial exercise, and the straightforward approach does not lead to reliable PD measures.
There are a number of challenges with solving the system of equations directly. First, equity volatilities, assumed to be a known variable in equation (8), are not directly observable, and have to be estimated if one intends to invert the system of equations. There is a vast literature on the estimation of equity volatilities. Generally, one may choose one of three approaches: 1) calculating the standard deviation of a historical time series of equity returns; 2) using the implied volatilities calculated from at-the-money option prices on common stocks; and 3) modeling the time series variation of equity volatilities with a GARCH-type conditional volatility model, and using the resulting volatility forecast as the estimate of equity volatility.11
In our experience, none of these three methods produces satisfactory estimates of asset volatilities. Method 1) is backward-looking; using Method 2), one can only produce volatility estimates for the rather small subset of firms for which equity option quotes are available; and the problem with Method 3) is that the target time horizons for credit model estimation tend to be greater than a year, at which some of the more notable GARCH-like behavior is less pronounced.
Second, this seemingly innocuous system of equations is not analytically invertible. While this suggests that some numerical routines are in order, standard numerical techniques, such as the Newton-Raphson method, may not work well in practice, either because the resulting asset volatilities are not forward-looking enough for default prediction or because they are too erratic to be consistent with the economic intuition of slow-moving asset volatility. Indeed, as has been pointed out in the literature,12 the problem with the two-equations-and-two-unknowns approach is that it assumes that equity volatility is a constant, which is inconsistent with the structural model assumption that asset volatility is a constant: if asset volatility is constant, equity volatility must vary as leverage changes.
And lastly, whatever estimation routine we choose should not only be readily available for firms with long histories of data records, but also applicable to newly listed firms, as well as to entities with thinly traded shares. When a firm has too short a history for producing a statistical estimate of equity volatility and, likely at the same time, has no options traded on its stock to compute implied equity volatility, the system of two equations actually has three unknowns to solve for, an impossible task mathematically.

3.2.3  Modeled Asset Volatility

Empirical volatilities of comparable firms are useful in refining the estimate of a firm's own empirical volatility. This is why we produce modeled asset volatility, which, in essence, is an average empirical volatility of firms with similar characteristics. Estimating modeled volatility involves three steps. First, we group firms into buckets according to geographic region and 17 industry sectors. This produces generally homogenous groups. Second, within each region-sector bucket we fit a nonlinear regression of empirical volatility on variables measuring size, profitability, and growth. We then aggregate the residuals of the regression to produce an estimate of the fixed effect for a
11  The class of GARCH (Generalized Autoregressive Conditional Heteroskedasticity) models (see, for example, Bollerslev (1986) and Bollerslev, Chou, and Kroner (1992)) is used to describe the movement of volatility through time. It is commonly observed that volatility is not constant: stock market volatility increases during crises and then decreases in due course. The GARCH-type model explicitly takes into account the fact that the variance of return, conditional on the information in previous returns, is found to be dependent on this information.
12  See Ericsson and Reneby (2005).


given region-sector. In the third step we combine the region-sector fixed effect with the fitted value of the volatility regression to produce the modeled volatility estimate. The empirical volatility for a given region-sector bucket changes over time in response to shifting business conditions. To account for this time variation, modeled volatility is recalibrated each month so that, on average, modeled volatility is equal to empirical volatility. In this way, modeled volatility neither mutes nor amplifies changes in aggregate volatility that may occur as the result of evolving business conditions.
When we combine the estimates of empirical volatility and modeled volatility, the weight placed on the former is determined by the length of the time series of equity returns that is used in estimating empirical volatility. So the asset volatility of a new firm used in the model is heavily based on information from its peers, whereas the asset volatility of a firm with a long history of equity prices relies much more on its own history. In addition, the weighting scheme used is also a function of the firm's business mix between financial and non-financial sectors. Empirical volatilities are given higher weight for firms with larger shares of their business in the financial sector. Even though the weighting function itself is time independent, the weight of empirical volatility for a given firm may change over time as its equity price history accumulates and/or its business mix changes. For 100% non-financial firms, the weight of empirical volatility is capped around 70% (80% for 100% financial firms).
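As a stylized illustration of the blending step, the function below combines the two volatility estimates. Only the 70%/80% caps come from the text; the five-year linear ramp on history length and the linear interpolation in the financial share are purely hypothetical functional forms chosen for the sketch.

```python
def empirical_weight(years_of_history, financial_share):
    """Weight on empirical (vs. modeled) volatility. The cap of 70% for purely
    non-financial firms and 80% for purely financial firms is from the text;
    the linear interpolation and the five-year ramp are hypothetical."""
    cap = 0.70 + 0.10 * financial_share
    return cap * min(years_of_history / 5.0, 1.0)

def blended_asset_vol(empirical_vol, modeled_vol, years_of_history, financial_share):
    """Convex combination of the two estimates using the weight above."""
    w = empirical_weight(years_of_history, financial_share)
    return w * empirical_vol + (1.0 - w) * modeled_vol

# A newly listed firm leans entirely on its peers' modeled volatility:
print(blended_asset_vol(0.30, 0.20, years_of_history=0, financial_share=0.0))  # prints 0.2
```

The key qualitative features match the text: a new firm gets weight zero on its own history, the weight grows as history accumulates, and the cap is higher for financial firms.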
There are a number of benefits associated with incorporating modeled volatility into the asset volatility calculation. First, it enhances the stability of asset volatility estimates by reducing the noise contributed by firm-specific return series. Second, it improves the forward-looking power of the resulting PD estimates. Our research shows that the EDF model performs better than option implied volatility-based PD estimates in terms of rank ordering power. In contrast to option-implied volatilities, which are available only for a restricted sample of firms, the combined estimate brings another immediate advantage: it is readily applicable to all publicly listed firms, regardless of the length of their histories and the depth of trading.

3.3 Calculating the Default Point


At a conceptual level the default point represents the amount of a firm's liabilities due at a given time horizon that would trigger a default if not paid according to their contractual terms. In the basic structural model described earlier, the determination of the default point is straightforward: it is simply the face value of the zero-coupon bond, since this is the only liability of the firm, and its maturity is the same as the default horizon. In practice, firms' liabilities are often comprised of multiple classes of debt with various maturities, and potential default events are not limited to a specific maturity date. This means that the same amount of total liabilities for two otherwise identical firms may imply different required debt payments for a given time horizon. So the default point is a function of both the prediction horizon and the maturity structure of liabilities.13 Additionally, there are balance sheet items that are included in a firm's stated liabilities, but that do not have the potential to force it into bankruptcy. Examples include deferred taxes and minority interests.
The discussion so far suggests that equating a firm's total liabilities with its default point would not produce an optimal estimate of the latter for the purpose of default prediction. Since there is no uniform template that can be used to describe all firms' liability structures, the EDF model strives to find a parsimonious representation of firms' default points for a given time horizon, with the goal of maximizing the rank ordering power of distance to default (recall that the task of identifying the proper level of the EDFs is left to the DD-to-EDF calibration). We utilize empirical methods to achieve this goal.14
Estimating default points is a very tricky exercise. By analyzing our large corporate default database we have learned that firms do not always default when their asset values fall to the value of their total liabilities: they were able to stay afloat even though they started to have difficulty meeting their long-term obligations. On the other hand, some firms defaulted before their asset values fell to the book value of their short-term liabilities. In distress, many of a firm's long-term liabilities become short-term liabilities as covenants are broken and debt is restructured. Partly to address these factors we employ separate algorithms for non-financial firms and financial firms, with the goal of maximizing the model's default prediction power.
For non-financial firms, the default point for a one-year time horizon is set at 100% of short-term liabilities plus one-half of long-term liabilities. For financial firms, it is difficult to differentiate between long-term and short-term liabilities. As a consequence, the approach used in the EDF model is to specify the default point as a percentage of total adjusted liabilities. This percentage differs by subsector (e.g., commercial banks, investment banks, and nonbank financial institutions).
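The one-year rules just described translate directly into code. The 50% weight on long-term liabilities for non-financial firms is from the text; the financial-firm percentage is subsector-specific and not published, so it is left as a caller-supplied parameter here.

```python
def default_point_nonfinancial(short_term_liab, long_term_liab):
    """One-year default point for non-financial firms: 100% of short-term
    liabilities plus one-half of long-term liabilities (per the text)."""
    return short_term_liab + 0.5 * long_term_liab

def default_point_financial(total_adjusted_liab, subsector_pct):
    """Financial firms: a subsector-specific percentage (e.g., commercial
    banks vs. investment banks) of total adjusted liabilities. The percentage
    value itself is an assumption supplied by the caller."""
    return subsector_pct * total_adjusted_liab

# A non-financial firm with 100 of short-term and 50 of long-term liabilities:
print(default_point_nonfinancial(100.0, 50.0))   # prints 125.0
```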
For default prediction horizons longer than one year, we calibrate the default points to match projected debt maturities. This means that obligations with longer maturities receive higher weights in the default point calculation as the time horizon is extended. This is important not only because long-term debt will become short-term liabilities as time passes, but also because the expected growth rate of asset values will have an increasing impact on the firm's long-term DD. A static default point without adjustment to the default horizon will systematically bias
13  In addition, the default point is also affected by the definition of default, i.e., what credit events are considered defaults. For a given DD-to-PD mapping, a broader definition of default implies a higher default point. So given our default definition, as described in the sidebar on page 5, the goal becomes choosing the appropriate default point that maximizes the rank ordering power of the model.
14  It is often argued that the level and structure of companies' liabilities, and hence their default points, are subject to managerial discretion based on both the external economic environment and on the firm's financial health and investment strategies. This aspect of default point behavior can reduce (or increase) the likelihood of default for a given level of DD, especially when a firm's DD is small and it is in distress. The DD-to-PD mapping, to be discussed next, helps address this issue. But the dynamic behavior of the default point does not alter the fact that properly estimated DDs are a very effective credit metric for measuring firms' creditworthiness.


long-term DDs up and, hence, long-term EDFs down.

3.4 Moving from DD to EDF


The distance to default provides an effective rank ordering statistic to distinguish firms likely to default from those less likely to default. We have verified its effectiveness by observing a strong empirical relationship between DDs and observed default rates: firms with larger DDs are less likely to default. However, one still needs to take a further step to derive PD estimates.
In the basic structural credit risk model DDs are normally distributed as a result of the geometric Brownian motion assumption used to model the dynamics of asset values. However, actual default experience departs significantly from the predictions of normally distributed DDs. For example, when a firm's DD is greater than 4, a normal distribution predicts that default will occur about 3 times in 100,000. Given that the median DD of the entire sample of firms in the EDF dataset is not far from 4, this would lead to about one half of actual firms being essentially default risk-free. This is highly improbable.
Instead of approximating the distribution of DDs with a standard parametric distributional function, the EDF model constructs the DD-to-PD mapping based on the empirical relationship (i.e., the relationship evidenced by historical data) between DDs and observed default rates. Moody's Analytics maintains the industry's leading default database, with over 8,600 defaults as of the end of 2011. The process for deriving the DD-to-EDF empirical mapping begins with the construction of a calibration sample: large North American corporate firms, for which we have the most reliable default data. It is reliable in the sense that hidden defaults (defaults that occurred, but that were neither reported nor observed) are relatively less likely to cause estimation errors. The DD-to-EDF mapping is created by grouping the calibration sample into buckets according to the firms' DD levels, and fitting a nonlinear function to the relationship between DDs and observed default frequencies for each bucket. A stylized version of the resulting DD-to-EDF mapping is plotted in Figure 8 in green, along with the DD-to-PD mapping (the orange line) implied by a normal distribution of DDs.
The differences between the PDs calculated using normally distributed DDs and the DD-to-EDF mapping are obvious on both ends of the credit spectrum. On one hand, the observed default rates for firms with medium to high credit quality are significantly higher than implied by the normal distribution. The default rates may be small in absolute terms in both mappings as DD increases, but their difference can be of several orders of magnitude, as illustrated by the Johnson & Johnson example cited earlier. This effect is caused by the right-skewed empirical DD distribution, which has a thick and long right tail relative to the standard normal distribution. On the other hand, EDF measures for poor credit quality (i.e., low DD) firms are much lower than the PDs predicted by a normal distribution of DDs. This divergence can partly be explained by economic reasons. A company's management often adjusts the firm's financial leverage to increase the value of the firm, and the incentive for such managerial discretion is likely to be stronger when the firm is in financial distress. Such firms usually take any step that they can to avoid default, such as maximizing the drawdowns on their lines of credit and selling non-core assets. This means that firms with small DDs often don't default as quickly or as often as might be expected. Additionally, the empirical mapping approach places a natural constraint on the EDFs of firms with low DDs. The majority of defaults cluster around small intervals of DDs, and the noise in the data prevents us from fitting a robust monotonic relationship between DDs and default rates. This is why we cap the EDF scale at 35%. This data constraint also applies to the other end of the credit spectrum. We do not observe any defaults for firms with very large DDs, so the floor of the EDF scale is set at 1 bp.
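A stripped-down version of the bucketing-and-mapping procedure might look like the sketch below. The 1 bp floor and 35% cap come from the text; the equal-count buckets and piecewise-linear interpolation stand in for the proprietary nonlinear fit, and any sample data are synthetic.

```python
def fit_dd_to_edf(dds, defaulted, n_buckets=10):
    """Group (DD, default flag) observations into equal-count DD buckets and
    record each bucket's mean DD and observed default frequency."""
    pairs = sorted(zip(dds, defaulted))
    size = max(1, len(pairs) // n_buckets)
    knots = []
    for i in range(0, len(pairs), size):
        chunk = pairs[i:i + size]
        mean_dd = sum(d for d, _ in chunk) / len(chunk)
        default_rate = sum(f for _, f in chunk) / len(chunk)
        knots.append((mean_dd, default_rate))
    return knots

def edf_from_dd(dd, knots, floor=0.0001, cap=0.35):
    """Map a DD to an EDF by interpolating the bucket default frequencies,
    clamped to the published 1 bp floor and 35% cap."""
    if dd <= knots[0][0]:
        rate = knots[0][1]
    elif dd >= knots[-1][0]:
        rate = knots[-1][1]
    else:
        for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
            if x0 <= dd <= x1:
                rate = y0 + (y1 - y0) * (dd - x0) / (x1 - x0)
                break
    return min(cap, max(floor, rate))
```

In production the fitted function is monotone by construction; with noisy buckets the simple interpolation above would need an additional monotonicity constraint.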
Figure 8  From DD to PD: Empirical Mapping vs. the Normal Distributional Assumption
[Chart: probability of default (0% to 70%) against distance-to-default, comparing the DD-to-EDF empirical mapping with the Black-Scholes-Merton normal mapping. DD = 4 maps to a 0.003% PD in the simple BSM model, but to a 0.4% EDF metric.]

Our empirical calibration of the DD-to-PD mapping is preferable to the standard parametric distributional assumptions, since it captures aspects of actual default experience that are difficult to correctly specify in a purely theoretical setting. As described above, one benefit of an empirical mapping is that it provides realistic default probability estimates when a firm's net worth is close to zero. An additional benefit of using an empirical DD-to-EDF mapping is that it accommodates the existence of jumps to default: changes in asset values during a short time window are not always small and continuous, as described by the Brownian motion assumption of asset value. Firms


may jump to default when their asset values experience a sharp decline due to, for example, corporate fraud (e.g., Enron) or a sudden collapse of their business environment. While asset value processes can be modified to explicitly incorporate sudden jumps,15 estimating the jump parameters, such as intensity and magnitude, is a considerable challenge, since jumps are by nature rare and unpredictable. However, an empirically calibrated DD-to-PD mapping allows us to indirectly account for jump-to-default risk in an effective way.

4 The Term Structure of Default Risk


Term structures of default probabilities are necessary ingredients for the pricing, hedging, and risk management of long-term obligations. In particular, portfolio models of credit risk, such as Moody's Analytics RiskFrontier platform, require term structures of PDs as key inputs for the valuation of long-dated credit portfolios. The EDF model produces not just a one-year horizon EDF for each firm, but also a term structure of EDF measures at horizons of up to ten years. The EDF term structure can be expressed in multiple ways, analogous to the term structure of interest rates. (The sidebar below describes the terminology used in the EDF term structure.) In the rest of the discussion we describe the EDF term structure in terms of annualized t-year horizon EDF measures.

Terminology of EDF Term Structure


One can start the construction of the EDF term structure with the cumulative t-year EDF, CEDF_t, which is the probability of default at any time up to and including the end of year t. This leads to the probability of survival through year t, expressed as 1 - CEDF_t. Assuming a constant survival probability from a given year to the following one yields the annualized survival probability between time 0 and t, (1 - CEDF_t)^(1/t). Hence the annualized t-year EDF, EDF_t, is related to the cumulative t-year EDF through the equation

EDF_t = 1 - (1 - CEDF_t)^(1/t)

The binomial tree in Figure 9 shows the timeline of defaults and the corresponding terms for default probabilities.
Figure 9 Default Risk over Multiple Time Periods
(A binomial tree over years t = 1, 2, 3: at each node the firm either defaults, with the forward default probability for that year, or survives. The survival probability through t = 3 is the product of the one-period survival probabilities, (1 - EDF_1)(1 - f_{1,2})(1 - f_{2,3}).)

Readers familiar with the term structure of interest rates will immediately recognize the close analogy between the terminology used here and that used for the term structure of interest rates. In Figure 10 we make a direct comparison between the two sets of concepts. In particular, one can define the forward EDF between t - 1 and t, f_{t-1,t}, as the probability of default between times t - 1 and t conditional on survival until t - 1, such that the probability of survival through time t using this conditional information equals the unconditional survival probability implied by CEDF_t, i.e.,

1 - CEDF_t = (1 - CEDF_{t-1})(1 - f_{t-1,t})

So forward EDFs are calculated in a similar fashion to forward interest rates.
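As a quick illustration of these relationships, the conversions between cumulative, annualized, and forward EDFs can be sketched in a few lines (the cumulative EDF values below are made up for the example):

```python
def annualized_edf(cedf_t, t):
    """Annualized t-year EDF: EDF_t = 1 - (1 - CEDF_t)^(1/t)."""
    return 1.0 - (1.0 - cedf_t) ** (1.0 / t)

def forward_edf(cedf_prev, cedf_t):
    """Forward EDF f_{t-1,t} implied by 1 - CEDF_t = (1 - CEDF_{t-1})(1 - f_{t-1,t})."""
    return 1.0 - (1.0 - cedf_t) / (1.0 - cedf_prev)

# Illustrative cumulative EDFs for years 1-3 (fractions, hypothetical)
cedf = [0.010, 0.022, 0.035]
ann = [annualized_edf(c, t) for t, c in enumerate(cedf, start=1)]
fwd = [cedf[0]] + [forward_edf(cedf[i - 1], cedf[i]) for i in range(1, len(cedf))]
```

By construction, chaining the one-period survival probabilities 1 - f_{t-1,t} recovers the cumulative survival probability, which is the discrete analogue of compounding forward interest rates.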

Figure 10 Analogy between EDF Term Structure and the Term Structure of Interest Rates

EDF Term Structure | Interest Rate Term Structure
t-year cumulative PD: CEDF_t | t-year zero price: P(0,t)
t-year survival probability: 1 - CEDF_t | t-year hold-period return: 1/P(0,t)
Annualized t-year EDF: EDF_t = 1 - (1 - CEDF_t)^(1/t) | t-year spot rate: r_t = (1/P(0,t))^(1/t) - 1
Forward EDF between t - 1 and t: f_{t-1,t} = 1 - (1 - CEDF_t)/(1 - CEDF_{t-1}) | Forward rate between t - 1 and t: f(t-1,t) = P(0,t-1)/P(0,t) - 1

15 See, for example, Zhou (2001)'s structural model with jump risk.

CAPITAL MARKETS RESEARCH

4.1 Building EDF Term Structure Using DD Migration


To build the EDF term structure, we start with the construction of the DD term structure for time horizons t = 1, 2, ..., 5. By analyzing the maturity structure of a firm's long-term liabilities we estimate the default points for horizons longer than one year (see Section 3.4) and match those default points with the corresponding expected asset values to produce long time-horizon DDs.
Instead of calibrating a separate DD-to-EDF mapping for each time horizon, the EDF model employs a credit migration based approach to build the EDF term structure. A firm's credit condition is measured by its Distance-to-Default. An explicit characterization of the evolution of the DD over time, in conjunction with the one-year DD-to-EDF mapping, is sufficient to build an EDF term structure. This is because knowledge of the DD evolution allows us to calculate the expected one-year DD level of a future year. We illustrate this in Figure 11. For a firm whose current credit condition is characterized by its two-year DD,16 we can calculate its expected level of one-year DD one year from now, provided we know the probability of each possible path its current DD may travel over the next year. Heuristically, we may then apply the existing one-year DD-to-EDF mapping to the expected second-year DD level to produce an estimate of the second-year EDF.17 Subsequently, one may calculate a firm's two-year cumulative EDF by chaining together its first-year EDF and its expected second-year EDF (or equivalently in this setting, its forward EDF in year two):

CEDF_2 = 1 - (1 - EDF_1)(1 - f_{1,2})

So the annualized two-year EDF is simply

EDF_2 = 1 - (1 - CEDF_2)^(1/2)

Therefore, a key step in the process is the characterization of the DD transition density. Note that this procedure ensures that the forward EDF credit measure is always positive.

Figure 11 DD Migration and the Density of DD Migration
(The firm's DD at t = 0 migrates over one year to a distribution of possible DD levels at t = 1.)

The EDF model uses empirical procedures to estimate the DD transition density functions. There are two reasons for this. First, similar to the distributional properties of one-year horizon DDs, which were discussed in Section 3.4 in regard to one-year EDF estimation, the observed DD transition frequencies do not fit any typical parametric statistical distribution. In addition, empirical observations suggest that the shape of the transition density function may vary with the firm's current credit condition as measured by its current DD. A firm starting from a low DD (equivalently, from a high default probability) state is more likely to improve than to worsen its credit condition over time, provided it does not default: the probability mass of its transition density shifts upward toward higher DD values relative to its initial position. Conversely, a firm starting from a high DD (equivalently, from a low default probability) state will shift the probability mass of its density function downward relative to its initial position, reflecting a tendency toward deteriorating credit quality. As a result, DD transition density functions consistent with this observation will produce EDF term structures (in either annualized EDF or forward EDF terms) that are downward sloping for firms of poor credit quality and upward sloping for firms of good credit quality.

16 The subscript denotes the time horizon of DD, while the superscript denotes the time of observation.

17 To be more precise, the one-year DD-to-EDF mapping is applied path-by-path, and then one computes the expectation to produce an estimate of the forward EDF over year two. Let Φ denote the one-year DD-to-EDF mapping. Then the expected one-year EDF at time 1, or equivalently, the forward EDF over year 2, can be computed as f_{1,2} = E[Φ(DD)], the expectation of the mapped DDs under the one-year transition density.
The estimation procedure for the DD transition density is similar to the approach used for the one-year DD-to-EDF mapping. The key difference here is that we need to estimate a family of density functions, one for each DD level. Take the estimation of DD transition densities over year one as an example. First, we pool all monthly DD observations and bucket them into small groups according to their DD sizes. Second, within each DD group, we track each observation to find its realized one-year DD one year later. This allows us to build a frequency distribution of realized DDs, to which we subsequently fit a parametric distribution. Third, repeating the second step for each DD group, we end up with a family of transition density functions consistent with the historical average behavior of credit migration.
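The bucket-and-track procedure, together with the path-by-path expectation described in footnote 17, can be sketched as follows. This is an illustrative simplification: a normal distribution stands in for whatever parametric family is actually fitted, and the DD-to-EDF mapping is passed in as a generic function:

```python
import numpy as np

def fit_transition_densities(dd_now, dd_next, edges):
    """Bucket starting DDs by size, track each observation's realized DD one
    year later, and fit a density per bucket. A normal fit stands in here for
    the parametric family used in the production model."""
    dd_now, dd_next = np.asarray(dd_now), np.asarray(dd_next)
    params = {}
    bucket = np.digitize(dd_now, edges)
    for b in np.unique(bucket):
        realized = dd_next[bucket == b]
        params[int(b)] = (realized.mean(), realized.std(ddof=1))
    return params

def forward_edf_year2(mu, sigma, one_year_dd_to_edf, n_draws=100_000, seed=0):
    """Footnote 17's expectation: apply the one-year DD-to-EDF mapping path by
    path over the fitted transition density, then average the mapped EDFs."""
    rng = np.random.default_rng(seed)
    dd_draws = rng.normal(mu, sigma, n_draws)
    return float(np.mean([one_year_dd_to_edf(d) for d in dd_draws]))
```

Because the DD-to-EDF mapping is highly nonlinear, averaging the mapped EDFs rather than mapping the average DD matters: by Jensen's inequality the two generally differ, which is why the mapping is applied path by path.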

4.2 Extrapolating EDF Term Structure Beyond Five Years


The current EDF model does not estimate DD measures for horizons beyond year five. However, long-horizon default probabilities are frequently needed in valuing long-term credit instruments. Therefore, we need a method for extrapolating the existing term structure of EDF metrics beyond year five. In the current EDF model, it is assumed that after year four the term structure extends with a constant forward rate, i.e.,

f_{t-1,t} = f_{4,5} for t ≥ 5, so that 1 - CEDF_t = (1 - CEDF_{t-1})(1 - f_{4,5})

The advantages of this approach are twofold. First, it is computationally convenient and easy to adopt in portfolio analysis. Second, it results in upward-sloping term structures for high-quality companies and downward-sloping term structures for low-quality companies (with respect to annualized EDF credit measures). The term structures are also shown to be very close to what one would obtain by modeling one-year EDF dynamics with a one-factor stochastic model, an approach akin to the one used in one-factor interest rate term structure models where the interest rate dynamics are driven by the short rate.
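Under my reading of the extrapolation rule above (the last observed forward EDF held constant), extending a cumulative term structure is a short recursion; the five-year inputs below are made up for illustration:

```python
def extend_cumulative_edfs(cedf, horizon=10):
    """Extend a cumulative EDF term structure to `horizon` years by holding
    the last observed forward EDF constant, per the extrapolation rule above."""
    fwd = 1.0 - (1.0 - cedf[-1]) / (1.0 - cedf[-2])   # e.g. f_{4,5} for a 5-year input
    out = list(cedf)
    for _ in range(horizon - len(cedf)):
        out.append(1.0 - (1.0 - out[-1]) * (1.0 - fwd))
    return out

# Illustrative 5-year cumulative EDFs (hypothetical), extended to 10 years
cedf10 = extend_cumulative_edfs([0.010, 0.022, 0.035, 0.049, 0.064])
```

A constant positive forward guarantees the extended cumulative EDFs keep rising, while the annualized EDFs converge toward the constant forward value from below or above, which produces the sloping behavior described in the text.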

5 Public EDF Model Performance


Moody's Analytics has a long tradition of validating (i.e., checking the performance of) the public firm model. We gauge model performance along three dimensions: the power of the EDF measures, i.e., their ability to rank order default risk; their level calibration, meaning whether the level of the EDF is consistent with realized default rate levels; and their early warning power.
The validation studies are done on the four principal EDF datasets, i.e., those covering North American corporates, European corporates, Asian corporates, and global financial institutions. We focus on these broad groupings because we need a large number of defaults in each study in order to statistically evaluate the model's performance and to get reasonable and consistent results. That said, we do occasionally perform validation studies on more limited datasets, such as those covering single countries.
In this section we present an overview of the validation methodology, as well as the results for North American corporates. Although we do not report them here, the results are broadly similar for the other regions. All of the region-specific validation studies are available from Moody's Analytics.18

5.1 Rank Order Power


A key dimension of the success of a credit measure is its ability to prospectively differentiate between non-defaulting entities and defaulting ones. Intuitively, a powerful credit measure should assign low scores (or high EDFs in the current context) at the beginning of the prediction horizon to entities that default by the end of the prediction horizon and, similarly, assign high credit scores (or low EDFs) to those that do not default within the prediction horizon. This idea is formalized by the notion of the Cumulative Accuracy Profile (CAP) plot and its summary statistic, the Accuracy Ratio (AR). Appendix B describes the construction of the CAP plot and the calculation of the AR. By construction, the AR is restricted to between 0 and 1. It can be interpreted in a fashion analogous to the traditional correlation statistic: a higher AR implies that the rank order provided by the credit metric is more highly correlated with subsequent default experience. Thus, one should not interpret the value of the AR as merely the percentage of defaults correctly predicted (i.e., the hit rate).
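The full construction is given in Appendix B; a minimal sketch of the standard CAP/AR computation (not Moody's implementation) is:

```python
import numpy as np

def accuracy_ratio(scores, defaulted):
    """Accuracy Ratio via the CAP curve. `scores` are risk measures (higher =
    riskier, e.g. one-year EDF levels); `defaulted` holds 0/1 outcomes."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # riskiest first
    d = np.asarray(defaulted, dtype=float)[order]
    n, n_def = len(d), d.sum()
    # CAP curve: fraction of sample screened (x) vs. fraction of defaulters captured (y)
    x = np.concatenate([[0.0], np.arange(1, n + 1) / n])
    y = np.concatenate([[0.0], np.cumsum(d) / n_def])
    area_model = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)) - 0.5
    area_perfect = 0.5 * (1.0 - n_def / n)   # area above the 45-degree line for a perfect model
    return float(area_model / area_perfect)
```

The AR is the area between the model's CAP curve and the random (45-degree) line, normalized by the corresponding area for a perfect model, so a perfect ranking scores 1.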
The CAP curves for one-year EDFs for large North American corporates are plotted in Figure 12, along with the CAP curves for their Z-scores (the Z-score is a classical statistical credit scoring model first developed by Prof. Edward Altman19). The EDFs are rank ordered on the X-axis, starting with the highest EDFs on the left and moving to the lowest on the right. We pool all firm-month observations of a given sample before ranking them according to their one-year EDF values, so that the same EDF value at different time points is assigned the same risk ranking. We track each EDF observation for 12 months to see whether the firm has defaulted by the end of the prediction horizon. Crucially, we use the firm's EDF at the beginning of the measurement period. In the left panel, for example, 70% of the defaulters (measured on the Y-axis) were captured in the top 10% of the EDFs, by level. A credit metric with higher rank ordering power has its CAP curve bowed farther toward the upper left corner of the graph and consequently a higher AR than a less powerful one. In the foregoing example, a more powerful metric would capture 80% or 90% of the defaulters in the top 10% of the sample.

18 As of this writing, the most recent validation studies include Crossen, Qu, and Zhang (2011), Crossen and Zhang (2011a), Crossen and Zhang (2011b), and Crossen and Zhang (2011c).

19 Altman (1968) pioneered the statistical approach to default prediction. In this modeling approach, one relates the likelihood of default to a set of often arbitrary explanatory variables. The selection of explanatory variables varies with the sample under study, often including some combination of accounting, macroeconomic, and market-based variables.
We show the results of the performance test of EDF measures for two sub-periods, 2001 to 2007 (left panel) and 2008 to 2010 (right panel). We also compute Z-Scores for these two periods.20 A comparison of ARs between the sub-periods, 84% in the earlier one vs. 82% in the later one, shows that EDF measures have performed consistently over different stages of the credit cycle, even in the severe downturn captured in the 2008-2010 data. The EDF credit measure also performs substantially better than the Z-Score during both timeframes.21
Figure 12 CAP Curves for Large North American Corporate Firms

5.2 Level Validation


Figure 13 shows whether EDF metric levels at the beginning of a measurement period are consistent with subsequent default rates. Note that this differs from a rank order test: a default risk system's rank ordering ability can be strong, but the absolute levels of the measure could still be too high or too low. A good credit risk measure should exhibit both strong rank order power and statistically consistent level calibration.
In the level calibration test we compare realized default rates to the average EDF levels of twenty EDF level buckets. That is, at the start of a one-year time period we group firms by their EDF levels into twenty equally sized buckets (determined by the percentile ranking of their EDF level). For each bucket, we calculate the average EDF; then we calculate the actual default rate for each bucket over the next year and compare the results. In Figure 13 the first bucket represents the 5% of firms with the lowest EDF levels, while the 20th bucket represents the 5% of firms with the highest EDF levels. In calculating these numbers, the results of many months' measurement periods are averaged.
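The bucketing step described above can be sketched as follows, with synthetic data standing in for the validation sample:

```python
import numpy as np

def level_validation(edf, defaulted, n_buckets=20):
    """Group firms into equally sized buckets by EDF percentile; return the
    (mean EDF, realized one-year default rate) pair for each bucket,
    ordered from lowest risk to highest."""
    edf = np.asarray(edf, dtype=float)
    d = np.asarray(defaulted, dtype=float)
    order = np.argsort(edf)                       # lowest EDF first
    buckets = np.array_split(order, n_buckets)    # twenty (near-)equal groups
    return [(edf[b].mean(), d[b].mean()) for b in buckets]
```

Plotting the two series in each pair against the bucket index, on a log scale, reproduces the format of Figure 13: a well-calibrated measure shows the realized default rate tracking (and here, by design, sitting somewhat below) the mean EDF across buckets.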

Figure 13 shows that there is a high degree of correlation between the average level of EDF measures and the associated realized default rates. We also observe that, over this long time period, average EDF levels are higher than realized default rates across all twenty EDF level buckets. This result is by design. EDF measures have been calibrated to be somewhat conservative relative to the historical data because we cannot observe all defaults globally, so the realized default rate measured in Figure 13 is likely a lower bound on the true, unobservable default rate.

20 For both credit metrics, the test samples are restricted to firms with annual sales greater than $30 million.

21 Although AR scores are sample dependent, the results are so consistent that they are unlikely to be the result of differences in portfolio composition.

Figure 13 Comparison of EDF Levels with Realized Default Rates, Large North American Corporates, 2001-2010
(Mean EDF and realized default rate, in percent on a log scale, plotted across the twenty EDF rank order buckets.)

5.3 Early Warning Power


EDF measures are statistical predictions of default over some specified time horizon, such as one year. If they are performing as designed, they should, therefore, provide a warning signal consistent with this ex ante prediction horizon, if not better. We measure the early warning power of EDF measures by comparing the changes in the median EDF of a sample of known defaulters over time to the changes in the distribution of EDF measures for the entire population over the same time period. We see the results in Figure 14. Here, we compare changes in the median EDF for North American firms that defaulted between 2008 and 2010 with the changes in the 25th, 50th (median), and 75th percentiles of the EDFs for all North American firms. In 2001 the firms that ultimately defaulted seven-plus years later tracked the median for the entire sample. As time progressed, the distribution of EDF measures for North American firms began to shift down, reflecting the market's view that credit risk was declining. Over the same time period, credit spreads became progressively narrower, also signaling lower risk. However, the median EDF for the sample of defaulting firms moved sideways over the 2003-2006 time period. Although the level of risk for this sub-set of firms did not move higher, on a relative basis the sub-set of defaulters underperformed the overall population. In 2007, the EDF levels for the defaulting firms began to rise sharply, reaching the maximum 35% level by the end of 2007, well before the financial crisis reached a fever pitch in 2008, and well before the actual defaults took place.

Figure 14 EDF Evolution of North American Corporates vs. Companies that Defaulted 2008-2010
(Median EDF of the companies that failed in 2008-2010, plotted on a log scale from December 2001 through December 2010, against the 25th, 50th, and 75th percentiles of EDF measures for all North American companies.)

CAPITAL MARKETS RESEARCH

6 Conclusion
Moody's Analytics public firm EDF model produces default probability estimates on a daily basis for over 30,000 public firms globally. The model is built on the causal relationship between default and the economic drivers of default, namely asset value, asset volatility, and default point. Despite its quantitative nature, the model at its core relies on economic intuitions shared by traditional credit analysis: credit risk is a function of both business risk and financial risk. The EDF model delivers a powerful and reliable credit metric by 1) incorporating realistic assumptions into the underlying theoretical model, 2) developing an elaborate numerical routine to estimate quantities not directly observable, and 3) producing EDF estimates consistent with real-world experience on the basis of a comprehensive default data collection. The performance of EDF measures is regularly evaluated along three dimensions: rank order power, level accuracy, and early warning effectiveness. On all three counts the EDF measure has performed well, and it has been consistent at different stages of the economic cycle, showing its robustness. In addition to one-year EDFs, a frequently used credit measure among practitioners, the EDF model also generates term structures of EDFs for maturities up to ten years. Since its inception in the early 1990s, the EDF model has established itself as a leading industry benchmark for quantitative default risk assessment. Moody's Analytics continues to develop PD metrics on the EDF model platform, as described in Appendix A, to complement the traditional EDF measures and to meet users' increasing demand for risk metrics.


References
Altman, E. I. (1968). Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy. Journal of Finance, 23, 589-609.
Black, F., and M. Scholes (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81, 637-659.
Bohn, J. R., and R. M. Stein (2009). Active Credit Portfolio Management in Practice. Hoboken, NJ: John Wiley & Sons, Inc.
Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31, 307-327.
Bollerslev, T., R. Y. Chou, and K. F. Kroner (1992). ARCH Modeling in Finance: A Review of the Theory and Empirical Evidence. Journal of Econometrics, 52, 5-59.
Crosbie, P. J., and J. R. Bohn (2003). Modeling Default Risk. Moody's-KMV working paper, December.
Crossen, C., S. Qu, and X. Zhang (2011). Validating the Public EDF Model for North American Corporate Firms. Moody's Analytics Modeling Methodology.
Crossen, C., and X. Zhang (2011a). Validating the Public EDF Model for European Corporate Firms. Moody's Analytics Modeling Methodology.
Crossen, C., and X. Zhang (2011b). Validating the Public EDF Model for Asian-Pacific Corporate Firms. Moody's Analytics Modeling Methodology.
Crossen, C., and X. Zhang (2011c). Validating the Public EDF Model for Global Financial Firms. Moody's Analytics Modeling Methodology.
Dwyer, D., and S. Qu (2007). EDF 8.0 Model Enhancements. Moody's KMV White Paper.
Dwyer, D., Z. Li, S. Qu, H. Russell, and J. Zhang (2010). Spread-Implied EDF Credit Measures and Fair-Value Spreads. Moody's Analytics Modeling Methodology.
Ericsson, J., and J. Reneby (2005). Estimating Structural Bond Pricing Models. The Journal of Business, 78, 707-735.
Ferry, D., T. Hughes, and M. Ding (2012). Stressed EDF Credit Measures. Moody's Analytics Modeling Methodology.
Hamilton, D. T., Z. Sun, and M. Ding (2011). Through-the-Cycle EDF Credit Measures. Moody's Analytics Modeling Methodology.
Hull, J., M. Predescu, and A. White (2005). Bond Prices, Default Probabilities, and Risk Premiums. Journal of Credit Risk, Spring, 53-60.
Korablev, I., U. Makarov, X. Wang, A. Zhang, Y. Zhao, and D. Dwyer (2012). Moody's Analytics RiskCalc 4.0 U.S. Moody's Analytics Modeling Methodology.
Merton, R. C. (1973). Theory of Rational Option Pricing. Bell Journal of Economics, 4, 141-183.
Merton, R. C. (1974). On the Pricing of Corporate Debt: The Risk Structure of Interest Rates. Journal of Finance, 29 (May), 449-470.
Sun, Z. (2010). An Empirical Examination of the Power of Equity Returns vs. EDFs for Corporate Default Prediction. Moody's Analytics White Paper.
Zhou, C. (2001). The Term Structure of Credit Spreads with Jump Risk. Journal of Banking and Finance, 25, 2015-2040.


Appendix A Extensions to the Public EDF Model


The last decade has witnessed the rapid growth of the credit markets, the 2008 financial crisis and subsequent US recession, and now the European sovereign crisis. The credit markets have played a central role in this saga: first in the proliferation of structured products, accompanied by lending standards that, with the benefit of hindsight, proved to be lax, then in the loss of confidence in the creditworthiness of financial institutions, and subsequently in some sovereign entities. As the role of risk management expands, whether due to financial institutions' internal demands or to regulatory requirements, we have seen an increase in demand for credit risk metrics that meet particular risk management needs. A complete risk management system requires a PD measure relevant to the task, whether it is early warning, calculating regulatory capital, pricing, or stress testing.
Against this background Moody's Analytics has developed three extensions of the traditional EDF measures: CDS-implied EDFs (CDS-I EDFs), Through-the-Cycle EDFs (TTC EDFs), and Stressed EDFs (S-EDFs). These extended EDF measures complement the traditional EDF metric by broadening coverage to include unlisted entities (CDS-I EDFs), removing cyclical variations from the traditional EDF measures (TTC EDFs), or providing PD measures conditional on stressed economic scenarios (Stressed EDFs). Their modeling approaches take advantage of the explicit economic causality embedded in structural credit risk models. In particular, they all hinge on distance-to-default as a key modeling object, so at a fundamental level they rely on a common default dynamic and are internally consistent. Specifically, the CDS-I EDF model recognizes that DDs, as well as CDS spreads, contain a component that compensates risk-averse investors for bearing risk. This component needs to be isolated and estimated if one wants to derive an EDF-equivalent from CDS spreads. TTC EDF measures entail the construction of a filter to remove the cyclical component from a firm's EDF. Finally, Stressed EDFs are derived by modeling the sensitivities of firms' EDFs to various macroeconomic variables. Below we provide a brief description of the methodology used for each credit measure. Readers are referred to the specific methodology papers for more in-depth treatments.22

A.1 Credit Default Swap Implied (CDS-I) EDF Measures


Credit default swaps (CDS) are financial agreements by which the buyer of protection against default receives a payment from the seller of protection in a specified amount, to be paid if the reference entity defaults before the maturity date of the agreement. In exchange, the protection buyer makes a series of payments, called CDS premiums, until the maturity of the agreement or the occurrence of a default event, whichever comes first. Starting around 2001 the CDS market experienced rapid growth in both coverage and trading volumes. Since a CDS contract is essentially an insurance policy against the default of an entity, it is often considered a pure play on credit risk, free of the interest rate or liquidity considerations that often impact bond yields and credit spreads.
CDS spreads and EDF metrics are credit signals conveyed by two distinct, but related, markets. EDFs are derived from equity market information in conjunction with balance sheet information.23 Their movements correspond to a large degree to market valuations of equity, which are not necessarily credit related. Despite their focus on credit loss, CDS spreads are not pure measures of default probabilities. For one thing, the price of default protection depends not only on the likelihood of default, but also on the size of the loss if default occurs. Second, and probably more importantly, the price of insurance also depends on the buyer's degree of risk aversion: the more risk averse she is, the more she is willing to pay to avoid the same expected loss. Investors' risk attitudes are reflected in the market risk premium, the excess return demanded by risk-averse investors for bearing risk. For the same amount of expected credit loss, the risk premium can differ across regions and vary over time. For example, during financial crises, when capital is scarce, investors tend to exhibit a higher level of risk aversion, so they are willing to pay more for the same protection against default. To put it differently, the seller requires more compensation for taking on the same expected credit loss than during a benign economic environment.24 We can loosely transform this economic intuition into the following algebraic equation:

CDS Spread ≈ LGD × (Physical PD + Risk Premium)
So to extract default probability information from a CDS spread, one has to remove the confounding factors of loss given default (LGD) and the risk premia that arise from risk aversion.25
We assume that the CDS and equity markets provide, on average, consistent assessments of default probabilities. This assumption allows us to calibrate the LGD and the Market Price of Risk (MPR) that equate the two markets, using the subsample of firms with both CDS spreads and public EDF measures. Of the two calibration parameters, the estimation of the MPR is the more important step for the calculation of CDS-I EDFs.
22 See Dwyer et al. (2010) for the CDS-I EDF methodology, Hamilton, Sun, and Ding (2011) for the TTC EDF methodology, and Ferry, Hughes, and Ding (2012) for the S-EDF methodology.

23 As we argued earlier, asset value is the key variable in the public firm EDF model. A firm's EDF is derived from its equity value to the extent that asset values are not directly observable and have to be estimated indirectly through their relation with equity values.

24 Strictly speaking, the excess CDS premium under stressed economic conditions can be attributed to elevated default correlation: default risk is harder to diversify during a financial crisis than during regular times. Irrespective of the drivers of the CDS premium in excess of actuarial loss, investors' risk attitudes or the difficulty inherent in risk diversification, this part of CDS spreads needs to be removed to isolate the physical probability of default.

25 A common practice among practitioners in estimating PD from CDS spreads is to divide CDS spreads by estimated loss given default. This leads to the so-called risk-neutral PD. In essence, it ignores the market risk premium by assuming investors are risk neutral. This risk-neutral PD exaggerates the actual probability of default by an average factor of three (Hull, Predescu, and White (2005)).

To provide a heuristic description of how to isolate and estimate the MPR, consider a hypothetical risk-neutral world in which investors require no extra compensation regardless of the risk they bear. A consequence of risk neutrality is that all financial assets earn the risk-free rate, r. In particular, a firm's total asset value will grow at the rate r instead of the higher rate μ, the real-world growth rate posited by equation (1) on p. 8. Figure 15 illustrates the growth rates of a firm's asset value in the two distinct worlds and their corresponding DDs, while all other default drivers remain the same.26 The DD of the risk-neutral world, DD^Q, is smaller than the DD of the real world, DD^P, because of the different growth rates of asset value in the two worlds. As a consequence, the risk-neutral PD, QDF, the area under the green curve and below the default point, is larger than the real-world PD, EDF, the corresponding area under the orange curve. The difference between the two DDs (and the two PDs) is intuitively related to the degree of investor risk aversion: the more risk averse investors are, the larger the excess return (i.e., the amount of μ over r) demanded by investors, and the larger the difference between the two DDs (and the two PDs). This is the key economic intuition that underlies the following algebraic formulation:

DD^Q = DD^P - λρ√t,

where the market price of risk, λ, is a measure of investor risk aversion, and ρ measures the amount of risk associated with the firm's assets in terms of its co-movement with the market. The assumption of normally distributed DDs enables us to connect the risk-neutral PD with the physical PD in a simple analytical expression:

QDF_t = N(N⁻¹(EDF_t) + λρ√t)

If we simply divide an entity's observed CDS spread by its LGD, the resulting figure is just an estimate of its risk-neutral PD; the calculation implicitly assumes a zero risk premium, and such risk-neutral PDs bias the real-world PDs upward, as shown in the above analysis. If we can also compute the entity's real-world PD, which is what the EDF model does, then we can isolate and estimate the market price of risk, λ.
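The conversion implied by this expression, as reconstructed above, can be written directly; a minimal sketch using Python's statistics.NormalDist, with λ and ρ entering only through their product in the forward direction:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist().cdf        # standard normal CDF
N_inv = NormalDist().inv_cdf

def risk_neutral_pd(edf_t, lam_rho, t=1.0):
    """QDF_t = N(N^-1(EDF_t) + lambda*rho*sqrt(t)); lam_rho is the product of
    the market price of risk and the asset's correlation with the market."""
    return N(N_inv(edf_t) + lam_rho * sqrt(t))

def implied_lambda(edf_t, qdf_t, rho, t=1.0):
    """Invert the relation to back out the market price of risk, lambda,
    given both PDs and the firm's correlation with the market."""
    return (N_inv(qdf_t) - N_inv(edf_t)) / (rho * sqrt(t))
```

For any positive λρ the risk-neutral PD exceeds the physical PD, consistent with the upward bias of spread-over-LGD estimates discussed above.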
Figure 15 Real-World DD vs. Risk-Neutral DD

(The chart shows a firm's asset value over time, starting from A0 and growing at rate μ in the real world and at the lower rate r in the risk-neutral world; the areas of the two asset value distributions below the default point correspond to the real-world and risk-neutral PDs.)

A plot of the MPRs of investment-grade names for various regions in Figure 16 shows a time-varying pattern that is consistent with economic intuition: it peaked around the financial crisis, in agreement with the observed flight to safety during times of crisis.

26 In particular, the asset volatility of the firm is not changed by adjustments made to the firm's asset value drift, as per the Girsanov Theorem.

Figure 16 Market Price of Risk for Different Regions
(Time series of the MPR for North America, Europe, Asia & South America, and Japan, 2002-2012.)

CDS-implied EDFs were launched in the summer of 2010, covering, on a daily basis, over 2,500 entities with outstanding CDS contracts and
reported CDS spreads. In addition to providing an alternative PD metric for firms with traditional EDFs, CDS-I EDFs extend the coverage
of traditional EDFs from publicly listed firms to an important sector of market entities that have CDS contracts outstanding but no
publicly traded equity, such as sovereigns, municipalities, and subsidiaries of large firms. Figure 17 shows the CDS-I EDFs for a selected set
of European sovereigns.
Figure 17 One-Year CDS-Implied EDF of Selected European Sovereigns
[Chart: daily one-year CDS-I EDFs for Portugal, Ireland, Spain, and Italy, January 2010 to May 2012, ranging from 0% to 9%.]
A.2 Through-the-Cycle EDF (TTC EDF) Measures


Credit metrics are often distinguished by their degree of point-in-time (PIT) versus through-the-cycle (TTC) orientation. The terms, first
coined by rating agencies, are widely accepted by both regulators and risk management practitioners, although their exact meanings are
often debated. A primary example of a PIT measure is the traditional EDF, while agency ratings are frequently viewed as representative of
through-the-cycle measures. A common understanding of the difference between the two types of credit metrics lies in their time-series
behavior. A through-the-cycle credit measure is intended to capture a debt issuer's long-run credit risk, and moves slowly with the
expansion and contraction of economic cycles. A point-in-time credit measure is intended to capture an obligor's creditworthiness
conditioned on all credit-relevant information, and hence moves in response to both enduring and transient credit shocks.
The distinction between point-in-time and through-the-cycle credit risk measures is both important and useful. At a fundamental level, the
distinction between the two metrics should be motivated by sound economic reasoning, so the choice of either measure is more than a
matter of pure convenience. At an operational level, their distinct time-series properties make them suitable for different applications.
Specifically, PIT probabilities of default are useful for applications in which the early detection of changes in credit risk at both the single-name
and portfolio level is important, whereas TTC probabilities of default are suitable for applications in which the costs of adjustment,
which may arise from transaction costs, regulatory compliance costs, changes in contractual terms, etc., exceed the expected costs of
negative credit events (such as default).
Despite the widespread usage of the terms PIT and TTC among academics, practitioners, and regulators, there is no consensus about their
definitions. Hence, there has been no framework that links the two types of measures in an economically sensible and practically
parsimonious fashion. To help fill this need, Moody's Analytics has developed Through-the-Cycle Expected Default Frequency (TTC EDF)
credit measures. TTC EDF metrics are one-year probabilities of default that are largely free of the effect of the credit cycle. The metrics were
launched in May 2011 and are provided on a daily basis for all firms with traditional EDFs. TTC EDFs are derived from Moody's Analytics'
public firm EDF model. The TTC EDF model is essentially another model layer that transforms traditional EDF metrics into PDs that exhibit
the characteristics typically associated with through-the-cycle risk measures: stability over the cycle, smoothness of change, significantly
reduced procyclicality, and a long-run risk orientation.
Underlying TTC EDFs is a properly defined and measured concept of the cycle. Specifically, we define the cycle as the credit cycle (as
opposed to the business cycle). We measure this by looking at EDFs across time. As shown in Figure 18, the average EDF exhibits strong
periodic behavior, with peaks corresponding to times of financial stress and troughs corresponding to periods of credit expansion. Very often
the peaks in the average EDF coincide with times of economic recession, as identified by the National Bureau of Economic Research in the
US. Additionally, by investigating the spectral density of average EDFs, we identify a duration of roughly nine years for such defined credit
cycles, common to all major geographical regions. This means that if a firm's credit quality, as measured by its EDF, is largely driven by
general credit conditions, then a through-the-cycle PD measure should be reasonably stable over the cycle. The methodology of TTC EDFs is
based on this empirical observation.
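As a sketch of the spectral approach, the snippet below locates the dominant cycle length in an average-EDF series from the peak of its periodogram. The input series here is fabricated for illustration (a nine-year sinusoid plus noise), not actual EDF data.

```python
import numpy as np

def dominant_cycle_years(avg_edf, obs_per_year=12):
    """Estimate the dominant cycle length (in years) of an average-EDF
    series by locating the peak of its FFT-based periodogram."""
    x = np.asarray(avg_edf, dtype=float)
    x = x - x.mean()                                       # remove the level before the FFT
    power = np.abs(np.fft.rfft(x)) ** 2                    # periodogram
    freqs = np.fft.rfftfreq(x.size, d=1.0 / obs_per_year)  # frequencies in cycles per year
    peak = np.argmax(power[1:]) + 1                        # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic monthly series: 36 years of data with an embedded 9-year cycle.
t = np.arange(432) / 12.0
rng = np.random.default_rng(0)
series = 2.0 + np.sin(2 * np.pi * t / 9.0) + 0.1 * rng.normal(size=t.size)
print(dominant_cycle_years(series))  # roughly 9.0
```

On this synthetic input the estimator recovers the nine-year period, mirroring the empirical finding described above.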
Figure 18 Average US EDF Measures and Change in US Industrial Production, 1969-2010
[Chart: mean US EDF (left axis, 0-12%) and annual change in US industrial production (right axis, inverted, -15% to +20%), with NBER-dated recessions shaded, 1969-2010.]
The key distinction between PIT and TTC measures is their information content: a PIT measure incorporates all relevant information about a
firm's creditworthiness, whereas a TTC measure moves primarily in response to a firm's underlying, long-run credit quality trend, tuning out
changes attributable to cyclical variations in the aggregate market. Because shocks to macro-credit conditions occur at a higher frequency
than shocks to firms' own long-run credit trends, this informational difference allows us to impose a frequency-decomposition view on the two
types of credit measures. That is, a PIT measure contains both high-frequency and low-frequency credit risk signals, while a TTC measure
contains only the low-frequency components of the PIT measure.
The core technique of the TTC EDF methodology involves the calibration of a filter that extracts, from the firm's traditional EDF, the
low-frequency component pertaining to the firm's long-run credit trend. Figure 19 illustrates the filtering scheme. The filter operates on DD
rather than on EDF directly. The filtered DD possesses two desirable properties: it preserves the long-run average credit quality conveyed by
the traditional EDF, and it significantly reduces the amplitude of movement in the traditional EDF, so the TTC EDF is much more stable
through the cycle than the original EDF. Additionally, the parameters of the filter are calibrated to match the empirical length of credit cycles described above.
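The production TTC filter is calibrated as described above. As an illustration of the general idea only, the sketch below applies a simple exponential smoother to a DD series, with a half-life tied, by assumption, to the nine-year cycle length; the actual filter specification differs.

```python
def smooth_dd(dd_series, cycle_years=9.0, obs_per_year=12):
    """Toy low-pass filter on a DD series: exponential smoothing whose
    half-life is tied to the empirical credit-cycle length. This only
    illustrates the idea; the production TTC filter is different."""
    half_life = cycle_years * obs_per_year / 2.0   # assumption: half a cycle
    alpha = 1.0 - 0.5 ** (1.0 / half_life)         # per-observation smoothing weight
    out = [float(dd_series[0])]
    for dd in dd_series[1:]:
        out.append(alpha * dd + (1.0 - alpha) * out[-1])
    return out
```

The smoothed DD keeps roughly the same long-run level as the input but damps its cyclical swings; mapping the filtered DD through the usual DD-to-EDF step would then yield the stabilized PD.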


Figure 20 shows the time series of TTC EDFs, along with the original EDFs, for a couple of example firms.
Figure 19 Transformation of the Original DD to the Through-the-Cycle DD

Figure 20 Through-the-Cycle EDFs versus Original EDFs: Selected Examples

It is widely accepted that reducing the variability of a credit measure often comes at the cost of losing some of its forward-looking default
discriminating power, as measured by its Accuracy Ratio. Relative to other available approaches to smoothing PIT credit measures, the TTC
EDF achieves a much more favorable tradeoff between the two: it compresses the range of movement in the traditional EDF substantially while
losing a minimal amount of default prediction power. The stability of TTC EDFs is particularly desirable in risk management applications
where adjusting credit exposures (or levels of economic capital) is very costly.

A.3 Stressed EDF Measures


Stressed EDF measures were developed to meet the demand for what-if analysis that arises in internal risk functions, as well as in regulatory
stress testing. They are one-year default probabilities (PDs) conditioned on holistic economic scenarios developed in a large-scale, structural
macroeconometric model framework. They were launched in May 2012, initially for firms domiciled in North America, and cover roughly
85% of names with existing EDFs in that region. As of this writing, the metric is being developed for other regions, such as Europe,
Japan, and Australia. Stressed EDFs take advantage of the causal relationship embedded in the public firm EDF model to build an economic
transmission mechanism between macroeconomic variables and firm-level credit risk. Based on the historical relationship between the two,
one can estimate future levels of EDFs conditional on hypothetical economic forecast scenarios. Firm-level credit conditions are
characterized by an entity's DD, which again serves as the modeling object.
The Stressed EDF methodology can be divided into two distinct phases: an estimation phase and a projection phase. The estimation phase
establishes the historical relationship between DDs and the macro drivers, and is comprised of two sub-models. In the first sub-model we
estimate the relationship between macroeconomic drivers and individual DDs at the firm level. This allows us to assess the sensitivity of
firm-level DDs to the macro variables, with the added assumption that this sensitivity varies by industry and credit quality. Its result enables


us to determine each firm's relative position in a projected DD distribution corresponding to a given economic scenario at a future point in
time. The second sub-model estimates the relationship between the macro drivers and the distributional properties of DDs. With this we
can characterize the distribution of DDs for a given economic scenario, consistent with historical experience: the mass of the DD
distribution shifts from high DD (equivalently, low EDF) to low DD (high EDF) as the economy moves from expansion to recession. We
then combine the results from these two sub-models in the second phase to project the level of a firm's DD for a given economic scenario at
a future point in time. Crudely speaking, the Stressed EDF methodology creates micro-level images (in PD terms) of a given future economic
scenario, following historical experience.
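The two phases can be caricatured in a few lines of Python. This is an illustrative single-driver sketch, not the actual two-sub-model specification: it regresses DD changes on one hypothetical macro series and maps projected DDs to PDs with a normal CDF.

```python
import numpy as np
from statistics import NormalDist

N = NormalDist()  # standard normal CDF

def fit_dd_sensitivity(dd_changes, macro_changes):
    """Estimation phase (toy version): OLS of historical DD changes on a
    single macro driver, e.g. GDP growth."""
    x = np.asarray(macro_changes, dtype=float)
    y = np.asarray(dd_changes, dtype=float)
    beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

def project_stressed_edf(dd_now, alpha, beta, scenario_path):
    """Projection phase: roll DD forward along a hypothetical macro
    scenario, mapping each projected DD to a PD via the normal CDF."""
    dd, edfs = dd_now, []
    for macro in scenario_path:
        dd += alpha + beta * macro
        edfs.append(N.cdf(-dd))
    return edfs
```

With a positive estimated sensitivity, a recession path (negative growth) pushes DD down and the projected EDF up period by period, the qualitative behavior described above.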
The economic scenarios, the key input to Stressed EDF measures, are generated by the economic forecasting models of Moody's Analytics'
Economic & Consumer Credit Analytics (ECCA) division. The models themselves are structural macroeconometric specifications that allow
various aspects of the economy to evolve interdependently in a multivariate error-correction framework, where short-run deviations from
the system's dynamic equilibrium are modeled together with the long-run determinants of growth. The scenario forecasts can be viewed in
the context of a distribution of economic outcomes that are consistent with a particular narrative of risks facing the economy. The baseline
forecast represents ECCA's prediction of the most likely outcome given current conditions. The alternative scenarios are first sketched out
around a simulation-based probability distribution of economic outcomes and then filled in formally through the ECCA macroeconomic
model framework.
Figure 21 shows the GDP growth rates of the five economic scenarios and the resulting median Stressed EDFs corresponding to each
scenario. Given that GDP growth is one of the dominant economic factors, the rank order of the median Stressed EDFs across scenarios, as
well as the trend within each scenario, largely reflects that of GDP growth. In Figure 22 we show a couple of Stressed EDF
examples. They also display dynamics consistent with the underlying economic scenarios.
Figure 21 GDP Growth Rates and Median Stressed EDFs of Five Economic Scenarios

Figure 22 Stressed EDF Credit Measures: Selected Examples


Stressed EDF measures have a wide range of applications, from single-name credit risk management to credit exposure assessment
at the portfolio level, whenever a what-if analysis is called for. Relative to the standard Value-at-Risk approach, where extreme values are
generated by statistical assumptions, Stressed EDF measures hold a major advantage: they are generated by economic models with causal
links embedded in the system, and the underlying economic scenarios are produced by structural econometric models with richer dynamics
than what is allowed for by ad hoc statistical assumptions. In other words, the outputs of the Stressed EDF model are highly plausible, even
under extremely adverse economic scenarios, and highly contextual given historical experience and current conditions.

Appendix B Measuring Rank Order Power Using the Cumulative Accuracy Profile and Accuracy
Ratio
To evaluate a credit metric's rank order performance we begin with a simple question: what percentage of defaults does the credit metric
predict correctly, using a certain cutoff level of the risk measure, for a given time horizon? Intuitively, the higher the percentage, the better
the credit risk metric is presumed to be. However, this calculation by itself is not sufficient for measuring the predictive power of a credit
metric: one cares not only about the percentage of defaults correctly predicted (the true positives), but also about the percentage of non-defaults
correctly predicted (the true negatives), as well as the percentages of false positives (Type I errors) and false negatives (Type II errors). A
contingency table, as shown in Figure 23, describes the success and failure rates of both default and non-default predictions. Increasing the
rank ordering power of a credit metric amounts to increasing the number of correct predictions (the main-diagonal cells in the table) and
decreasing the number of incorrect predictions (the off-diagonal cells). A performance measure for credit metrics should therefore take into
consideration both success and failure rates.
Figure 23 Contingency Table for a Given Level of Credit Cutoff

                                     Actual Outcome
  Predicted by Credit Measure        Not Default           Default
  Low Risk                           Correct Prediction    Type II Error
  High Risk                          Type I Error          Correct Prediction
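The table's cells can be tallied mechanically. The sketch below assumes the convention that a score (e.g. an EDF) at or above the cutoff labels an entity as high risk.

```python
def contingency_table(scores, defaulted, cutoff):
    """Tabulate the four cells of the contingency table for one cutoff:
    entities with a score at or above the cutoff are labeled high risk."""
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for score, did_default in zip(scores, defaulted):
        high_risk = score >= cutoff
        if high_risk and did_default:
            counts["TP"] += 1    # correct default prediction
        elif high_risk:
            counts["FP"] += 1    # Type I error
        elif did_default:
            counts["FN"] += 1    # Type II error
        else:
            counts["TN"] += 1    # correct non-default prediction
    return counts
```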

Since the scores of a credit model span the spectrum of credit quality, we are concerned not only with its performance at a single cutoff
level, but also with its performance at every level. Intuitively speaking, we want to create a series of contingency tables, one for each level of the
credit score. To achieve this goal, we plot the Cumulative Accuracy Profile (CAP) curve, as shown in Figure 24, with each point representing
the information of a specific contingency table: the x-coordinate measures the cumulative percentage of all entities below a certain credit
level, i.e., those labeled as high risk, and the y-coordinate measures the cumulative percentage of actual defaulters that are labeled as high risk.
In the context of the EDF model, a point on its CAP curve shows the y% of defaulters captured by the x% of all entities with the highest EDFs.
Since the entities on both the x and y axes are arranged in order of decreasing risk, a powerful credit metric should have a CAP curve
bowed toward the top left corner of the graph. That is to say, a small slice of the population with the worst credit scores (those with the highest
EDFs) would capture the majority of defaults. It is worth noting that the CAP plot will be pulled downward if a large
percentage of defaults occur among firms deemed low risk (Type II errors), or pulled toward the right if a large percentage of firms deemed high
risk do not default (Type I errors). The Accuracy Ratio (AR) summarizes the information in the CAP plot, essentially all the information in the
contingency table for every cutoff level, in an intuitive summary statistic.
By construction, a model that assigns credit scores on a purely random basis will have a CAP curve coinciding with the 45-degree line and a
zero AR; a perfect credit model will have a CAP curve shaped like a right triangle and an AR of 100%. Of two competing credit metrics, the
one with the higher AR enjoys better overall performance in terms of rank order power. However, it is possible for the CAP curve of the
superior credit metric, judged by its AR, to cross the CAP curve of the inferior one, meaning that the former captures fewer
defaults than the latter over a certain segment of the risk spectrum.
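The CAP curve and AR described above can be computed as follows. This sketch assumes higher scores mean higher risk and uses simple trapezoidal areas; the perfect model's area under the CAP curve is 1 - pi/2, where pi is the sample default rate.

```python
import numpy as np

def cap_curve(scores, defaulted):
    """Return the CAP curve: cumulative population share (x) versus
    cumulative share of defaulters captured (y), riskiest names first."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    d = np.asarray(defaulted, dtype=float)[order]
    x = np.concatenate([[0.0], np.arange(1, d.size + 1) / d.size])
    y = np.concatenate([[0.0], np.cumsum(d) / d.sum()])
    return x, y

def accuracy_ratio(scores, defaulted):
    """AR: area between the model CAP and the 45-degree line, divided by
    the corresponding area for the perfect model."""
    x, y = cap_curve(scores, defaulted)
    area_model = float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))
    pi = float(np.mean(defaulted))          # sample default rate
    return (area_model - 0.5) / ((1.0 - pi / 2.0) - 0.5)
```

A scorer that ranks every defaulter above every survivor earns an AR of 1, a purely reversed ranking earns -1, and a random one hovers near 0, matching the description of the CAP plot above.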


Figure 24 Cumulative Accuracy Profile (CAP) Plots for Two Hypothetical Risk Scoring Systems


Report Number: 143179

Authors
Zhao Sun             1.212.553.4666    zhao.sun@moodys.com
David Munves         1.212.553.2844    david.munves@moodys.com
David T. Hamilton    1.212.553.1695    david.hamilton@moodys.com

Editor
Dana Gordon          1.212.553.0398    dana.gordon@moodys.com

Contact Us
Americas: 1.212.553.4399
Europe: +44 (0) 20.7772.5588
Asia: 813.5408.4131

Copyright 2012, Moody's Analytics, Inc., and/or its licensors and affiliates (together, "MOODY'S"). All rights reserved. ALL INFORMATION CONTAINED HEREIN IS
PROTECTED BY COPYRIGHT LAW AND NONE OF SUCH INFORMATION MAY BE COPIED OR OTHERWISE REPRODUCED, REPACKAGED, FURTHER TRANSMITTED,
TRANSFERRED, DISSEMINATED, REDISTRIBUTED OR RESOLD, OR STORED FOR SUBSEQUENT USE FOR ANY SUCH PURPOSE, IN WHOLE OR IN PART, IN ANY FORM OR
MANNER OR BY ANY MEANS WHATSOEVER, BY ANY PERSON WITHOUT MOODYS PRIOR WRITTEN CONSENT. All information contained herein is obtained by MOODYS
from sources believed by it to be accurate and reliable. Because of the possibility of human or mechanical error as well as other factors, however, such information is provided
as is without warranty of any kind and MOODYS, in particular, makes no representation or warranty, express or implied, as to the accuracy, timeliness, completeness,
merchantability or fitness for any particular purpose of any such information. Under no circumstances shall MOODYS have any liability to any person or entity for (a) any loss
or damage in whole or in part caused by, resulting from, or relating to, any error (negligent or otherwise) or other circumstance or contingency within or outside the control of
MOODYS or any of its directors, officers, employees or agents in connection with the procurement, collection, compilation, analysis, interpretation, communication,
publication or delivery of any such information, or (b) any direct, indirect, special, consequential, compensatory or incidental damages whatsoever (including without
limitation, lost profits), even if MOODYS is advised in advance of the possibility of such damages, resulting from the use of or inability to use, any such information. The credit
ratings and financial reporting analysis observations, if any, constituting part of the information contained herein are, and must be construed solely as, statements of opinion
and not statements of fact or recommendations to purchase, sell or hold any securities. NO WARRANTY, EXPRESS OR IMPLIED, AS TO THE ACCURACY, TIMELINESS,
COMPLETENESS, MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OF ANY SUCH RATING OR OTHER OPINION OR INFORMATION IS GIVEN OR MADE
BY MOODYS IN ANY FORM OR MANNER WHATSOEVER. Each rating or other opinion must be weighed solely as one factor in any investment decision made by or on behalf
of any user of the information contained herein, and each such user must accordingly make its own study and evaluation of each security and of each issuer and guarantor of,
and each provider of credit support for, each security that it may consider purchasing, holding or selling. MOODYS hereby discloses that most issuers of debt securities
(including corporate and municipal bonds, debentures, notes and commercial paper) and preferred stock rated by MOODYS have, prior to assignment of any rating, agreed to
pay to MOODYS for appraisal and rating services rendered by it fees ranging from $1,500 to approximately $2,400,000. Moodys Corporation (MCO) and its wholly-owned
credit rating agency subsidiary, Moodys Investors Service (MIS), also maintain policies and procedures to address the independence of MISs ratings and rating processes.
Information regarding certain affiliations that may exist between directors of MCO and rated entities, and between entities who hold ratings from MIS and have also publicly
reported to the SEC an ownership interest in MCO of more than 5%, is posted annually on Moodys website at www.moodys.com under the heading Shareholder Relations
Corporate Governance Director and Shareholder Affiliation Policy.
Credit Monitor, CreditEdge, CreditEdge Plus, CreditMark, DealAnalyzer, EDFCalc, Private Firm Model, Portfolio Preprocessor, GCorr, the Moodys logo, the Moodys KMV logo,
Moodys Financial Analyst, Moodys KMV LossCalc, Moodys KMV Portfolio Manager, Moodys Risk Advisor, Moodys KMV RiskCalc, RiskAnalyst, RiskFrontier, Expected Default
Frequency, and EDF are trademarks or registered trademarks owned by MIS Quality Management Corp. and used under license by Moodys Analytics, Inc.
