
NON-LINEAR RISK MEASUREMENT

Jeremy ODonnell

Thesis submitted to the University of London


for the degree of Master of Philosophy

IMPERIAL COLLEGE

April 2003

To my wife, for constant encouragement, my daughters, for constant
distractions, and my son, for waiting until I had finished.

ACKNOWLEDGEMENTS

I would like to thank Professor Nigel Meade for working with me over the many years it
has taken to complete this research on a part-time basis. He has been an unwavering
supporter of my subject area and a huge help in turning my raw ideas into a structured,
sensible presentation. He has been aided in the background by Mr. Robin Hewins, who has
been a reference for banking industry material.

I would like also to thank Dr. Richard Flavell, an early inspiration, who suggested that my
research include coverage of the regulatory environment. Professor Stewart Hodges, of the
Warwick Business School Financial Options Research Centre, has always expressed an
interest in the research and encouraged me in the early days.

The other staff of the Management School have always been ready with advice and
assistance, with special mention to Professor Sue Birley, for advice on the scope of my
research.

Both Kris Wulteputte and Jacques Longerstaey provided assistance while they were
supporting the RiskMetrics methodology, at the RiskMetrics Group and at JP Morgan.
Kris in particular gave me useful feedback on the core experimental work.

I would like to express sincere gratitude to both Gulf International Bank and Merrill Lynch
for their financial support. Don Simpson, Graham Yellowley and Jay Morreale indulged
my ambitions without asking for anything in return. Dominic Ash of Merrill Lynch
provided careful proofreading of the final thesis draft.

Numerical Technologies of Tokyo provided the Monte Carlo simulation add-in for Excel
that I use in Chapter 5. This seemed to work much better than my own efforts and saved
me a lot of heartache in the final months of my results gathering.

Finally, I would like to thank my family, for making me believe that I could do this.

ABSTRACT

Several tools exist to assist the modern risk manager in monitoring investments, ranging
from institution-wide position reports, through market sensitivity analysis and credit
exposure reports, to complex money-at-risk calculations and simulations. The risk manager
must choose one or more methodologies to combine this data to provide meaningful
measures of the risk. One particularly difficult area has been the risk that arises from
positions in options, in circumstances when the risk manager does not want to
compromise on speed, accuracy or cost of implementation. This will happen when the
institution has significant option positions, requires a measure with credibility, and has
limited resources for systems implementations. In this thesis, I look at the popular
methodology available to risk managers, RiskMetrics, and assess the level of compromise
that the risk manager must make when using the fast techniques within this methodology
to measure non-linear risk from option portfolios. The thesis describes the common
shortfalls in the RiskMetrics model when applied to a typical portfolio for a financial
institution. The thesis goes on to examine the challenges of non-linear risk measurement in
more detail. In particular, I examine the measurement of non-linear risk using the
RiskMetrics Delta-Gamma-Johnson modelling method. Very few publications in the field
of non-linear Value at Risk (VaR) have included a review of the Johnson methodology. I
show that it can work effectively for non-linear risk in interest rate options, under some
circumstances. I also show an example for which it understates the risk. Organisations may
still, at the time of writing, be adopting the methodology as a component of a RiskMetrics VaR
implementation. The thesis presents a framework that risk managers can apply to their own
portfolios, to assess the suitability of the Delta-Gamma-Johnson approach.

TABLE OF CONTENTS

1 INTRODUCTION..................................................................................................................9
1.1 The Development of Risk Management ..........................................................9
1.2 Chronology...............................................................................................................12
1.3 The G30 Report......................................................................................................14
1.4 Market Risk and VaR ............................................................................................16
1.5 Credit Risk .................................................................................................................18
1.6 Integrated Risk.........................................................................................................19
1.7 The UK Regulator..................................................................................................19
1.8 Non-Linear Risk Measurement.........................................................................21
1.9 Motivations and research objectives ...............................................................22
1.10 Organisation of the thesis...................................................................23
2 VALUE AT RISK CONCEPTS.....................................................................................24
2.1 Introduction..............................................................................................................24
2.2 A history of derivatives trading .........................................................................26
2.3 A history of Risk Management..........................................................27
2.4 Risk in the Financial Markets .............................................................................29
2.5 Concentration and diversity................................................................................31
2.6 Risk Management ...................................................................................................32
2.7 Value at Risk.............................................................................................................38
2.8 Benefits of Risk Management Controls.........................................................40
2.9 VaR Approaches.....................................................................................................42
2.10 VaR Inputs ................................................................................................................52
2.11 VaR Outputs .............................................................................................................53
2.12 Other Risk Measures .............................................................................................55
2.13 Coherency ..................................................................................................................56
2.14 When to use which method...............................................................................57
2.15 Back testing ...............................................................................................................57
2.16 RiskMetrics..............................................................................58
2.17 What’s wrong with VaR .......................................................................................58
2.18 Conclusion.................................................................................................................61
3 THE RISKMETRICS METHODOLOGY...............................................................62
3.1 Introduction..............................................................................................................62
3.2 RiskMetrics in Summary ......................................................................................63
3.3 History of the methodology...............................................................................64
3.4 JP Morgan..................................................................................................................65
3.5 Reuters........................................................................................................................65
3.6 Context and development ...................................................................................65
3.7 The Heart of RiskMetrics....................................................................................66
3.8 Time horizon............................................................................................................67
3.9 RiskMetrics standard deviations .......................................................................67
3.10 RiskMetrics Covariances and Correlations...................................................69
3.11 Data mapping ...........................................................................................................69
3.12 Inclusion of non-linear instruments ................................................................72
3.13 Limitations.................................................................................................................73
3.14 Assumptions .............................................................................................................73
3.15 Features .......................................................................................................................75

3.16 Conclusion.................................................................................................................79
4 RISKMETRICS IMPLEMENTATION.....................................................................80
4.1 Introduction..............................................................................................................80
4.2 RiskMetrics in the trading room.......................................................................81
4.3 Data..............................................................................................................................82
4.4 Interest Rate Risk....................................................................................................83
4.5 Equity Concentration Risk ..................................................................................84
4.6 Corporate Bonds.....................................................................................................85
4.7 Option Risk...............................................................................................................86
4.8 Conclusion.................................................................................................................94
5 USE OF JOHNSON TRANSFORMATION IN RISKMETRICS................95
5.1 Introduction..............................................................................................................95
5.2 The Standard RiskMetrics Approach..............................................................95
5.3 Delta-Gamma...........................................................................................................95
5.4 Non-linear market risk in RiskMetrics ...........................................................96
5.5 Delta-Gamma-Johnson Method.......................................................................98
5.6 Worked Example..................................................................................................102
5.7 Full test .....................................................................................................................107
5.8 Test Statistic for Null Hypothesis..................................................................108
5.9 Test Details.............................................................................................................109
5.10 Results of Experiment........................................................................................110
5.11 Analysis .....................................................................................................................111
5.12 Conclusion...............................................................................................................120
6 CONCLUSIONS.................................................................................................................122
6.1 Further work...........................................................................................................123
6.2 Summary ..................................................................................................................126
APPENDICES.............................................................................................................................127
Deal Data.................................................................................................................................128
Results of Experiment........................................................................................................129
REFERENCES............................................................................................................................130

LIST OF TABLES

1. Chronology of VaR and non-linear risk publications.............................................................................12
2. Recommendations of the G30 Report into Derivatives .......................................................................14
3. A history of derivatives trading .......................................................................................................................26
4. Significant Risk Management Events ...........................................................................................................27
5. Taxonomy of Limits ............................................................................................................................................33
6. Features of BPV Limits......................................................................................................................................37
7. Prevention of Risk Management Events.....................................................................................................40
8. Features of Variance-covariance.....................................................................................................................45
9. Features of Historic Simulation ......................................................................................................................49
10. Features of Monte Carlo .................................................................................................................................51
11. Other Risk Measures.........................................................................................................................................55
12. Development of RiskMetrics.........................................................................................................................66
13. Challenges of RiskMetrics...............................................................................................................................81
14. Exotic option features......................................................................................................................................89
15. Trade Data and Market Data for Cap 1..................................................................................................102
16. Delta Exposure for Cap 1.............................................................................................................................104
17. Gamma Exposure for Cap 1.......................................................................................104
18. Covariance matrix for USD tenors, November 1996........................................................................105
19. Mina & Ulmer calculated moments for Cap 1......................................................................................105
20. Simulated Moments for Cap 1 using full revaluation and Delta-Gamma approximation..105
21. Johnson fit parameters for Cap 1...............................................................................................................106
22. Moments of pdf for Cap 1 using Johnson Transform......................................................................107
23. Results of linear regression of Kolmogorov-Smirnov statistic vs dependent variables.......116

TABLE OF FIGURES

1. Historic Simulation results .............................................................................................................48
2. Risk System Structure ......................................................................................................................52
3. Payoff for a linear instrument .......................................................................................................................87
4. Options pay-offs at maturity.........................................................................................................................87
5. Option pay-off curves......................................................................................................................................88
6. Delta-Gamma-Johnson Method..................................................................................................................98
7. Approach for test procedure.......................................................................................................................108
8. K-statistic vs Option Strike/Underlying Forward..............................................................................112
9. K-statistic vs Option Expiry .....................................................................................................113
10. K-statistic vs reference portfolio skew ....................................................................................................114
11. K-statistic vs calculated portfolio skew...................................................................................................115
12. K-statistic vs transformation type.............................................................................................116
13. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 7........117
14. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 16 .....118
15. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 10 .....119
16. Cumulative frequency of returns using delta-gamma-Johnson and simulation: portfolio..120

1 INTRODUCTION

1.1 The Development of Risk Management


The profits and solvency of a financial institution are subject to certain risks, arising from
the financial assets it holds and the contracts it has executed. Risk managers monitor the
risks being run by the institution, maintaining them within levels approved by the board
of the institution. Value at Risk (VaR) has become a popular family of tools to assist the
risk manager.

The roots of risk management lie in portfolio theory and statistics. Portfolio theory is a
particularly strong influence on the variance-covariance class of Value at Risk processes,
while the statistics literature is fundamental to all current day risk calculations. From the
statistics of sample distributions, we know how to predict future market behaviour, given
observations from the past. The utility of the normal distribution is particularly important
in this respect. From portfolio theory, we know how to combine the market behaviour of
individual assets, to predict the market behaviour of a portfolio of these assets. We know
that a diversified portfolio will have less risk than the riskiest asset in the portfolio. A Value
at Risk measure must reward traders for diversifying their risks, rather than punish them,
and an easy way to do this is to build upon portfolio theory.
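The standard portfolio-theory result behind this point can be stated briefly (standard notation, not a formula quoted from the thesis): for a portfolio with position weights $w$ and an asset-return covariance matrix $\Sigma$, the portfolio return variance is

\[
\sigma_p^2 \;=\; w^{\mathsf{T}} \Sigma\, w \;=\; \sum_i \sum_j w_i\, w_j\, \sigma_i\, \sigma_j\, \rho_{ij} ,
\]

so that whenever the pairwise correlations $\rho_{ij}$ are below one, $\sigma_p$ is less than the weighted sum of the individual volatilities. A variance-covariance VaR built on this quantity therefore rewards diversification automatically.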

The G30 report in 1993 marks the beginning of current day Value at Risk literature. The
report was prompted by concern over the growth of derivatives trading in the industrial
world, which the G30 countries represent. It set out industry best practice for managing
derivatives trading. This report prompted numerous banks to develop market risk
management programmes, although larger institutions were already developing risk
management frameworks at the time of the report. Many banks were already implementing
internal risk models when JP Morgan’s RiskMetrics was published the following year.
Although weak in some areas, this landmark publication set a minimum standard for all
banks to attain, when measuring market risk.

Within Europe, the Capital Adequacy Directive (CAD) has strongly influenced risk
management practice. In the UK, the Bank of England and its regulatory successor, the
FSA, have published significant prescriptive methodologies for reporting risk exposures
and allocating risk capital. These methodologies are often conservative, in order to
maintain a general applicability across all regulated bodies. The regulator offers banks the
alternative of obtaining recognition for their internal risk management processes and
models. Many banks have now completed the process to achieve model recognition for
some or all of their trading activities, which allows them to combine internal and regulatory
risk management reporting and benefit from reduced capital requirements.

There are many tools available to the market risk manager to calculate Value at Risk.
Several authors have shown that different tools are appropriate for different circumstances.
While some methodologies are undoubtedly less computer intensive, others may operate
better in unusual market conditions or where there is restricted market data available.

The variance-covariance approach is comprehensively documented through its best known
implementation, RiskMetrics. The literature for other approaches, principally historic
simulation and Monte Carlo, is more generic in nature. Monte Carlo, a specialist parametric
approach to VaR, is also used for pricing certain types of financial instruments, particularly
complex options. The method is used to obtain a quasi-sample
distribution for the portfolio value, based on evaluating a large number of alternative
outcomes. The development of the method has been aided by the ongoing improvements
in computer processing capacities, as well as research aimed at reducing the number of
simulations required to obtain an acceptable standard error on the estimator. Much of the
academic research focuses on achieving an unbiased estimator from the simulation. Monte
Carlo is now an important tool within VaR, providing an approach for predicting market
behaviour when the markets are difficult to model or have poor historic data.
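As an illustration of the mechanics described above, the following is a minimal Monte Carlo VaR sketch, not the thesis's implementation: the covariance matrix, the revalue function and the position sensitivities are hypothetical, and a production system would revalue each instrument with its full pricing model.

```python
import numpy as np

def monte_carlo_var(cov, revalue, n_sims=10_000, confidence=0.95, seed=42):
    """Estimate VaR by simulating multivariate normal risk-factor returns."""
    rng = np.random.default_rng(seed)
    shocks = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=n_sims)
    pnl = np.apply_along_axis(revalue, 1, shocks)       # portfolio P&L in each scenario
    return -np.percentile(pnl, 100 * (1 - confidence))  # loss not exceeded at the confidence level

# Hypothetical two-factor example: a linear portfolio described by fixed sensitivities.
cov = np.array([[1.0e-4, 0.5e-4],
                [0.5e-4, 4.0e-4]])            # assumed daily return covariance
deltas = np.array([1_000_000.0, -500_000.0])  # assumed P&L per unit factor return
print(f"95% 1-day VaR: {monte_carlo_var(cov, lambda r: deltas @ r):,.0f}")
```

The standard error of the percentile estimate falls roughly with the square root of the number of simulations, which is why the variance-reduction research mentioned above matters in practice.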

Monte Carlo is an expensive investment from the perspective of available computer
resources. No matter the size of the organisation, there is never enough compute capacity
to go round and, for this reason, research continues to find a cheaper way of incorporating
option risks into the Value at Risk measure. RiskMetrics suggests fitting a generic
probability distribution function to the portfolio. Other authors have provided alternative
methods.

Value at Risk measures are critically dependent on market data and the quality of the
analysis the risk manager performs on this data. A common assumption within a Value at
Risk process is that changes in market data follow a normal distribution. Econometricians
have shown that deviations from the normal distribution, such as excess kurtosis (fat tails)
are clearly observable in the financial markets. Another common technique underlying
Value at Risk is to model the joint distribution between two market data time series by

capturing the correlation, using ordinary least squares regression. Studies have also shown
that regression analysis should be used with caution with financial data. The fundamental
assumptions of linear regression are frequently breached by financial time series data.
Observations from the financial markets display autocorrelation, whereby current
observations are correlated with previous observations in the same time series. They also
exhibit heteroscedasticity, time-varying volatility and covariance, making ordinary least-
squares correlation results meaningless or misleading. Other models relax some of the
assumptions, thereby improving the performance of the model with financial data. A
notable example with an application in Value at Risk is the GARCH (generalised
autoregressive conditional heteroscedasticity) volatility model, which makes it possible to derive
steady state volatilities and correlations for time series.
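To make the last point concrete, the GARCH(1,1) conditional variance recursion can be written in standard textbook notation (this is not a formula quoted from the thesis):

\[
\sigma_t^2 \;=\; \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2 ,
\qquad \omega > 0,\ \alpha, \beta \ge 0 ,
\]

where $\varepsilon_{t-1}$ is the previous period's return innovation. Provided $\alpha + \beta < 1$, the process has an unconditional (steady state) variance $\bar{\sigma}^2 = \omega / (1 - \alpha - \beta)$, which is the sense in which GARCH yields long-run volatilities for a time series. The exponentially weighted scheme used by RiskMetrics can be viewed as the restricted case $\omega = 0$, $\alpha = 1 - \lambda$, $\beta = \lambda$.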

In this chapter, we follow the evolution of VaR literature through its modern history. Key
publications are set out chronologically and any contribution to the subject of non-linear
VaR measures noted. A special place is reserved to outline the role of the regulator, which
has been key in maintaining momentum behind VaR. At the end of the chapter, we take a
look at the future direction for research.


1.2 Chronology
The chronology of significant proprietary and regulatory Value at Risk publications is set
out below, together with the significant non-linear risk academic papers.

Year  Publication: Themes

1988  BIS Capital Accord: Regulatory requirements to allocate capital for exposure to default risk.

1993  G30 report on Derivatives trading: Senior management oversight, marking to market, measuring market risk, stress testing, independent oversight, credit exposure management and measurement.

1993  European Union Capital Adequacy Directive: Requirement to allocate capital against exposure to market risks.

1993  BIS consultative paper on market risk: Framework for assessing capital adequacy of market risk.

1994  RiskMetrics first published (version 2): Standard market risk measurement methodology and data set for linear portfolios.

1995  RiskMetrics 3: Non-linear risk using delta/gamma estimate or structured Monte Carlo.

1995  BIS paper on proposed changes to the capital accord for market risk measurement: Internal VaR models to calculate capital charge.

1995  CAD 1 amendment to the Capital Adequacy Directive (implemented January 1996): UK adoption of EU CAD.

1996  RiskMetrics Monitor: Non-linear portfolio risk using Cornish-Fisher.

1996  RiskMetrics 4: Non-linear risk captured using Johnson or simulation.

1996  Amendment to the BIS Capital Accord to incorporate market risks: Internal model recognition.

1998  CAD 2 amendment to the EU Capital Adequacy Directive: Internal model recognition for market risk exposures.

1999  Britten-Jones & Schaefer; Li; Mina & Ulmer: Non-linear risk measurement (3 papers).

2000  Mina: Use of quadratic approximations for Monte Carlo simulation.

2001  Return to RiskMetrics (revision to Technical Document 4): New cash flow mapping algorithm, emphasis on simulation for option portfolios.

Table 1: Chronology of VaR and non-linear risk publications


1.3 The G30 Report


The G30 report on Derivatives trading (Global Derivatives Study Group, 1993) was
fundamentally influential in the risk management industry. The primary recommendations
of interest to market risk managers1 were:

Recommendation: Description

Senior management oversight: Senior managers had to understand the risks that the
institution ran with its derivatives positions. This motivated a measure of risk that could be
applied uniformly across different trading businesses, without requiring detailed knowledge
of that business.

Mark-to-market for all trading positions: All derivative positions should be marked to
market, i.e. valued at their replacement cost. It was common practice at the time of the
report to use older accounting approaches, based on accruals, to value swap positions.
However, this was not regarded as adequate for risk management purposes, since it did not
reflect changes in market conditions.

Market valuation: Positions should be valued using appropriate adjustments so that the
value fairly reflects the likely sale price if the position were to be closed out. This should,
for example, reflect the bid-offer spread and the credit spread, if appropriate.

Revenue sources: Traders should measure, and thereby understand, the sources of revenue
in their positions, preferably broken down to risk component level.

Measure market risk: The recommendation specifically mentions “value at risk”, a measure
that would incorporate the following sources:
• Price or rate change
• Convexity
• Volatility
• Time decay
• Basis or correlation
• Discount rate.

Stress simulations: Derivatives positions should be subject to regular ‘what-if’ scenario
analysis. This should cover not just changes in market prices but also changes in liquidity.
Liquidity can affect the ability of the trader to realise the close-out price that has been used
to value the position.

Independent oversight: All derivatives trading activity should be monitored independently
within the organisation. This independent function should have a clear mandate to impose
the report’s principles on trading management, if required, and to monitor the effectiveness
of their adoption.

1 Other recommendations covered credit risk management and legal issues.

Table 2: Recommendations of the G30 report into Derivatives

Chew reviewed the debate that the G30 committee sparked, as central banks developed
mechanisms for the regulation of market risk capital (Chew, 1994). Stress scenarios
competed with internal models and the BIS model to win the approval of the central banks
as the prescriptive method of measuring market risk. It was a bad year for bond markets,
coming on the heels of the collapse of the ERM in the previous year and, still fresh in the
regulators’ minds, the stock market crash of 1987. The pressure to implement some form
of market risk capital allocation was clear, but the validity of a VaR number for capital
adequacy purposes was disputed. It was not until 1996 that the BIS would issue guidelines
for a VaR measurement that would be accepted for capital adequacy reporting (Basle
Committee on Banking Supervision, 1996).


1.4 Market Risk and VaR


The RiskMetrics Group’s Risk Management: A Practical Guide (Laubsch & Ulmer, 1999)
presents an overview of the common approaches to VaR today. The key approaches,
discussed in detail in the next chapter, are variance-covariance2, historical or scenario
simulation, and Monte Carlo. Laubsch & Ulmer compare the features of the
methodologies. From this comparison, we see that the variance-covariance methodology is
inadequate for non-linear portfolios, whereas historic simulation requires meticulous
collection of data, and Monte Carlo is highly computer-intensive. None of the approaches
is therefore suitable for low-cost3 measurement of option risk.

The second edition of JP Morgan's RiskMetrics technical document (JP Morgan, 1994)
presented a complete treatment of a VaR process, based upon a variance-covariance
approach. The document detailed the methodology for mapping assets into a model
portfolio, and the data that was required to support the methodology. The methodology
did not cover option price sensitivities. JP Morgan updated and improved upon this
document, and in the third edition (JP Morgan, 1995) options could be processed using the
delta-gamma (i.e. Taylor series) estimate, which made the VaR a chi-squared distribution,
or using structured Monte Carlo, which valued the position exactly, without the need for
cash flow mapping. Now in its fourth edition (JP Morgan, 1996)4, the methodology
specifies the mapping of the portfolio distribution to a transformation of a normal
distribution, or a simulation based on the Taylor series expansion. For some years, the
methodology provided something close to a complete practical handbook of risk
measurement. Many other authors have published in this area, but the RMG publication
has the advantage of being comprehensive for market risk, easily available and free.
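For reference, the delta-gamma (Taylor series) estimate mentioned above approximates the change in value of an option position for a change $\Delta r$ in a single underlying risk factor as follows (standard notation, my summary rather than a quotation from [TD4]):

\[
\Delta V \;\approx\; \delta\, \Delta r + \tfrac{1}{2}\, \gamma\, (\Delta r)^2 ,
\qquad
\delta = \frac{\partial V}{\partial r}, \quad \gamma = \frac{\partial^2 V}{\partial r^2} .
\]

With $\Delta r$ assumed normal, $\Delta V$ is a quadratic form in a normal variable, which is why the resulting distribution is related to the (non-central) chi-squared rather than remaining normal; the Johnson and Cornish-Fisher devices discussed later are ways of summarising that distribution from its moments.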

The authors delivered updates to the methodology for a number of years through regular
publications, RiskMetrics Monitor (JP Morgan, 1996 - 1999) and RiskMetrics Journal
(RiskMetrics Group, 2000-2001). The RiskMetrics Group recently brought the
methodology up to date with their publication, Return to RiskMetrics: The Evolution of a
Standard (Mina & Yi Xiao, 2001). The document describes changes to the methodology
since the most recent publication of the technical paper in 1997. The main points are the
unification of Monte Carlo and variance-covariance methodologies, through the use of
joint data sets, and a change in the cash flow mapping process. The document emphasizes
the use of simulation methods to model non-linear risk, mentioning that these can be
speeded up by modelling the price function of a complex derivative, perhaps using a
quadratic approximation. No reference is made to the technical document’s
computationally less demanding treatment of non-linear risk, by the use of Johnson
transformations to model portfolio sensitivities.

2 The term parametric, which Laubsch & Ulmer use as a synonym for variance-covariance, is avoided here, as other
authors have also referred to Monte Carlo as a parametric methodology.
3 Cost drivers are both compute cycles and the effort required to collate and clean data.
4 The fourth edition of the technical document is referenced so frequently in this thesis that it will be referred to as [TD4].

Lawrence and Robinson (1995) challenge RiskMetrics’ suitability as a risk measure, citing
the choice of a 95% confidence interval and the assumption of normality. Despite this, the
finance industry has accepted it as a de facto minimum benchmark for market risk
measurement. There is little work that examines the quality of the RiskMetrics outputs.
Hendricks (1996) compared moving averages, historic simulation and RiskMetrics
approaches to measuring Value at Risk for a foreign exchange portfolio. His conclusion
was that RiskMetrics gives a measure that is more responsive to the dynamics of the
market, but that simulation methods, which model the exact value of the portfolio over a
range of future price outcomes, ultimately give a better estimator of a confidence limit on
the portfolio P&L5. Alexander (1996) focussed on the comparison of volatilities estimated
using the RiskMetrics and GARCH approaches, suspecting that the RiskMetrics estimators
may have undesirable features. She found that the calculation for 28-day volatility exhibited
ghost features when significant events dropped out of the data set. This thesis will compare
the treatment of option positions in RiskMetrics using variance-covariance and simulation
approaches. It is more akin to the work of Hendricks than Alexander, in that it focuses on
methodology rather than data.

RiskMetrics has become a standard in the absence of any successful competing
methodology. Among the possible competition, Bankers Trust's RAROC 2020 is a
software solution with a methodology outside the public domain. RAROC, the Risk
Adjusted Return On Capital, is an incremental development of VaR. Actual portfolio
returns and VaR are tied together in a single measure of profitability. This has been
difficult to implement in finance, as it represents a significant change in the culture of the
trading room. The success of limits requires that they are transparent, which VaR
methodologies, by their nature, frequently are not. Nor is transparency the only issue.
Financial institutions have struggled to obtain a VaR number that may satisfactorily be

used as the basis of a limit measure, and therefore to restrict the activities of traders within
the risk appetite of the institution. For this to be effective, the traders must believe that the
VaR numbers fairly represent their risks, especially when compared to other trading
activities within the institution. Owing to inevitable intellectual, technical and budgetary
compromises made when implementing risk systems, VaR numbers will not necessarily
pass this test. These same problems would inhibit any attempt to adjust P&L to take
account of VaR. RAROC 2020 is worthy of note in particular because it incorporated
treatment of non-linear risks, via Monte Carlo simulation, at an early stage in the literature
(Falloon, 1995).

A reader seeking an anecdotal appreciation of the VaR process might refer to Jorion
(1996). He gives a good introduction to the subject of VaR, and includes entertaining case
studies of headline making losses in the financial community, such as Orange County. This
book is particularly good to understand the motivations for risk management and the
regulatory framework. It also includes a brief mention of the delta-gamma estimate for
non-linear risk.

1.5 Credit Risk


JP Morgan launched CreditMetrics in 1997. As with RiskMetrics, the CreditMetrics
methodology offers a framework and data for risk calculation. The focus of CreditMetrics
is credit risk, specifically the risk that an entity to which the portfolio is exposed will suffer
a credit rating transition or default. The approach generates credit scenarios by analysing
equity price movements and assuming a relationship between equity prices and transition.
This method is preferred to using credit transition data directly, as credit data is known to
be infrequently observed (i.e. only when companies default) and richer in the US than
elsewhere. By contrast, equity price data is freely available around the world and observable
at any frequency one chooses, down to the frequency of individual transactions.
CreditRisk+, an alternative methodology available at around the same time, uses a Poisson
distribution to model default events within sectors. These methods are very different, but
neither seems to have gained the standing of RiskMetrics, which is synonymous with
market risk measurement in many people's minds. One factor preventing this has been
timing. At the time RiskMetrics was launched, many institutions were struggling to
implement in-house methodologies and systems for market risk measurement, or perhaps

5 The lower limit is a VaR measure.

Page 18
Chapter 1: Introduction

had not started at all. However, for many years institutions have tracked credit exposures in
a limited way, so they have acceptable systems in place and less reason to adopt something
new.

1.6 Integrated Risk


The ultimate goal in some industry minds is to build an integrated risk management
function, measuring both credit and market risks. This has particularly been a focus for
software vendors, attempting to unify the different demands of market and credit risk
measurement. While many have succeeded in implementing market and credit risk systems
using the same data, Glass (1996) highlighted the reasons why an integrated risk measure is so
difficult, namely the different time perspectives involved in market and credit risk, the
requirement to run systems at transaction (counterparty) level for credit risk and the limited
benefits that may accrue in comparison to the implementation cost. Risk managers who
wish to compile risk-adjusted return measures across businesses that are subject to these
types of risk must develop a methodology and systems environment to overcome these
hurdles, as well as a convincing business case.

1.7 The UK Regulator


Regulatory requirements have developed in parallel to risk literature over the last few years.
The Capital Accord of 1988 laid down a regulatory framework for reporting capital
adequacy against credit risks. This covered important concepts, such as the risk of debt
arising from third world countries versus that of the developed world, and the need for
financial institutions to maintain Tier 1 (shares & recognised reserves) and Tier 2 (other
reserves and hybrid debt) capital, to protect against insolvency in the event of large scale
counterparty default. The accord followed the Latin American debt crisis of the early
eighties, and came at a time when swap trading was a relatively young discipline. For this
reason, the accord focused on capital requirements to protect against credit risk. From
1993, around the time of the G30 report, the Basle committee developed market risk
measures for the purposes of capital adequacy. An amendment to the accord, published by
the Basle committee in 1996, includes proposals for allocating capital against market risk
(Basle committee, 1996), reflecting the developing knowledge of these types of instrument
within the regulatory framework. The proposal includes a new category of capital, Tier 3
(subordinated debt), which can be set aside purely to protect against market risks.


Rather than put forward a standard methodology, in the style of RiskMetrics, for
measuring market risk, the amendment allows the use of the institution's own internal VaR
models. The Basle committee does go as far as recommending a minimum confidence
interval (99%) and risk horizon (10 days) to use in the models. This flexibility over the
detail of the model implementation, adopted in the UK as CAD2, allows different
institutions to craft VaR measures to suit their own risk profiles. This is not a loophole that
allows the institution to understate their risks. All VaR model recognition from the FSA is
dependent on feedback from backtesting exercises, in which VaR measures are compared
to actual P&Ls. The regulator sets limits for the number of times P&L can exceed the VaR
measure in a given time period without incurring additional capital charges. If a significant
risk shows up in back testing then the penalties can be severe, and could potentially lead to
the loss of model recognition altogether. Institutions can use Tier 3 capital against market
risk exposures, provided that this does not exceed 250% of the institution’s Tier 1 capital
that is allocated to support market risk. Tier 2 can be substituted for Tier 3, subject to the
same restrictions.
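For orientation, the internal-models capital charge under the 1996 Amendment takes broadly the following form (my paraphrase of the rule; the exact wording, and the separate specific-risk charge, should be checked against the Amendment itself):

\[
\mathrm{MRC}_t \;=\; \max\!\left( \mathrm{VaR}_{t-1},\; m_c \cdot \frac{1}{60}\sum_{i=1}^{60} \mathrm{VaR}_{t-i} \right),
\qquad m_c \ge 3 ,
\]

where the VaR figures are 99% confidence, 10-day measures and the multiplier $m_c$ is increased by a 'plus factor' of up to one as backtesting exceptions accumulate.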

The European Community (EC) reviews Basle committee reports and considers whether
the proposals should be incorporated into EC law. This has already led to the EC Capital
Adequacy Directive (CAD). Member countries must implement the EC directives (there is
no opt-out), but the implementation process is subject to different interpretations and the
legislative priorities of member states. The UK was the only member to implement CAD
by the deadline of 31/12/1995. Owing to the speed of adoption, the Bank of England had
to show leniency to banks that were unable to implement systems in time. Several banks
pooled funds to sponsor development of a solution by a reputable consultancy firm, but
then all found themselves unable to satisfy the deadline of the directive when the software
was delivered late.

In the UK, the FSA, and the Bank of England before them, have set down a number of
procedures for calculating market risk for regulatory reporting purposes, which certain
regulated institutions must adhere to. In 1995 the Bank of England issued the ‘Green
Book’, formally known as Draft Regulations To Implement the Investment Services and Capital
Adequacy Directives (Bank of England, 1995), which contained new requirements for capital
adequacy reporting of market risk. The bank adopted a duration ladder/delta approach for
interest rate risk, with additional capital buffers for option positions. The bank additionally
issued notes for guidance to assist with the buffer approaches and more sophisticated
alternatives. The FSA has revised the whole supervisory policy documentation (a
replacement for the Green Book). The favoured method for assessing regulatory capital is
now the Scenario Matrix, whereby the portfolio is subjected to a number of scenarios. The
scenarios are built up as a matrix of price and implied volatility values. The central point of
the matrix is the current level of price and implied volatility. Off-centre elements represent
revaluation of the portfolio with a shift in price, volatility or both. The worst revaluation
outcome in the matrix is taken as the capital requirement.
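A minimal sketch of the scenario-matrix calculation described above is given below. The grid sizes, shift ranges and the toy delta-gamma-vega revaluation are assumptions chosen for illustration, not the regulator's prescribed parameters.

```python
import numpy as np

def scenario_matrix_capital(revalue, base_value, price_shifts, vol_shifts):
    """Worst revaluation loss over a grid of price and implied-volatility shifts."""
    worst = 0.0
    for dp in price_shifts:
        for dv in vol_shifts:
            pnl = revalue(dp, dv) - base_value  # P&L of the shifted scenario
            worst = min(worst, pnl)
    return -worst                               # capital requirement = worst loss in the matrix

# Hypothetical option book summarised by delta, gamma and vega sensitivities.
delta, gamma, vega = 50_000.0, -4_000_000.0, 12_000.0
revalue = lambda dp, dv: delta * dp + 0.5 * gamma * dp ** 2 + vega * dv

price_shifts = np.linspace(-0.08, 0.08, 9)  # assumed +/-8% relative price moves
vol_shifts = np.linspace(-25.0, 25.0, 5)    # assumed +/-25 vol-point moves
print(f"Scenario matrix capital: {scenario_matrix_capital(revalue, 0.0, price_shifts, vol_shifts):,.0f}")
```

The nested loop makes the cost of the approach explicit: it grows with the number of grid points and with the cost of each revaluation, which is why sensitivity-based approximations are attractive for large books.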

The latest publication from the regulator is the 2001 amendment to the Capital Accord.
This recognizes and addresses parts of the 1988 accord that now seem weak, inequitable or
dated. It includes revised treatment of some types of debt in the existing standardised
approach, plus two forms of internal ratings-based approaches to credit exposure
assessment. It also makes provision for setting capital against the operational risks of a
firm. We should see research interest picking up around the subjects of operational risk,
which is dealt with only sketchily in the proposals, and internal ratings systems. Internal
modelling of credit risk itself will not receive any kind of boost from the regulator, since it
is not permitted for reporting capital adequacy on credit exposures.

1.8 Non-Linear Risk Measurement


One of the most challenging aspects of a risk management process is the way that it
captures the risks of an option portfolio. RiskMetrics first proposed the Cornish-Fisher
polynomial approximation to the percentile. In this approach, the tail of the distribution is
modelled as a polynomial function. The fourth edition of the Technical Document
contained a method based on Johnson curves. This system of curves can be fitted to the
first four moments of an option portfolio’s distribution, approximated by a quadratic form,
to derive a Value at Risk number. Johnson curves have proved to be unsatisfactory to
RiskMetrics users (e.g. Mina and Ulmer, 1999) and the group currently recommends a
simulation approach. Li (1999) shows how the theory of estimating functions can also
construct a VaR estimate from the first four moments of an option distribution, or indeed
any portfolio distribution with excessive skewness or kurtosis. Mina and Ulmer (1999)
used a Fourier inversion of the moment generating function to obtain the portfolio VaR,
and evaluated the accuracy and speed of execution of standard RiskMetrics, Cornish-Fisher
and two forms of Monte Carlo against this measure. Full Monte Carlo simulation is
distinguished from Partial Monte Carlo, in which the price function of complex derivatives
is modelled with a quadratic approximation. The paper concludes that the Partial Monte
Carlo and Fast Fourier Transform offered the best trade-off of speed versus accuracy.
Finally, Britten-Jones and Schaefer (1999) use the first four moments about zero of the
portfolio pdf. They express the change in portfolio value as the sum of a set of non-central
chi-square variables, developing a system of equations that can be used in conjunction with
chi-square tables to derive the VaR.
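To indicate the flavour of these moment-based approximations, the Cornish-Fisher adjustment to a normal percentile can be written as follows (the standard four-moment expansion from the statistics literature, not a formula quoted from the RiskMetrics Monitor):

\[
\tilde{z}_{\alpha} \;=\; z_{\alpha}
+ \frac{s}{6}\left(z_{\alpha}^2 - 1\right)
+ \frac{k}{24}\left(z_{\alpha}^3 - 3 z_{\alpha}\right)
- \frac{s^2}{36}\left(2 z_{\alpha}^3 - 5 z_{\alpha}\right),
\]

where $z_{\alpha}$ is the standard normal percentile, $s$ the portfolio skewness and $k$ the excess kurtosis; the VaR is then read off as $\mu + \tilde{z}_{\alpha}\,\sigma$. The Johnson approach serves the same purpose but fits an entire distribution, rather than a single percentile, to the first four moments.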

1.9 Motivations and research objectives


Value at Risk has been a developing science for more than a decade. The regulators around
the world apply pressure to financial institutions, large or small, to provide Value at Risk
measures as part of their regulatory returns. The cost of developing an internal
methodology is high. Many smaller financial institutions find that the cost of compliance
with the regulatory regime is similar to larger institutions with deeper pockets. For this
reason, they may turn to off the shelf methodologies built in to software packages. Such
institutions must be wary of implementing an external methodology that is inappropriate
for the types of risk. These risks may include exposure to options, financial instruments
that present multiple dimensions of risk and headaches for the risk manager. We take the
most popular methodology, RiskMetrics, in its most popular form, variance-covariance,
and assess its suitability for risk measurement of interest rate options. In particular, we
examine the Delta-Gamma-Johnson extension to the variance-covariance methodology,
and ask whether this approach will provide risk measures consistent with the
computationally more costly full-revaluation simulation approach. The framework used to
produce this result can in fact be used more generally, for any portfolio for which a
variance-covariance matrix can be derived.

RiskMetrics has dominated the Value at Risk literature, as a central reference point for
academic interest. Much of the research published on the methodology has focussed on
the techniques used to collect the data. Very little has been published on the delta-gamma-
Johnson method within it, and no research known to this author has previously looked at
the implementation of the delta-gamma-Johnson method in such detail.


1.10 Organisation of the thesis


The structure of the rest of this thesis is as follows:

The first part of the thesis sets the context for our study of RiskMetrics. In Chapter 2, we
examine the development of risk management tools, including Value at Risk, over the last
25 years. We see how early measures, such as notional limits, were found to be inadequate
and gave way to loan equivalents and basis point values. These in turn have their own
limitations, which have led to the development of Value at Risk. We complete Chapter 2
with a detailed review of Value at Risk processing. We see how a financial institution
gathers together its portfolio data, market data and reference data. We see how market data
is used to generate scenarios and risk factors. We outline the three primary methodologies
for measuring Value at Risk: variance-covariance, historic simulation and Monte Carlo.

The second part of the thesis introduces the RiskMetrics methodology. In Chapter 3, we
see how the methodology has developed, from a simple variance-covariance method with
data gathering, to a sophisticated blend of all the principal VaR methodologies, with a rich
data set. In Chapter 4, we take a first look at the gaps between the RiskMetrics model and
typical portfolios of financial institutions.

The third part of the thesis examines the challenges of non-linear risk measurement in
more detail. In Chapter 5, we look in detail at the measurement of non-linear risk using the
RiskMetrics Johnson modelling method. We present the original work in the thesis, in
which we propose a procedure that can be used to assess the robustness of the RiskMetrics
variance-covariance calculations for non-linear Value at Risk. We demonstrate the use of
the procedure on some test transactions and a portfolio.

In Chapter 6, we determine what conclusions we can draw from the results of our
experiments, and outline further work that could be carried out to develop the research
topic.

2 VALUE AT RISK CONCEPTS

2.1 Introduction
Value at Risk (VaR) has been an important component of financial risk management for
five years. It has become commoditised, such that VaR systems solutions can be bought
'off the shelf'. It has become enshrined within the financial regulations of the world’s
banks. In this chapter, we review the evolution of risk management practice that led to
VaR. Risk management is defined as the process of monitoring the risks that a financial
institution is exposed to, and taking action to maintain this exposure within levels set by
the board’s risk appetite. For this research, we define Market Risk as the uncertainty in the
close-out value of an asset or liability as a result of changes in market prices. This chapter
looks at:

• early attempts to limit exposure to market risk, which specify the maximum
notional value that may be held in particular types of deal;
• counterparty limits, designed to limit the exposure to a financial institution or
group of institutions;
• concentration limits, which view the country or industry as the risk, rather than the
individual counterparty, and limit the exposure to that;
• reactive limits, such as stop loss limits, designed to limit the potential for loss on a
position;
• market risk limits based on portfolio sensitivities, such as Basis Point Value (BPV)
limits, which limit the exposure to specific market risk scenarios, such as a shift in a
yield curve (a minimal definition of BPV is sketched after this list).
All these developments in limits are documented as context for Value at Risk.
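As an illustration only (my notation, not the thesis's), the Basis Point Value of a position with value $V$ and yield $y$ is the change in value for a one basis point move in the yield:

\[
\mathrm{BPV} \;=\; \frac{\partial V}{\partial y} \times 0.0001 \;\approx\; -\,D_{\mathrm{mod}} \times V \times 0.0001 ,
\]

where $D_{\mathrm{mod}}$ is the modified duration of the position. A BPV limit caps the loss a desk may show for a one basis point shift in the relevant curve, typically applied per maturity bucket as well as for the curve as a whole.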

VaR is defined as the portfolio loss that will not be exceeded with a given level of
confidence, over a given trading horizon. This is not an absolute limit on loss and must not
be read as such. In this chapter, we will examine the inputs to VaR: trade data, static data,
market data and instrument data. The chapter also considers the problems that can occur
when putting this data together. We describe different forms of processing that can be
used to generate a VaR number, using the common classifications of variance-covariance,
historic simulation and Monte Carlo. The text assesses the differences in the way they
process data, the different data requirements and the qualitative requirement for
computational power. Also, the description highlights circumstances that lead to favouring
one approach over another, the type of market (normal or non-normal), and the type of
instrument (linear or non-linear).
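The definition above can be stated formally (my notation, consistent with but not quoted from the thesis): for a confidence level $\alpha$ and horizon $h$,

\[
\mathrm{VaR}_{\alpha,h} \;=\; -\inf\left\{\, x \in \mathbb{R} \;:\; \Pr\!\left(\Delta V_h \le x\right) \ge 1-\alpha \,\right\},
\]

where $\Delta V_h$ is the change in portfolio value over the horizon. Under the variance-covariance assumption that $\Delta V_h$ is normally distributed with standard deviation $\sigma_h$, this reduces to $\mathrm{VaR}_{\alpha,h} = z_{\alpha}\,\sigma_h$, with $z_{0.95} \approx 1.645$.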

This chapter also looks at the outputs of VaR, when using each of the approaches
described above. Some approaches will offer a consolidated number only, but others offer
more insight into the sources of risk. We outline how financial institutions can use the
backtesting approach to validate their VaR process, which consists of data, models and
procedures.

Finally, we assess VaR implementations in the public domain, notably RiskMetrics, and
give the reasons why this is of particular interest in this research.


2.2 A history of derivatives trading


Stock options have been around for a long time, but the recent history of derivatives
shows an extraordinary period of innovation in the financial markets:

C17 AD Invention of shares.

1800s Private trading of share options.

1862 Futures traded on Chicago Board of Trade (CBOT).

1890s Commodity option contracts traded on CBOT.

1900s OTC stock option market develops on Wall St.

1934 Commodity options banned on CBOT.

1973 CBOE opens its doors to individual stock option trading. Black-Scholes and
Merton papers.

1981 Interest rate swap and currency swap agreements initiated in London by
Salomon Brothers.

1982 Stock index futures contracts established by Value Line.

1984 Stock index option contracts introduced by CBOE.

1987 Nikkei put options packaged by Wall Street firms.

1988 OTC options on individual stocks packaged by Wall Street firms.

1989 Bankers Trust introduces equity swaps.

1993 Global Derivatives Study Group issues report on the use of derivatives.

Table 3: A history of derivatives trading (sources: Collins, 1998 and Dunbar, 2000)


2.3 A history of Risk Management


A history of risk management is really a history of the financial industry. In many cases,
significant losses at an institution have been due to unauthorised or reckless activities by an
employee, rather than failures in risk management. However, some of the largest losses
have been attributable to misunderstood credit and market risks. The following table sets
out some significant events from the past thirty years that have shaped the risk
management functions we see today.

Date Risk Management Event

Seventies Latin American exposures.

Early Eighties Latin American debt crisis.

Eighties Savings & Loan industry (US building societies) lose $150bn in mismatched interest rate exposures – many go bankrupt.

1987 Stock market crash.

1988 Capital Accord.

1991 Counterparties lose $900m on Hammersmith & Fulham Council swap transactions declared illegal by the House of Lords.

1992 $14bn of taxpayers’ money is spent shoring up the pound in the ERM, but ultimately speculators force the pound to float freely.

1993 G30 report into derivatives trading.

1993 Metallgesellschaft loses $1.3bn through an American subsidiary and is bailed out by creditors.

1994 Orange County goes bankrupt after the county treasurer, Bob Citron, leverages a $7.5 billion portfolio and loses $1.64bn (after accumulating additional revenues of $750m for the county in previous years).

1994 Procter & Gamble lose $157m on differential swap trading with Bankers Trust.

1994 JP Morgan’s RiskMetrics.

1995 Barings loses $1.3bn through positions taken by Nick Leeson and goes bankrupt.

1995 Daiwa securities recognises losses totalling $1.1bn, accumulated by a trader at its New York offices, and is forced to close its US operation.

1996 Amendment to the Capital Accord for market risk.

1997 NatWest recognises losses of £90.5m on swaptions books deliberately mis-marked by two traders.

1998 Asian debt crisis.

1998 Russian debt crisis.

1998 Long Term Capital Management loses $3.5bn of investors’ money and is bailed out by a consortium of banks.

1999 Introduction of the Euro eliminates currency risk between participating countries.

2002 Enron Power Trading files for bankruptcy after recognising a series of losses and debts that had previously been concealed in off-balance sheet deals with partnerships.

2002 A former US branch manager for Lehman Brothers brokers is charged with stealing $40m from clients’ funds.

2002 Allied Irish Bank recognises foreign exchange losses of $691m over five years, when a trader at American subsidiary Allfirst recorded fictitious options deals as hedges for real spot and forward FX contracts.
Table 4: Significant Risk Management Events (Source: Jorion (1997))

2.4 Risk in the Financial Markets


It is normal for a financial institution to take risks. Calculated risks make financial profits,
or returns, for its shareholders. Risk introduces uncertainty in the level of the return, for
which the institution will charge a risk premium. Two well-understood sources of risk for a
financial institution are market risk and credit risk:

• Market risk is defined as the uncertainty in the close-out value of an asset or liability as a result of changes in market prices.
• Credit risk is the uncertainty of financial receipts that arises from the possibility that a party to a contract may be unable to perform their obligations.

To illustrate the difference between these risks and their associated returns, consider two
divisions of a bank: a traditional banking business, and a proprietary trading arm. The
banking division works in just the same way as a retail bank. It holds accounts for its
clients, liaising with other banks to effect the transfer of funds between the clients'
accounts and other accounts around the world as required. This service is usually fee based.
The bank will place any excess of funds held in the account on the money markets
overnight to earn interest. A part of this income will be paid to the account holder.
Similarly, the bank will borrow from the market to cover any shortfall on the account, and
charge a margin on the interest cost to the client. In practice, the bank will manage liquidity
across all borrowers and lenders, resulting in larger profits for themselves. The bank may
also provide a long -term loan to the client, again charging the client a margin on its own
funding costs. Some banks will also deal in the market on behalf of a client, taking a margin
for themselves but leaving the market exposure with the client.

The key return for a banking division is Net Interest Income. This is the margin that the
bank makes by borrowing money from the market at interbank rates and lending to
customers at higher rates or, conversely, holding deposits for the clients and placing them
on the market. The bank is acting as an intermediary, adding its own margin on to the
transaction to secure a profit for the bank's shareholders. The division will also have fee
based income, which will have associated direct costs. The key risk that the division takes is
credit risk. Banking positions are usually held to maturity, so fluctuations in market rates do
not affect the income stream. However, if the bank has lent money to a client who then
becomes insolvent, the bank will lose a high proportion of the monies due to it. The bank
is also exposed to liquidity risk. This occurs when it makes payments from a client account

based on an expectation of a receipt later in the day. This can impact the bank's profit on
the transaction, in the extreme case that the receipt never occurs. It can also impact the
bank’s ability to cover further transactions with other counterparties, owing to a shortage
of funds in the account, even if the receipt is simply delayed. This can expose the bank to
charges and interest payments. Typically, in cases where the receipt never occurs, the bank
will recover some of the outstanding value, dependent on the seniority of the debt and the
other claims on the client's assets, but the level of recovery varies widely (JP Morgan 1997).

A proprietary trading arm is a very different business. The division uses the bank's own
shareholder capital to obtain credit lines with counterparties in the markets. The bank then
uses these lines to take positions in the markets, betting on changes in market conditions
which will lead to increases in the market value of their positions. Proprietary trading is a
high risk, high return business. Without effective hedge strategies, this business will suffer
from volatile profits. This volatility is a concern for many investment banks, which may
rely on proprietary trading for 60% or more of their total profits.

The key returns of the proprietary trading arm of a bank will be Net Interest Income (NII)
and Capital Gains. Net interest income is applicable to interest bearing positions which are
held to maturity and funded by other interest bearing instruments. This type of accounting
for returns is becoming less common, as market turnover increases and positions are rarely
held to maturity. The important measure of return then becomes the capital gain, measured
by the change in the close-out value of the positions, together with any net cashflow
income or expense on the portfolio. The business is subject to market risks and credit risks.
Credit risk is present because the bank is still exposed if the counterparty or obligor6 fails
to perform. In contrast to the banking division, the amount of the exposure is not
necessarily the nominal value, particularly on derivative contracts. Market risk is the main
risk, as it is exactly this uncertainty from which the division hopes to profit. Again, the
nominal value of the contract will be an input into the exposure, but it will not be the only
consideration.

In proprietary trading, the distinction between market risk and credit risk becomes blurred.
A loan to a non-sovereign counterparty will include a credit spread, which rewards the
lender for taking a higher risk than with sovereign debt. The London interbank offered rate

6 The obligor is a more general description of a debtor that includes issuers of stock, bonds and other financial
instruments.

(LIBOR) is calculated 7 from the offered inter-bank deposit rates supplied by a panel of
contributor banks8. The average credit quality of the banks is taken into account in this
rate, so it is possible to calculate it as a spread over equivalent maturity treasuries. The rate
is used as the basis of valuing the cash component of positions for many trading
operations. Movement in this rate is usually attributed to market risk, but when Barings
became insolvent, credit spreads on LIBOR widened dramatically. This was an example of
a realization of an extreme market risk event as a result of a credit event.

This research will focus on market risk, and in particular the risk of certain types of
derivatives trades. This research is interested specifically in the sophisticated methods used
to manage the risk of derivatives, and one kind of derivative in particular: the option. The
ability of the risk taker to radically alter the risk profile of a portfolio with options trades,
and the new risks that this leads to, make it the most interesting financial instrument from
a risk management perspective.

2.5 Concentration and diversity


A risk manager is concerned about the risk profile of the institution’s portfolio, as well as
individual positions. Portfolio risk measures are built up from risk sensitivities of the
individual positions, yet the positions themselves do not tell the whole story. The risk
manager must also have a way of modelling the concentrations, diversities and hedges in
the portfolio. Risks are concentrated if a number of risks are related in such a way that all
the risk factors tend to move together. This can happen when a trader buys a number of
shares in the same industry sector, such as pharmaceuticals. The value of each share is
strongly related to all the other shares, since news about the sector is likely to affect all
shares equally. News that one pharmaceutical company has got approval for a new drug
may push up prices for the whole sector, if market sentiment is that the approval signals
potential successes for the other companies in their outstanding approvals. Risks are
hedged if they are related in such a way that they tend to move in opposite directions. One
example might be a long stock position that is hedged by a short position in the index of
the market the stock trades in. For instance, a trader may buy BT shares but short FTSE
futures. These positions do not exactly hedge each other, both because the FTSE is

7 The calculation is an arithmetic mean of the middle two quartiles of the contributed rates for a given currency and
maturity
8 Members of the panel are chosen for their expertise in a particular currency, activity in the London market and credit
standing. There is a different panel for each currency quoted.

composed of more than just BT shares, and because there is a timing difference between
holding cash (stock) and shorting futures. However, the risk manager must recognize the
partial hedge when managing the risk of the portfolio. Risks are diversified if movement in
each risk factor tells us nothing about movement in other risk factors. Portfolio theory
strongly demonstrates the benefits of diversity in maximizing the return for a given level of
risk, or minimizing the risk for a given return.

The usual way for the risk manager to take account of concentration, hedging and diversity
across risk factors is to obtain correlation data for these factors. The correlations tell the
risk manager how to combine the risks calculated for individual positions, or how to build
a set of possible movements in the factors that are consistent with previous observations.
Concentrated risks will have a positive correlation. Hedged risks will have a negative
correlation. Diversified risks will have a near zero correlation, on average.

2.6 Risk Management


Most banks do not want to exhibit high levels of volatility in their profits. This can be very
unsettling for shareholders, and a sustained period of significant losses can threaten the
commercial viability of the organization. For this reason, banks try to moderate the level of
risk, by making sure that a particular position will not lose an unpalatable amount in a
worst case scenario, and by diversifying their risks across different types of positions. The
next section examines different methods of limiting exposures. It also shows how risk
managers can look for concentrations in a portfolio that indicate excessive exposure to a
particular group of correlated risks.

2.6.1 Exposure limits


The coarsest control on exposure is the nominal exposure limit. This limit sets the
maximum allowable exposure to risks, using a framework such as this:

Limit on Risk Controlled

Counterparty exposure Counterparty default

Industry/sector exposure Correlated counterparty defaults and loss of market value of positions due to recession in an industry

Country exposure Correlated counterparty defaults and loss of market value of positions due to recession in a country

Settlement & Liquidity exposure Counterparty default, failures in payment processing

Instrument exposure Loss of market value of positions due to market movements

Table 5: Taxonomy of limits

The risk manager then measures the utilisation of the limit by a portfolio. This framework
for nominal exposure limits is applicable to more sophisticated measures, such as present
value, loan equivalents or even VaR. However, in their basic form, these limit utilisations
require a great deal of skill to be interpreted correctly. For credit risks, the nominal
exposure and a knowledge of the likely recovery rate will tell the manager the level of risk
being run. Further market experience of default rates will give the manager a good intuition
for the likely amount which may be lost in a given time period. For market risks, market
experience of the sensitivity of position values to market movements for different
instrument types will again give an intuition for the risk, although it may be harder to judge
the actual amounts which are at risk. The limit utilisations are not easy to interpret or
compare without this intimate knowledge of the risks to which they apply. It is particularly
difficult to compare nominal limit utilisations on derivatives trades, for which the risks may
be a function of the nominal, but not a direct linear function. This would make it hard to
assess capital at risk between different types of business, for instance a bond trading desk
versus convertible arbitrage. Even within a single trading activity, limit utilisations for

market risk versus credit risk would offer little insight into which utilisation posed a greater
threat to the capital base of the institution. Both of the trading activities mentioned above
raise further issues with regard to netting of exposures. The bond traders will often fund
their positions through repo trades. The convertible arbitragers may also have repoed their
positions to enhance the yield, and additionally may have taken a short position in equities
to hedge the risk embedded in the equity options they hold. Some mechanism within the
limit structure has to recognise that some utilisations reduce risk, rather than increase it.
Such netting rules are difficult to implement in a nominal-based structure. The problems
inherent in the nominal exposure approach can be summarised as:

• reliance on management experience to understand the limits


• lack of systematic treatment across different trading activities
• difficulty of comparison across business units
• no systematic netting process
• cannot be used for assessing capital adequacy

2.6.2 Weighted Exposures


To address the weaknesses of nominal exposure limits when assessing capital adequacy, the
regulators have devised weights based on the instrument that is giving the exposure. The
weights attempt to capture the comparative riskiness of the instruments given equal
nominal exposures. The 1988 BIS Capital Accord gives weights for calculating capital
requirements to cover potential credit losses. The Accord specifies a 0% weighting for
OECD (Organization for Economic Co-operation and Development) sovereign debt, a
20% weighting for claims on OECD banks and a 100% weighting for LDC (less developed
country) sovereign debt (Basle Committee on Banking Supervision, 1988). The judgement
expressed here is that OECD sovereign debt is effectively free of credit risk, whilst OECD
banks are five times less risky than LDCs. This assessment can take into account the
recovery rates in the case of default for each class of debt. When companies go into
default, bondholders often get paid a percentage of the nominal of their bonds, depending
on the funds that are raised by selling off the institution’s assets. These empirical recovery
rates reduce the actual credit exposures inherent in the bonds.

Although the framework is crude, one problem with the nominal exposure method is
neatly avoided with the use of weights. The market experience necessary to interpret

traditional exposure limits is being captured in the weighting framework. The framework
provides the regulator with a means for assessing capital adequacy for different trading
activities on an equal basis.

Although this approach is an improvement on nominal exposure limits, there is very little
granularity in the weighting structure. All OECD banks do not carry the same level of risk.
This is the motivation behind recent BIS discussion papers (Basle Committee on Banking
Supervision, 2001). The BIS is actively engaged in an industry consultation aimed at
revising the 1988 accord. A new 50% risk bucket will be used for certain corporate
exposures, and a 150% bucket has been introduced for particularly risky assets, for instance
low-rated sovereigns or corporates, or funds past due that have not been written off. This
bucket can also be used for assets where the volatility of recovery rates is high. The BIS has
also proposed two new internal ratings-based approaches, for which regulated entities can
take greater control of the capital charge calculation. Participants in the approaches will be
able to estimate their own probabilities of default, and then (depending on competency)
either estimate the loss given default themselves, or use a metric supplied by the regulator.
Finally, the total capital charge will be adjusted to reflect the ‘granularity’ (diversity) of the
portfolio, with respect to a reference portfolio. This weighting scheme offers more
flexibility than a bucketing system, but is not expected to reduce the overall capital
adequacy requirements of the industry by a significant margin.

Since the Accord was introduced in 1988, much of the work of the committee has
focussed on allowable netting. One of the advantages of the weighting approach is that, by
converting exposures to common units of risk, some degree of netting recognition is
possible. The current proposals allow for the widest netting yet, with recognition of a wide
range of collateral, netting of exposures within a counterparty, adjustments for currency
and maturity mismatches, and recognition of credit derivatives.

The regulators initially adopted a similar weighting based approach for market risk (Basle
Committee on Banking Supervision, 1993). This is still the standard approach for market
risk exposures, although more sophisticated treatments are available. Interest rate
exposures are mapped to a duration ladder. The duration of a set of cashflows is, roughly
speaking, the maturity point to which the cashflows are exposed to interest rate risk, on
average, assuming a flat yield curve9. Each rung on the ladder then receives a weighting to

9 For a comprehensive discussion of duration, see Fabozzi (1993).

reflect the volatility of the interest rates for that duration bucket. Within the buckets,
exposures are netted.
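
As a small illustration of the duration concept (a minimal Python sketch, not taken from the thesis; the bond and the flat yield below are hypothetical), the Macaulay duration of a set of cashflows can be computed as the present-value-weighted average payment time:

    def macaulay_duration(cashflows, times, flat_yield):
        """Macaulay duration: PV-weighted average time of the cashflows under a flat yield curve."""
        pvs = [cf / (1 + flat_yield) ** t for cf, t in zip(cashflows, times)]
        total_pv = sum(pvs)
        return sum(t * pv for t, pv in zip(times, pvs)) / total_pv

    # Hypothetical 3-year, 5% annual coupon bond (100 nominal) on a 4% flat yield
    print(macaulay_duration([5.0, 5.0, 105.0], [1, 2, 3], 0.04))   # roughly 2.9 years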

Weighted exposures are still very important today, because they are central to the
regulatory reporting process for many institutions. However, they do not provide
information that a trader could use to manage a portfolio. The problems inherent in the
weighted exposure approach can be summarised as:

• reliance on some management experience to understand the limits


• difficulty of comparison across business units
• limited systematic netting process

2.6.3 Basis Point Value (BPV) limits


The sophistication of internal models for market risk stems from the unsatisfactory results
from using instrument exposure limits. The allocation of exposure limits is largely
dependent on what the management perceive the risks of the different trading instruments
to be. There is no direct, mathematical link between the limit structure and the losses that
the bank is trying to restrict. As the market becomes more sophisticated, new, exotic
instruments are introduced, which management do not know how to assess, or that behave
in unpredictable ways.

There are two motivations for measuring the risk on a position with more precision.
Firstly, banks need to make sure that they have adequate capital set aside to cover adverse
market movements on their positions. Secondly, given that the capital base of the bank is
limited, managers want to allocate capital to trading strategies that generate comparatively
high profits whilst taking comparatively low risks. Given two competitive business units,
they cannot favour the more profitable one without also assessing the risks the unit is
taking. The introduction of more complex derivative instruments has made comparison of
profits on an equivalent basis progressively more difficult. It might be relatively straightforward to compare the profits from trading in stocks to bond trading, but certainly not
easy to compare convertible arbitrage to exotic interest rate options.

To put this on a more mathematical basis, management introduced limits, which measured
the responsiveness of the instrument to a change in market conditions. Basis Point Value
(BPV) limits express the maximum exposure to a parallel shift in the interest rate curve of

one basis point. This limit is much easier to interpret than a simple exposure based limit:
the limit is directly related to the actual gains or losses that the position might incur. Risk
managers and trading management can put in place analogous measures and limits for
other dimensions of the portfolio exposure, such as non-linear options risk (gamma),
option volatility risk (vega) and risk in other markets (share equivalents/delta). Limits can
also be applied at various maturity points for bucketed sensitivities. These dimensions of
risk exposure are a fundamental feature of the portfolio that we will refer to as risk
dimensions. The weaknesses of the BPV limit methodology are:

Market experience needed to estimate size of shifts: Cash impact is the product of BPV and a market movement - the BPV itself does not tell you anything about the volatility of the market and therefore the uncertainty in the future portfolio value.

Portfolio diversity is not rewarded across risk dimensions: A trader may be over-long in one risk dimension and over-short in a correlated dimension. The net effect may be that the trader is trading within the risk appetite of the portfolio, but this is not captured by the BPV risk measures.

Table 6: Features of BPV Limits
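
To make the BPV measure itself concrete, the following Python sketch bumps a flat yield curve by one basis point and reprices a hypothetical bond; a production system would bump the full curve and reprice every position with its own valuation model:

    def bond_price(face, coupon_rate, flat_yield, years):
        """Price a simple annual-coupon bond off a flat yield curve (hypothetical instrument)."""
        coupon = face * coupon_rate
        pv = sum(coupon / (1 + flat_yield) ** t for t in range(1, years + 1))
        return pv + face / (1 + flat_yield) ** years

    def bpv(face, coupon_rate, flat_yield, years, shift=0.0001):
        """Basis point value: change in value for a one basis point parallel shift in the curve."""
        return bond_price(face, coupon_rate, flat_yield + shift, years) \
               - bond_price(face, coupon_rate, flat_yield, years)

    # BPV of a hypothetical 10m notional, 5% coupon, 5-year bond at a 4.75% flat yield
    # (a negative number: the position loses value as yields rise)
    print(bpv(10_000_000, 0.05, 0.0475, 5))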

2.6.4 Stop Losses


A stop loss limit benefits from clarity through simplicity. The bank expresses a limit in
terms of the maximum loss for which it has an appetite, given the importance of the trade
to the business it does. The trader will close the position if it accrues a loss of, say, $50,000.
The stop loss limit is still crude, as the allocation of the limit is largely intuitive, based as it
is on market experience and appetite to do the business. Often, once a limit is breached,
losses may exceed the stop loss by quite a margin, even if the breach leads to immediate
action to unwind the position, which may not be the case in practice.

The regulatory corollary of a stop loss limit is the pre-commitment approach. The Federal
Reserve Bank of New York has recommended the pre-commitment approach for setting
aside capital against market risks. A bank states its appetite for risk in advance of trading,

by stating an amount that losses will not exceed. There are strong financial incentives for
the bank never to exceed the amount.

2.7 Value at Risk


Banks ultimately need limit structures to restrict the size of the losses that the trading will
incur. It makes sense to set limits in terms of the maximum loss the bank wishes to incur,
as with the stop loss, but to relate the usage of the limit to the portfolio of outstanding
trades, as with the BPV limit.

None of the limits discussed so far takes into account the dynamics of the market. This
may be appropriate in markets where the volatility moves in fairly slow (economic) cycles,
such as the credit market. If the organisation withstood the losses arising from an exposure
of $1bn to B rated corporations last year, it is likely that it will be able to do so again this
year. This does not mean that the expected losses are static over time, only that they
change so slowly that a quarterly review of limits would be adequate. It may take several
months of recession for credit spreads to widen appreciably, notwithstanding the
occasional Barings-style default.

Interest rate volatility changes on a much faster timescale, sometimes overnight. While an
exposure to $1bn of futures contracts seemed palatable last week, this week $750m might
be the most the bank should take on. The difference between last week and this week may
be the perceived volatility of the market, i.e. the magnitude of likely moves in rates and
prices. This volatility may vary along price curves, and movements in prices may only be
partially correlated. The bank must come to some opinion of the sort of market
movements that it expects. Often this is based on historic price data, but can be modified
to respond to periods of high market volatility. In RiskMetrics, this is achieved by
weighting the most recent price movements more highly than previous movements. It may
also be possible in some risk management systems for the risk manager to overwrite the
historic volatility used for the risk measure with an estimated measure that reflects current
market sentiments.
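
By way of illustration, the Python sketch below (hypothetical return series) shows the kind of exponentially weighted volatility estimate used in RiskMetrics, with a decay factor of 0.94 for daily data; the precise recursion and seeding an institution uses may differ:

    import numpy as np

    def ewma_volatility(returns, decay=0.94):
        """Exponentially weighted volatility: recent observations carry more weight.

        Recursion: sigma2_t = decay * sigma2_{t-1} + (1 - decay) * r_t^2
        """
        returns = np.asarray(returns, dtype=float)
        sigma2 = returns[0] ** 2              # seed the recursion with the first squared return
        for r in returns[1:]:
            sigma2 = decay * sigma2 + (1.0 - decay) * r ** 2
        return float(np.sqrt(sigma2))

    # Hypothetical daily returns for a single risk dimension
    print(ewma_volatility([0.001, -0.004, 0.002, 0.007, -0.010, 0.003]))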

The bank actually wants to limit the amount of loss it can incur. The Value at Risk is
defined as the maximum likely loss which will be incurred at a given level of confidence for
a given time period. It should be noted that losses greater than the VaR can be incurred.
Actual losses would be expected to exceed a 95% confidence limit overnight loss roughly

one day in 20. The risk manager must bear this in mind when setting the confidence limit
for the Value at Risk treatment: a limit of 95% might be appropriate for setting exposure
limits, but a limit of 99.95% (exceeded one day in 8 years) may be more appropriate for
setting aside capital. Such internal models are generally more sophisticated than regulators’
frameworks, allowing the greatest flexibility for calculating and netting risks. The UK
regulator has adopted the Basle committee proposals to allow the use of internal risk
models for computing the capital requirements of market risk exposures (Basle Committee on Banking Supervision, 1996). The BIS stipulates a 99% confidence limit, 10-day VaR, with
a multiplier of 3, for capital adequacy purposes (Basle Committee on Banking Supervision,
1996).

To be granted permission to use these internal models for regulatory reporting, organisations must show that their models accurately predict P&L movements within
certain tolerances. They must also show that the measures sent to the regulator are integral
to the way that the organisation controls its risks. VaR suffers from credibility issues with
traders, which makes it hard to get them to agree to have their trading restricted by the
measure. In global organisations, there is often a timeliness problem with the VaR number,
which may not be delivered back to a region until the second business day after the
position snapshot to which it applies. Faster methods may be used, but these often involve
additional compromise, making it even harder to gain acceptance. Management also fear
that traders will be able to play the imperfections in the limits system, taking on risks to
achieve returns in the risk dimensions for which the modelling is weak or assumes no risk
is present.

Despite these limitations, the majority of banking organisations monitor their Value at
Risk. Many now also use the VaR number to set limits for acceptable risk. In effect, VaR
has become a capital allocation tool for the business, since business units under their VaR
limit can expand their business activities until the unused part of their limit is consumed. A
survey of high capital value multinational corporations showed that in 2001, between 15%
and 20% of the surveyed companies used VaR as a capital allocation tool (Arthur
Andersen, 2001). The significance of this figure becomes apparent from the industrial
classifications of the companies participating in the survey. Just 16 financial institutions and
ten energy companies participated out of a total of 115 respondents, implying either that
more than 75% of companies that would be expected to have VaR capabilities are using it

for capital allocation or, even more surprisingly, that its use as a capital allocation tool has
extended beyond the core base of financial institutions.

2.8 Benefits of Risk Management Controls


Derivatives have historically taken the blame for large losses in finance although, as noted
above, there is often a control issue that is the main cause of failure. The reason why
derivatives often feature in these situations is that they give the user a means of increasing
their leverage. For an equal amount of cash investment, they can increase the amount the
user stands to gain (or lose) from market movements. This often extends the time before
unauthorised trading activity is detected and increases the losses before positions are
unwound. We can look at the key losses outlined previously, to see where an improved risk
management framework might have prevented disaster. The table below sets out the
analysis.

Risk Management Event | Type of risk | Prevention

Latin American debt crisis | Credit/Country risk | Concentration risk controls

Savings & Loan industry (US building societies) lose $150bn in mismatched interest rate exposures – many go bankrupt | Interest rate derivatives | Correct pricing & maturity analysis of interest rate exposure

Stock market crash | Market risk | Stress testing scenarios

Hammersmith & Fulham Council | Interest rate derivatives | Mark to market and market risk management

Procter & Gamble lose $157m on differential swap trading | Interest rate derivatives | Mark to market and market risk management

Metallgesellschaft loses $1.3bn through an American subsidiary and is bailed out by creditors | Commodity derivatives trading | Stress scenarios

Orange County goes bankrupt after the county treasurer, Bob Citron, leverages a $7.5 billion portfolio and loses $1.64bn (after accumulating $750m for the county in previous years) | Interest rate derivatives | Market risk management

NatWest recognises losses on swaption trading | Interest rate derivatives | Independence

Barings loses $1.3bn through positions taken by Nick Leeson and goes bankrupt | Stock futures & options + operational (employee) | Segregation of duties, system controls

Daiwa securities recognises losses totalling $1.1bn, accumulated by a trader at its New York offices, and is forced to close its US operation | Operational (employee) | Segregation of duties, system controls

Asian debt crisis | Credit/country risk | Concentration risk controls

Russian debt crisis | Credit/country risk | Concentration risk controls

Long Term Capital Management loses $3.5bn of investors’ money and is bailed out by a consortium of banks | Liquidity risk, derivatives trading | Market risk management, stress scenarios

Lehman Brothers brokerage fraud | Operational (employee) | System and process controls

Allied Irish Bank recognises foreign exchange losses | Operational (employee) | System and process controls

Table 7: Prevention of Risk Management Events

This analysis indicates that, while many early failures were due to inadequate pricing or
hedging of derivatives, recent losses have been attributable to credit risks and control
failures. Since the risks inherent in derivatives have not gone away, we can conclude that
more sophisticated management of derivatives trading is helping to avoid derivatives-
induced losses.

2.9 VaR Approaches


Value at Risk measures can be obtained using any of three classes of approach. Each
approach has strengths and weaknesses, which are set out below, following a description of
the approach. The risk manager must choose the methodology that best suits the reference
portfolio or, preferably, run multiple methodologies and compare the outputs.

2.9.1 Variance-Covariance
The objective is to calculate the VaR quickly for a real portfolio of assets by building a
simplified model of the portfolio and, using the statistical techniques of portfolio theory, to
calculate the likely loss limit on the model portfolio, at a given confidence level, for the
time period. The model portfolio, based on a limited set of risk dimensions, has the same
approximate risk characteristics as the real portfolio. A mapping process between a real
portfolio and a model portfolio of risk dimensions generates a set of dimension weights
which, when applied to the risk dimensions, partly or wholly mimics the risk profile of the
real portfolio. For instance, a swap with five and a half years to run in the real portfolio

may map to a position in the five-year and seven-year interest rates in the model portfolio.
In some cases, the limited sensitivities available in the model portfolio may not be
sufficient to retain the complete risk profile of the asset. A position in BT shares may map
to a position in the FTSE-100 in the model portfolio. This simplification retains one risk
dimension, that of the stock market that the share is listed on. However, the risk dimension
that is specific to BT is lost10. The mapping process reduces the amount of real data that
must be dealt with when calculating VaR. The objective is to reduce the problem space of
the VaR calculation to a set of well-understood calculations, for which the base data is
easily accessible. The time series of prices and rates associated with the risk dimensions are
generally observable in the market, although it is common when mapping interest rate risk
to find zero coupon rates (derived) being used in place of yields to maturity (observed),
owing to the simplicity this brings when considering interactions along the maturity time
line11. The characteristics of the particular mapping methodology represent the level of
compromise the methodology user is prepared to accept in the interests of speed or
maintainability. These decisions will be driven by materiality, e.g. for the real portfolios,
how much effect would this information have on the overall result, if we were to include
it? In the case of the BT share above, how large a proportion of our overall risk does the
equity book carry? Do the errors that the compromise introduces bias the estimate of VaR,
or do they tend to cancel out? The answers to these questions will be dependent on the
make-up of the portfolio. The methodology user will often accept a conservative VaR
measure rather than one that may underestimate the risk under some circumstances. Even
when a material risk is recognised, there may simply be no directly observable data on
which to base the model risk dimension. In such cases, the methodology user must also
find a compromise solution. In extremes, the user may just allocate a substantial VaR
reserve against an individual position. The methodology user must have a process to
review these compromises in the light of changes in trading patterns and the market in
general. For instance, in a period when the institution is active in trading IPOs12 for
technology companies, the real portfolio may temporarily carry a lot of risk that is not
captured by mapping the stocks to a local country index. This would reduce the accuracy
of the risk measure and understate the risk.

10 For more discussion of the breakdown of risk dimensions within a share price, see any discussion of the Capital Asset
Pricing Model or Arbitrage Pricing Theory, for example Elton & Gruber (1995), chapters 13 & 16.
11 Zero coupon rates can be shifted independently of each other, whereas a shift in the 6 month yield to maturity will have
a knock-on effect on later yields derived from swap or treasury prices.
12 Initial Public Offering, the first time that a company lists on a stock exchange.

In the examples above, the mapping process would calculate the size of position in the five
and seven year swap rates, and the size of position in the FTSE-100, which would give an
approximate equivalent risk profile to a five and a half year swap and a BT share. In the
case of the BT share, Bloomberg, which publishes beta values for all listed stocks, has
already done the hard part of this calculation. The beta value represents the historical
responsiveness of a stock price to changes in the market index. It can be used to model the
likely reaction of a stock price to future changes in the index. The beta, in conjunction with
the current market value of BT stock in the real portfolio, determines the dimension
weighting of the model portfolio in the FTSE-100 risk dimension.
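
A minimal Python sketch of these two mapping steps, with hypothetical numbers (the maturity split shown is a naive linear allocation; RiskMetrics itself prescribes a more careful, variance-preserving allocation between adjacent vertices):

    def index_dimension_weight(market_value, beta):
        """Map an equity position to its local index risk dimension using its beta."""
        return beta * market_value

    def split_by_maturity(exposure, t, t_low, t_high):
        """Naive linear split of an exposure at maturity t between two adjacent curve nodes."""
        w_high = (t - t_low) / (t_high - t_low)
        return exposure * (1.0 - w_high), exposure * w_high

    # Hypothetical 1m holding of BT stock with a published beta of 1.2 against the FTSE-100
    print(index_dimension_weight(1_000_000, 1.2))        # FTSE-100-equivalent exposure

    # Hypothetical 5.5-year swap exposure split between the 5-year and 7-year rate dimensions
    print(split_by_maturity(2_000_000, 5.5, 5.0, 7.0))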

Using statistical analysis, a risk manager can compile appropriate variances and covariances,
over a chosen time period, for the risk dimensions. The statistics can be updated on a daily,
weekly, or monthly basis, with corresponding reductions in responsiveness to the market
movements. Although prices change daily, market volatilities and correlations can be
assumed to be stable in normal market conditions, and it is possible to update variances monthly or even quarterly in a quiet market. The exact methods to derive the
variances and covariances can vary from basic statistics to advanced econometric
techniques, although the extra intellectual process and computational load of the latter will
not necessarily lead to a better risk measure. Variances are often calculated using ordinary
least squares regression on the time series of observed differences for a risk dimension.
Covariances are similarly estimated from the historical joint observations. The choice of
method is often driven by the need to simplify, in opposition to a typical econometric
framework, which is motivated by statistical accuracy. Whatever method is selected, the
need to calculate statistics will require the organisation to collect data for all the risk
dimensions, going back anything from 90 to 250 observations. This data will need to be
cleaned (erroneous values removed, missing values filled) and procedures put in place to
support trading into new markets that require new sensitivities to be added to the
parameter set.
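
The statistics step can be as simple as the Python sketch below, which computes equally weighted sample variances and covariances from hypothetical series of daily changes; exponential weighting or more elaborate econometric estimators can be substituted without changing the surrounding process:

    import numpy as np

    # Hypothetical daily changes: rows are days, columns are risk dimensions
    changes = np.array([
        [ 0.0012, -0.0008,  0.15],
        [-0.0004,  0.0011, -0.05],
        [ 0.0007,  0.0002,  0.10],
        [-0.0010, -0.0006, -0.20],
        [ 0.0003,  0.0001,  0.05],
    ])

    cov = np.cov(changes, rowvar=False)   # covariance matrix of the risk dimensions
    variances = np.diag(cov)              # variances sit on the diagonal

    print(variances)
    print(cov)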

2.9.1.1 VaR Calculation


The mapping process derives the matrix of weights, W, that represents the model
portfolio. Each element wi is an approximation of the reference portfolio’s exposure to the
risk dimension di, taken from the set of risk dimensions D that we are using to model the
portfolio. To capture the VaR at a specific confidence level, the covariances, C, are scaled
by additional factors Z, drawn from the distribution tables of the risk dimensions. Usually,
for simplicity, the factors are drawn from the normal tables, simplifying the vector Z to a
simple scaling factor z. For instance, a confidence level of 97.5% for a loss limit13 would represent a movement of approximately two standard deviations from the expected portfolio value at the end of the forecast period, so z takes the value 2 in the matrix formula below for the portfolio risk:

Value at Risk = z √( W · C · Wᵀ )     (2.1)
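
Read as code, equation (2.1) is a single matrix product. The Python sketch below applies it to a hypothetical model portfolio of three risk dimensions over a one-day horizon, with z = 2 as above; the weights and covariances are illustrative only:

    import numpy as np

    def parametric_var(weights, cov, z=2.0):
        """Equation (2.1): Value at Risk = z * sqrt(W . C . W^T)."""
        w = np.asarray(weights, dtype=float)
        return z * np.sqrt(w @ cov @ w)

    # Hypothetical dimension weights (currency exposures to each risk dimension)
    W = np.array([5_000_000.0, -2_000_000.0, 750_000.0])

    # Hypothetical daily covariance matrix of the risk dimensions' returns
    C = np.array([
        [1.6e-5, 4.0e-6, 2.0e-6],
        [4.0e-6, 9.0e-6, 1.0e-6],
        [2.0e-6, 1.0e-6, 2.5e-5],
    ])

    print(parametric_var(W, C))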

The main features of this methodology are14:

Accuracy: Detail can be lost, both in the initial mapping process, and in the subsequent application of covariance matrices.

Ease of computation: A portfolio VaR calculation can be 1,000 times quicker than using a more time-consuming method (Pritsker, 1997).

Data acquisition and cleaning: Rather than having to collect and clean data for all traded instruments on the portfolio, the user of this methodology faces the simpler task of collecting data for all the chosen risk dimensions.

Tail behaviour: The method generally assumes normal distributions for market data and for this reason is a very poor predictor of tail behaviour.

13 A loss limit would be a one-tailed confidence limit


14 Wilson (1996) gives a useful summary of all the approaches discussed in this chapter, including extensions for non-linear
risk. Laubsch & Ulmer (1999) also describe the three approaches, referring to variance-covariance as the Parametric
method.

Non-linear exposures: This method is very poor at capturing non-linear risks at the portfolio level.

Transparency: It is relatively simple for receivers of the information to look at intermediate results (such as the matrices) and see where their risk concentrations lie. Depending on the implementation, it can be possible to trace back to the trades that are contributing to a particular risk estimate. However, it can be difficult to explain to receivers of the information exactly what the process is doing, and therefore its limitations, as some knowledge of modelling and statistics is required.

Table 8: Features of Variance-covariance

2.9.2 Historical Simulation


The objective of this approach is to use historical time series as the basis of potential price
movements over the time period. Scenarios are generated using historic market data, going
back for one or more years. Differences in prices and rates are taken between two points in
the historic data, corresponding to the period for which a VaR number is required. If the
VaR is required for a two-week period, then the change is taken from two data points two
weeks apart in the historic series. The changes in historic prices are taken as absolute or
proportional changes, to be applied to the current market price. If the six month deposit
interest rate moved from 5.25 to 5.375 for a period in the historic data set, a new scenario
is either that the absolute change (5.375 – 5.25) is added to the current six month deposit
interest rate, or else the current rate is multiplied by the proportional factor (5.375 –
5.25)/5.25. If the rate today is 4.75, use of absolute shifts will imply a scenario of 4.875. A
choice of proportional shift will lead to a smaller jump, as the base rate has gone down
since the observation. Under the proportional change assumption, the scenario rate is just
4.863. The choice of absolute or proportional shifts depends on the market that is being
modelled. Often it does not make sense to take price changes from the foreign exchange
markets and apply them as absolute shifts to today’s rates, because the rate has moved so
much in the meantime. What would then have been a significant shift becomes
insignificant in the context of the current market. For instance, in June 2000 the exchange
rate between the US dollar and the South African Rand was running at about 6 Rand to the

dollar. In June 2002, it was more than 10 Rand to the dollar. An assumption of
proportional scenarios would lead to a portfolio volatility, and therefore VaR, nearly twice
as great as if an absolute assumption was made. With historic simulation, the methodology
user only has to make decisions about the applicability of historic data to the expected
distribution of current market rates and prices. No assumption of normality or
lognormality is required to streamline the methodology. In the variance-covariance
methodology, the assumption of normality or lognormality, although not required,
simplifies the calculation, supporting a fast and transparent methodology. However, the
penalty for this is that tail behaviour in the model is not consistent with empirical data
from the market 15. In historic simulation, the actual distributions of the price changes are
modelled closely by the sample data. This makes historic simulation a much better
predictor of tail behaviour, which is exactly the behaviour the risk manager wishes to
model. The changes are applied to the current prices, to generate a series of possible
revaluation outcomes for the following period. The changes are applied simultaneously
across all the market rates and prices which affect the portfolio. The user implicitly
assumes that correlation between rates can be modelled by using consistently timed data.
As with parametric approaches, the user must clean the data. This job is often harder with
historic simulation, as the full range of market rates and prices is required, not just the price
series for the risk dimensions. The series must also extend over a longer period. Many of
these prices will be for illiquid instruments, which do not reprice every day. The equities
market is complex to model using historic simulation, as the price changes derived must
take account of any dividends paid, and also corporate actions, such as stock splits or new
issues. If this is not done, then the share will appear much more volatile than it should. The
methodology user must also decide what data to use for new issues, for which there is no
specific history. The challenge of historic simulation is to develop strategies for
overcoming these hurdles, without making the data maintenance onerous. The
methodology user must collect years of data to ensure the simulation is accurate, even
more so if a ten-day estimate of VaR is required, since this requires e.g. 200 observations x
10 days = 2000 days = 10 years of data.16

15 Glasserman et al. (2000b) cite many authors who have shown financial time series exhibit high kurtosis and heavy tails.
16 Overlapping observation periods cannot be used since this would bias the price change observations.

Figure 1: Example historic simulation results for a model portfolio of a single stock position (histogram of P&L frequency against P&L/$M)
The portfolio is revalued under each historical scenario, of which there will be hundreds, or
possibly thousands, to model the valuation of the portfolio over the current period. The
portfolio is usually modelled in detail, using the actual trades, positions and valuation
models that a trader would use to value the portfolio 17. The simulation outcomes constitute
an estimated distribution function for the portfolio value over the period. The loss limit at
a given level of confidence is the amount above which the proportion of the portfolio
dictated by the confidence level (e.g. 95%) has been revalued. For example, 95% of the
revaluation outcomes may be above the loss ($10,380,560). The VaR at the 95%
confidence level is then $10.38M.

17 For speed, the methodology user can introduce a mapping procedure for transactions similar to the variance-covariance
approach.
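
A stripped-down Python sketch of these mechanics, using a hypothetical rate history and a position whose value responds linearly to the rate (in practice every trade is revalued with its full pricing model, as noted above):

    import numpy as np

    def historic_var(history, current_rate, sensitivity, confidence=0.95, proportional=False):
        """One-period historic-simulation VaR for a position with a linear rate sensitivity.

        history     : past rate levels, oldest first (hypothetical data)
        sensitivity : P&L per unit move in the rate (linearity assumed for this sketch)
        """
        history = np.asarray(history, dtype=float)
        moves = np.diff(history)                          # absolute one-period changes
        if proportional:
            moves = current_rate * moves / history[:-1]   # rescale to today's rate level
        pnl = sensitivity * moves                         # revalue under each scenario
        loss_limit = np.percentile(pnl, (1.0 - confidence) * 100.0)
        return -loss_limit                                # report VaR as a positive loss

    # Hypothetical six-month deposit rate history and a hypothetical position
    rates = [5.25, 5.375, 5.30, 5.20, 5.45, 5.35, 5.50, 5.40]
    print(historic_var(rates, current_rate=4.75, sensitivity=10_000, proportional=True))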

The main features of this approach are:

Accuracy: The method can claim a high degree of accuracy, when the instruments are revalued using original data and models, and enough simulations are run to achieve a small standard error. However, reliance on a longer period of historic data can cause problems, as older data is less relevant in predicting current market behaviour.

Ease of computation: The computation requires a lot of processing power to complete. The core computation itself could take up to 250 times as long as a variance-covariance approach, possibly more with complex market data structures, although the avoidance of the whole mapping process will offset this.

Data acquisition and cleaning: The approach is very data intensive, as full history is required for all inputs to the pricing model for every trade. This can prove problematic, particularly for options, for which a history of implied volatility is required, and equity products, which require a dividend history.

Tail behaviour: Historic simulation is a good predictor for tail behaviour, as the only assumption made about the distribution of market rates and prices is that past data is representative of potential future market conditions.

Non-linear risks: Historic simulation is good at capturing non-linear risks, when exact valuation models are used.

Transparency: Whilst the end result may at first appear quite intuitive (most practitioners are familiar with the idea of running scenarios on a portfolio), the identification of individual hot spots in the portfolio using intermediary data can be more problematic.

Table 9: Features of Historic Simulation

2.9.3 Monte Carlo


The objective, similar to historic simulation, is to build up a set of simulation outcomes for
each risk dimension, along with their probability of occurrence, including joint probabilities
by dimension pair. The difference is that the outcomes are drawn randomly from a

distribution model for the rates and prices. Rather than generating scenarios from actual
market movements, Monte Carlo generates theoretical market movements from a statistical
model of the market data. This has the advantage that the scenarios can still be generated
using a proxy distribution if the real distribution is not known. Generally, the distribution
will be assumed, and the parameters of the distribution fitted to historic data. The quality
of the assumptions can be tested with various goodness-of-fit methods, some of which are
applied to exchange rate data in Appendix A of the RiskMetrics Technical Document
(JP Morgan, 1996). A model of correlation in the market will also be required, so that joint
distributions fit historic market behaviour. One of the great advantages of Monte Carlo is
that the choice of distribution can be made on an individual basis for each return series.
This gives the user greater flexibility in modelling market behaviour. One possible implementation of Monte Carlo sampling is to generate a set of independent, standardised random draws and then transform them into a random, correlated sample for the risk dimensions by multiplying by the (transpose of the) Cholesky decomposition of the covariance matrix.
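
A minimal Python sketch of that sampling step, assuming (for this illustration only) normally distributed returns and a hypothetical two-dimension covariance matrix; as noted above, a real implementation may fit a different distribution to each return series:

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical daily covariance matrix for two risk dimensions
    C = np.array([
        [1.0e-4, 3.0e-5],
        [3.0e-5, 4.0e-5],
    ])

    L = np.linalg.cholesky(C)                             # lower-triangular factor, C = L @ L.T

    n_scenarios = 10_000
    independent = rng.standard_normal((n_scenarios, 2))   # independent standardised draws
    correlated = independent @ L.T                        # correlated draws for the risk dimensions

    # The sample covariance of the scenarios should be close to the target matrix C
    print(np.cov(correlated, rowvar=False))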

The price behaviour of the portfolio is modelled in detail, as with historic simulation.
However, Monte Carlo is computer intensive, more so even than historic simulation. Ten
thousand scenarios may be required to bring the standard error of the result to a
reasonable level. This is because the pseudo-random distribution sampling across many
risk dimensions introduces discrepancies from a pure random model. Many option pricing
routines themselves use Monte Carlo, and in these cases it is usually not feasible to run
nested Monte Carlo routines, and a simpler, less accurate pricing method is used for the
option. Even with performance improvements like this, Monte Carlo consumes a great
deal of processing power for many portfolios. Many authors have proposed ways to reduce
the number of scenarios required to achieve a given standard error on the result of a
simulation.18

To reduce run time, Monte Carlo sequences are often calculated once, and then re-used in
the context of the current market models and portfolios. This has no effect on the result,
unless there is a bias in the random sequences, in which case this bias will be repeated for
every run. Often the cause of bias is systematic, and recalculating the sequences every run
will not cure the problem.

18 Glasserman et al. (2000a) look at the efficiency of Monte Carlo in the context of the delta-gamma approximation for
options

The scenarios generated from the random sequences are used to revalue the portfolio. As
with historic simulation, ordering the results gives an estimated probability distribution for
the portfolio over the time period of interest. The VaR is taken as the centile of the
distribution corresponding to the confidence level (e.g. the 5th centile corresponds to 95%).

The main features of this approach are:

Accuracy: The method is highly dependent on the skill of the calibrator in setting up the probability distributions for all the variables. In theory, full pricing models are used, but in practice these may be simplified in the interests of time.

Ease of computation: The computation requires a great deal of processing power to complete, with the elapsed time up to 1,000 times greater than a variance-covariance process on equivalent hardware. This can be reduced to 50 times by using an approximation to the value of complex derivatives positions (Pritsker, 1997).

Data acquisition and cleaning: The approach requires a lot of data for calibrating the models, but is flexible if the data is not available.

Tail behaviour: Monte Carlo can be tuned to focus on tail events, but the correct probability distributions need to be selected to allow the tails to be modelled.

Non-linear risks: Monte Carlo is not good with non-linear risks, as the overhead of computational time is undesirable.

Transparency: The method is not very transparent or easy to explain.

Table 10: Features of Monte Carlo

2.10 VaR Inputs

Figure 2: Risk System Structure (trades, instruments, market data, static data and pricing models feed the risk engine, which produces the output)

The inputs to the VaR process are as follows: trades, market data, static data, instrument
data. Trade and instrument data go together, but are often held separately in the
organisation. Instrument data describes the characteristics of a trade in more detail, but
only those characteristics that may apply to many trades in the portfolio. For instance, the
trade may be 100 contracts on the March 02 expiry of the ten-year US T-Note future,
bought at a price of 99 on January 18 2002. The instrument data would describe the
nominal amount of each contract, the expiry date and other expiry or delivery
characteristics, and the cheapest to deliver bond for that future. The position, which
represents an outstanding exposure to risk dimensions, is fully defined by a combination of
the trade and the instrument data. In some cases, a position may be made up of several
trades on the same instrument, and some reduction in the volume of data can be achieved
by summing the trades. For some trades, typically OTC trades in the interest rate markets,
such as swaps, there may be no instrument data associated with the trade. In these cases,
the position is fully described by the trade characteristics.

In the variance-covariance approach, these characteristics of the trade and instrument are
used to map the position to an abstracted model of the market, represented by the chosen
risk dimensions. The characteristics dictate the type of market sensitivity that the modelled

version of the position should exhibit. The mapping process takes the trade and instrument
characteristics as inputs, and outputs a market parameterisation of the position. For other
approaches, the trade and instrument characteristics commonly serve as input to the main
VaR calculation engine, although all approaches can use a form of the mapping process to
achieve performance improvements.

In some cases, particularly options, a model of the position’s risk characteristics, i.e. the
greeks, is required to map the position to risk dimensions. The model may be in the public
domain (the most well-known example being Black-Scholes) or it may be proprietary to the
institution. The instrument and trade characteristics are inputs to the model, as is market
data (either estimated or observed). The output is an input to the mapping process. For
simulation approaches, the model is generally required for all instruments. The trade and
instrument characteristics input to the models, along with simulated market data. The
output is a valuation of the instrument under the simulation.

Market data is collected periodically, to assist in the process of valuing the position. This
market data may be an actual price, for example the close of exchange price for the futures
contract above. Or, it may be a series of interest rate yields, used to obtain a theoretical
value for the position. For some illiquid derivative positions, mark-to-market prices may be
obtained by calling a series of brokers and asking for a bid for the position. In these cases,
great care must be taken in ensuring that the price is valid for revaluation, perhaps by
comparing with a theoretical benchmark obtained from a pricing model.

In variance-covariance approaches, market data will affect the output of any modelled
mappings. It will also be used to construct the variances and covariances of market
parameters, which are used to calculate the overall Value at Risk of the portfolio. For
simulation approaches, the market data is used to generate scenarios, either for historic
simulation or Monte Carlo simulation.

2.11 VaR Outputs


Outputs of a VaR calculation will depend on the implementation. At a minimum, the
calculation must provide the VaR number itself, but additional information is often
provided to help interpret the number. Large changes in VaR numbers are often
explainable either as unusual trading patterns, or as mistakes in the input data. It is
important to have the necessary information to perform the analysis that leads to one of
these conclusions.

The variance-covariance approach will automatically contain a great deal of useful
intermediary data. The dimension weight matrices will contain a wealth of information
about apparent VaR hotspots. By comparing the matrices to those for the previous day, the
risk manager can see where there have been large jumps in the dimension weights and thus
the sensitivity to market movements. This information will be fairly granular, for example
the currency of the hot spot will almost certainly be revealed, and for interest rate hot spots
the term of the sensitivity will also be revealed. The methodology user may have access to
more detailed breakdowns that show the dimension weights by book, or for the trades
within a book. All of this information helps to find the source of VaR changes that are
attributable to unusual trading patterns or deal input/instrument set up errors. The
covariance matrix, meanwhile, will allow the methodology user to investigate possible
rogue data within the market parameters. As with the dimension weight matrices, a
comparison to the previous day will highlight any big changes. This will reveal exactly
which time series is responsible for a given change. It may be that there is an error in the
data that was collected for the previous day, or it may just be that the methodology used to
calculate the variance is sensitive to large market movements on a particular day, entering
or leaving the data set. These sorts of ghosting features are characteristic of some of the
simpler approaches to calculating realised volatilities for risk dimensions.

The historic simulation and Monte Carlo approaches will not automatically contain such a
wealth of information, since there is no intermediate aggregation required. The standard
output is the scenario results, ranked by the outcome. However, since the results are
additive at the trade level, it does not cost anything to aggregate these results by book or
currency, and ultimately by trade. These results must be compared to the previous day’s
results to facilitate problem investigation. Problems in market data are slightly less
tractable, with only a review of the most recent data collection throwing any light on
potential problems.

A useful output for management is a split of the VaR into components attributable to the
portfolio’s position in different types of risk, for instance interest rate, FX and equity, plus
the implied volatility of all of these. The variance-covariance approach can analyse VaR by
different categories, but it is not easily extended to volatilities, since these should not be
correlated with the underlying market movements19. The simulation approaches would
require separate, time-consuming runs to split the risk in this way, although they would be
able to correctly calculate the component of risk attributable to volatility.

2.12 Other Risk Measures


The RiskMetrics Group highlights other risk measures in their document Return to
RiskMetrics: Evolution of a Standard (Mina & Yi Xiao, 2001). These are described in the
following table:

Marginal VaR
Calculation: VaR(reference portfolio plus marginal position) – VaR(reference portfolio)20
Usage: limit checking; is the trade having the intended effect on the risk profile of the portfolio?

Incremental VaR
Calculation: marginal VaR for a unit position change in a risk dimension
Usage: trade construction

Expected Shortfall21
Calculation: conditional expected loss, predicated on a loss greater than the VaR Vα, at a
confidence level α, for a portfolio of value V; the expected shortfall at confidence level α is
E(−ΔV | ΔV < Vα)
Usage: comparison of tail behaviour in portfolios

Table 11: Other Risk Measures

19 If the underlying price moves sharply, up or down, volatility will go up. If the underlying remains constant within
defined limits, the volatility may go down. This is not a relationship that is easily captured using ordinary least squares
correlation analysis of linear dependent variables.
20 Garman (1996) develops the concept of DelVaR and normalisation to arrive at a linear approximation to Marginal and
Incremental VaR.
21 also known as Tail VaR.
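
To illustrate the measures in Table 11, the sketch below (in Python, assuming a vector of simulated portfolio P&L outcomes is already available, for example from a historic or Monte Carlo run; all numbers are synthetic) computes VaR, expected shortfall and the marginal VaR of a candidate trade:

import numpy as np

def var(pnl, confidence=0.95):
    """One-tailed VaR: the loss not exceeded with the given confidence."""
    return -np.percentile(pnl, 100 * (1 - confidence))

def expected_shortfall(pnl, confidence=0.95):
    """Average loss in the scenarios worse than the VaR (tail VaR)."""
    threshold = -var(pnl, confidence)
    tail = pnl[pnl < threshold]
    return -tail.mean() if tail.size else var(pnl, confidence)

def marginal_var(portfolio_pnl, trade_pnl, confidence=0.95):
    """VaR(portfolio plus trade) minus VaR(portfolio), scenario by scenario."""
    return var(portfolio_pnl + trade_pnl, confidence) - var(portfolio_pnl, confidence)

# Illustrative data only
rng = np.random.default_rng(0)
portfolio = rng.normal(0, 1_000_000, 10_000)   # simulated P&L outcomes
hedge = -0.3 * portfolio                       # a trade that offsets part of the risk
print(var(portfolio), expected_shortfall(portfolio), marginal_var(portfolio, hedge))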


2.13 Coherency
Artzner et al (1999) have defined a set of axioms that a risk measure must satisfy in order to
be classified, in their terminology, as coherent. The axioms are as follows:

1. The measure is translation invariant. This requires that the risk must be reduced
exactly by the amount of any cash added to the portfolio.

2. The measure is sub-additive. The measure for a portfolio must be less than or
equal to the sum of the measures for individual sub-portfolios.

3. The measure displays positive homogeneity of degree 1. In short, if we double the
positions, we double the risk.

4. The measure is monotonic. If the loss in one portfolio is worse than another for all
outcomes, then the risk is greater.

This paper generated a lot of interest, as the authors argued that VaR is not a coherent risk
measure. The authors show that it is possible to construct a portfolio of digital options for
which the VaR is not sub-additive. This goes against intuition, as portfolio diversity and
hedging are both drivers of sub-additive behaviour in the VaR measure. However, the
effect may be understood by considering a portfolio of n risky assets, each with a small
probability p, less than the chosen confidence level pα, of incurring an economically
significant loss. These assets are otherwise not sensitive to movements in market rates.
When n takes the value 1, the portfolio VaR is zero at the chosen confidence level.
However, as n increases, the probability that there will be at least one economically
significant loss becomes

Pn = 1 − (1 − p)^n (2.2)

Given the value of p, we can solve for the minimum value of n that will give a non-zero
VaR measure at confidence level pα.

Pn ≥ pα (2.3)

(1 − p)^n ≤ 1 − pα (2.4)

n ≥ log(1 − pα) / log(1 − p) (2.5)

For integer values of n that satisfy this inequality, the risk of the whole portfolio, as
measured with VaR, is non-zero, and thus greater than the sum of the risks of its
components, which are all zero. Artzner et al propose the tail VaR measure described
above as a coherent risk measure that will maintain sub-additivity under these conditions.
This research focuses on the standard VaR measure in preference to tail VaR, since it is
more widely known and used.
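
A short numerical illustration of inequality (2.5), with an assumed individual loss probability p = 1% and a 95% confidence level (pα = 5%): each asset on its own has zero VaR, yet a portfolio of only six of them has a non-zero VaR, breaching sub-additivity.

import math

def min_assets_for_nonzero_var(p, p_alpha):
    """Smallest n for which P(at least one loss) = 1 - (1 - p)**n exceeds p_alpha."""
    return math.ceil(math.log(1 - p_alpha) / math.log(1 - p))

p, p_alpha = 0.01, 0.05
n = min_assets_for_nonzero_var(p, p_alpha)
print(n, 1 - (1 - p) ** n)   # n = 6; probability of at least one loss is about 5.9%, above 5%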

2.14 When to use which method


The parametric approach is fast, robust and not too data intensive. However, detail can be
lost, the effect of correlation matrices is often not well understood, and the method is
known to be weak for non-linear risks such as options. It often works less well for higher
levels of confidence (99%+) for which the market assumptions embedded in the method
(normal distribution of market parameters, joint normal co-distributions) do not work well.
This method is best used for portfolios which do not have significant options risk, where a
high level number is required by senior management, within a short time after a periodic
cut-off.

Historic simulation is data intensive, slow, but more accurate than parametric methods,
especially at high levels of confidence. Data is often hard to find for option based
products. The method works well in normal and non-normal markets, and for non-linear
instruments, provided that there is sufficient data available.

Monte Carlo simulation is also data intensive and slow. It will cater for both normal and
non-normal markets. It caters for non-linear instruments in the portfolio; however, if these
are present in volume, they make the method unacceptably slow.

2.15 Back testing


The regulator requires that VaR processes are backtested against actual results. This means
that the loss predicted by the model is compared with the loss that would have been incurred
if the portfolio had remained unchanged over the period. The actual loss is not
expected to exceed the predicted loss more often than the confidence level implies (e.g. 5
times in 100 at the 95% confidence level). The actual number of
excesses is subject to a standard error. A significance test is therefore required, for example
to determine whether a result of seven excesses is statistically different from the five expected above.
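
A minimal sketch of such a significance test, treating daily exceptions as independent Bernoulli trials (a simplifying assumption) and using the binomial distribution from SciPy:

from scipy.stats import binom

def exception_p_value(observed, days=250, confidence=0.99):
    """Probability of seeing at least `observed` exceptions if the VaR model is correct."""
    p = 1 - confidence
    return binom.sf(observed - 1, days, p)   # P(X >= observed)

# e.g. seven exceptions in 100 days against the five expected at the 95% level
print(exception_p_value(7, days=100, confidence=0.95))   # roughly 0.23, not significant at the 5% level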

If the number of excesses is significantly different (over or under), the methodology user
must go back to the VaR process and try to understand where the problem is. The process
consists of the data collection and representation, mapping process and models. Once
again, the methodology user will have to analyse sub-sections of the portfolio, split by
book, instrument or currency, to seek out forensic evidence of a VaR process failure.

The regulator looks to the results of the backtesting when deciding whether or not to allow
internal modelling of market risk for regulatory reporting, what multiplier of the VaR
should be used when calculating risk for capital adequacy purposes, and whether or not the
internal management processes of the bank are generally effective. The standard charge for
internally modelled VaR is three times the average daily VaR for the preceding 60 days, or, if
higher, the previous day's VaR. If backtesting results are poor (more than ten exceptions
over 250 days), the charge can rise to four times the average daily VaR.

2.16 RiskMetrics


The best known VaR approach in the public domain is RiskMetrics, launched in 1994 by
JP Morgan, and currently overseen by the RiskMetrics Group (RMG), a joint venture
between JP Morgan and Reuters. The method started as an example of the variance-
covariance approach, but now covers all the main variance-covariance and simulation
approaches. Its success has been partly attributable to the supply of data in tandem with
the methodology.

Other VaR approaches in the public domain are largely those implemented in software:
RMG, Infinity, Summit and many other software vendors supply variance-covariance,
historical simulation and Monte-Carlo VaR software. However, none has achieved the
reputation as an industry standard that RiskMetrics has, so this research will focus primarily
on RiskMetrics.

2.17 What’s wrong with VaR


Many financial institutions started to calculate VaR because they wanted to demonstrate to
the regulator that they had sufficient knowledge of their positions, risk dimensions, market
data and models to do so. It also helps senior managers to snapshot the risk of the
organisation. More recently, institutions have taken to discussing the risk profile in their
annual reports, to the extent of including VaR numbers in the report. However, without
detailed knowledge of the portfolios, models and assumptions embedded within the
methodology, this disclosure has limited value for the investor. At best, they may have a
high level view of the portfolio risk. Assumptions in the detail of the methodology could
give results that are misleading for senior management and investors. For instance, it would
be unusual for a VaR process to record a holding in a fund as a basket of the current fund
components. Banks find this information difficult to manage, especially since the fund
manager will not inform them of the fund holdings at a position level. The risk manager
can only track how the key holdings in the fund have changed. The extreme example of
this is a hedge fund, which has a complex risk profile, equivalent to an entire proprietary
trading book. The risk is impossible to measure without access to the fund’s position data.
Faced with this problem, and common systems constraints, risk managers can do nothing
more than record the fund using an equity or index as a proxy, and reserve against non-equity
like behaviour22. Depending on the reserving method, the risk measure for this position
may look much less complex than it is.

One of the goals of the risk management industry is to develop a risk measure that can be
used as the basis of capital allocation decisions. However, VaR processes are generally
flawed to one degree or another, to the extent that perhaps they should not be used for
macro capital allocation purposes. Instead, VaR limits may provide more benefit when set at desk
level, with an adjustment to take account of any shortcomings in the calculation for that
trading activity. Capital allocation might be better if more closely aligned with regulatory
requirements, since this at least has some form of economic meaning to the business, i.e.
this is capital that must be set aside while the business continues.

VaR may be giving a false sense of security. A common mistake is to interpret the VaR as
the maximum loss that can occur on any reasonable timescale. However, this depends on
the confidence level and time period chosen. VaR is a maximum likely loss, and at the 95%
confidence level, we expect it to be exceeded up to five times in 100 trading days, roughly once a
month. Around three times in ten years, we should expect the loss to be half as much again, even if we
assume normality. Because of fat-tail behaviour in the markets, we can in practice expect exceedances to
be much more frequent and much higher than that. The regulator recognises this when

22 You could argue that an equity price is itself a proxy for the value of the complex set of assets, liabilities and future
transactions that makes up a company, making it more opaque a risk than the hedge fund.


setting the capital charge for market exposures at a minimum of three times the 99%, 10-
day VaR.

VaR is a daily management number. Many of the assumptions within VaR amount to
assuming that the markets today will continue on more or less as they have done for the
last two years or more. The exposure of a financial institution may in fact lie in situations
that are not normally observed in the market, similar to the ERM crises of the 1990s, or the
instantaneous widening of credit spreads following the collapse of Barings. Stress tests,
which look at the effect on P&L of specific scenarios in the market, may provide more
insight for management. Although this method cannot account at a portfolio level for data
representation issues like the external fund case above, such positions can be examined in
detail and stress tested individually on a less frequent basis, such as once a month.

Another issue relates to the specific timing of trading activity and VaR. VaR tends to look at
losses over a minimum period of a day, from closing price one day to closing price the
next. This does not measure the possible impact of intraday losses. Price movements of
less than two standard deviations may well trigger stop loss limits, leading perhaps to
positions being closed out before the loss predicted by VaR is reached. Price movements
of greater than two standard deviations may well occur intraday during normal trading,
even when the close-to-close price differentials are more modest.

Many practitioners argue that the best way to control the risks is to know the trading
strategies in the portfolio, understand what market dynamic they are trading off, and ask
these questions:

- what is the purpose of the trade?

- what is the likely investment period before the position is exited?

- what assumptions are being made: models, data, markets?

- what is the impact if one of these assumptions turns out not to be true?

- what is the worst thing that can happen to the position?

- how likely is this?


- what stop loss limits are set on the position?

- if stop loss limits are triggered, will the position be liquidated?

- can the position be liquidated immediately, or will it take some time?

- what might be the impact of this delay in terms of further losses?

- how does the position affect the overall risk in the portfolio, taking into account
concentrations and correlations?

2.18 Conclusion
VaR has evolved from the traditional risk management discipline of limit management. It is
an extension of the BPV or basis point value limit, adding knowledge of likely market
movements to the profile of the portfolio risk sensitivity that a BPV provides. There are
three main approaches available to today’s risk manager: variance-covariance, historic
simulation, and Monte Carlo. Each of these approaches has strengths and weaknesses.
Variance-covariance is fast, but does not work well for non-linear instruments and non-
normal markets. Historical simulation is slower, but works well in non-normal markets.
The advantage over variance-covariance is particularly strong if the risk manager is
interested in the tails of the portfolio distribution, e.g. the 99% confidence limit of loss.
Monte Carlo is the slowest of the three, but will also work well in non-normal markets. A
common optimisation of Monte Carlo is to approximate the payout of the portfolio in
some way, for instance using a delta-gamma pricing framework. Such optimisations make
the method more feasible, but less suitable for portfolios with large numbers of non-linear
assets.

The VaR number should be treated with caution. Careful backtesting will highlight
weaknesses in the approach, but a responsible risk manager will still talk to the traders to
understand the risks behind the VaR numbers. As we should expect in a constantly
changing financial environment, VaR is not the ultimate risk management goal, but it is a
useful tool for modern risk managers.

3 THE RISKMETRICS METHODOLOGY

3.1 Introduction
Value at Risk (VaR), a one-tailed confidence limit for an institution's expected P&L over a
given trade horizon, has become an essential tool for the risk manager. Throughout the
mid-to-late nineties, banks around the world implemented Value at Risk systems to
help them manage their risks. The measures have been aimed primarily at senior
management who, following the G30 report into industry best practice for trading
derivatives, desire a risk measure that can be applied evenly across different aspects of the
institution's business. The regulators have wanted to see that senior managers have access
to this type of information and use it to control the risk exposure of the institution. In the
early implementations, the most common approach was variance-covariance, a parametric
approach which derives a matrix of linear scaling factors for a set of risk dimensions for a
portfolio, and then calculates covariance parameters for these risk sensitivities to derive a
confidence limit on the P&L. Risk managers favoured the approach for its simplicity and
speed of execution. Each bespoke implementation was different, but from September 1994
there was a benchmark for risk managers to measure their systems against: RiskMetrics.

RiskMetrics is probably the most important development in the field of value at risk
research since the G30 report into market practice of derivatives trading. No methodology
has come close in terms of visibility, impact and market acceptance. It is the most complete
treatment of market value at risk reporting in the public domain, covering methodologies
for both calculating risk and for calculating the parameters necessary to drive the
methodology. The vanilla methodology uses the computationally less demanding approach
of variance-covariance, making it easy to implement. It comes with data, so that the user
does not have to spend time collecting and cleaning their own. Finally, it has a great
pedigree going back to JP Morgan, now JPMorganChase, one of the largest wholesale
banks in the world, and Reuters, the largest information distributor.

This chapter describes how the methodology came to be published in 1994. It gives an
overview of the methodology, allowing the reader to understand the basic process that
leads to a Value at Risk number. The role of the RiskMetrics data is shown in the variance-
covariance calculation. The mapping of cashflows to risk dimensions, which leads to the
risk weights for the portfolio, is reviewed by asset class. Central assumptions are analysed
within the context of different asset class and markets, to determine if they are generally
true for all asset classes and market conditions. A variation of the methodology, used to
capture non-linear risks, is outlined. This will be the subject of detailed assessment in a later
chapter. Omissions in the methodology, which may be encountered when implementing it
on a real life portfolio, are described in detail, and ways around these omissions are
discussed. Finally, the methodology is put in the context of a typical bank's proprietary
trading books, and the strengths and weaknesses re-examined, by looking at the parts of
the portfolio's risk profile that would not be captured by a RiskMetrics treatment.

The RiskMetrics Group has rightly stressed in recent documentation (Mina & Yi Xiao,
2001) that RiskMetrics is a toolset of complementary methodologies, combined with a
unifying data set that can be used with all the tools. It is unfair to cite weaknesses in one of
these tools as a weakness of the RiskMetrics product as a whole. A prime example of this is
non-linear risk under the original RiskMetrics variance-covariance methodology. Like all
variance-covariance approaches, RiskMetrics was weak at assessing the risk on options
positions, making it inappropriate for large sections of the financial marketplace. However,
the RiskMetrics user can now get a linear profile of their portfolio using the variance-
covariance approach, then use Monte-Carlo analysis of subsets of the same data to
establish more complex risk profiles and potential hot spots.

Despite this weakness, the variance-covariance implementation was the predominant use of
RiskMetrics during the important early years of VaR adoption.
This research will investigate whether the provisions made for option risk at this time were
adequate, and indeed may still be appropriate for a fast business snapshot.

3.2 RiskMetrics in Summary


RiskMetrics started as a variance-covariance approach to value at risk measurement. It is
the variance-covariance approach for which RiskMetrics is best known. The original
methodology maps the portfolio to a set of fundamental risk dimensions, which captures
the risk profile of the portfolio, while reducing the scale of the processing required to
estimate the change in value of the portfolio for a simulated market scenario. These
dimensions can be thought of as a set of atomic components for the financial markets.
These fundamental market sensitivities are not expected to be independent of one another,
and are more likely to relate to liquid rates and prices that can be observed in the market23.

23 In the interest rate markets, interest rate sensitive cashflows are usually mapped to one of a number of market maturities, corresponding to the rates quoted in the market. Alternatively, a principal components approach would derive abstract concepts such as parallel shift, gradient change and rotation that explain the changes in the observed market rates, and map the cashflows to the principal components.


The mapping process, between the reference portfolio positions and the set of risk
dimensions, determines a set of weights for the risk dimensions. These weights dictate the
risk profile of the model portfolio. An estimate of the variance of the reference portfolio,
and thus the value at risk of the reference portfolio, is derived from the weights and the
covariances between the risk dimensions. The central calculation determines Value at Risk
at 95% confidence, as 1.65 times the standard deviation of the model portfolio.

3.3 History of the methodology


JP Morgan based the RiskMetrics methodology on internal risk calculations, and placed it
in the public domain in October 1994. More recent accounts have claimed that the published
methodology was JP Morgan's own internal methodology (Mina & Yi Xiao, 2001). Market data
required in the methodology is still provided free of charge on the internet.24 This has been
one of the keys to the methodology's success. The most challenging aspect of a Value at
Risk implementation is to put in place an operational structure to assemble and clean the
data required to support the implementation. By taking this burden away from the risk
manager, RiskMetrics became a very attractive option for risk management reporting,
provided that the risk manager could live with any limitations in the data set. JP Morgan
initially sold an Excel add-in, FourFifteen, which would calculate RiskMetrics VaR values
for a reference portfolio input by the user. The RiskMetrics Group is now marketing
RiskManager, a software application that replaces the spreadsheet add-in. Most risk
management software suppliers, such as Infinity and Summit, also developed support for
RiskMetrics. This gives the methodology a practical advantage over theoretical papers that
may have had more technical merit. The risk manager adopting RiskMetrics has only to
worry about how to get a large portfolio updated into the chosen implementation on a
daily basis. The methodology is the best known approach for measuring risk in the public
domain, and as such has become a boardroom de facto standard. RiskMetrics boasts more
than 5,000 users of its products, although there is no indication of the identity of specific
clients.

One of the early strengths of the methodology is its roots in the practical experience of risk
managers in JP Morgan in the early nineties. The methodology is a compromise between
theory and pragmatism. This means that implementation issues, such as data collection,

24 The RiskMetrics Group has introduced a delay of six months in updating this data. Up to date data must be bought.


have been thought through and resolved for the methodology user. The user will not set
down a path of implementation, only to find that an essential part of the methodology has
been deferred for later study. More recently, the methodology has been strengthened through
the experience of implementing it at a wider base of institutions with the FourFifteen and
RiskManager software packages. The RiskMetrics Group has enhanced
the software according to customer demands. This implicit extension of the methodology
has now been formalized in the document "Return to RiskMetrics: Evolution of a Standard" (Mina &
Yi Xiao, 2001).

3.4 JP Morgan
JP Morgan was a US investment bank, offering a wide range of financial services to
financial institutions and wealthy individuals. It was part of the Morgan Guaranty Trust
Company. In September 2000, the bank announced a merger with Chase Manhattan to
form J. P. Morgan Chase & Co. The principal subsidiaries remained. In November 2001,
the Morgan Guaranty Trust Company of New York and The Chase Manhattan Bank were
merged into a new entity, JPMorgan Chase Bank.

3.5 Reuters
Reuters is the largest information supplier in the world. Trading rooms around the world
use Reuters software to feed market data on to the computer screens. They also have a
large news operation and a trading and risk management software business. They were an
obvious choice to whom JP Morgan could delegate data cleaning and publishing. They
recognised that this gave them great potential for downstream revenues, which they have
begun to realise as the free data service on the internet has become less current.

3.6 Context and development


At the time that JP Morgan launched RiskMetrics version 2 into the public domain, many
institutions were coming to grips with the need for better risk management, following the
publication of the G30 review of derivatives trading (G30, 1993). JP Morgan published a
variant of procedures that were common practice among the larger or more advanced
banks. Even some lower ranking banks had started implementation of similar variance-
covariance methodologies. Details of these other implementations varied from bank to
bank, but JP Morgan established a benchmark framework, within which other institutions
could develop customised risk reporting for their own exposures. This customisation
process has fed back into the RiskMetrics methodology, as weaknesses are addressed in
subsequent versions of the methodology or editions of the RiskMetrics Quarterly
publication. Chief developments have been:

Technical document v3 (1995): inclusion of commodities; widening of the dataset to include
other currencies

Technical document v4 [TD4] (1996): non-linear instrument coverage

Evolution of a Standard (2001): unification of tools; change to mapping; pricing from
spread curves

Table 12: Development of RiskMetrics

3.7 The Heart of RiskMetrics


In its variance-covariance form, the RiskMetrics transform for a reference portfolio is the
generic variance-covariance form of (2.1), i.e.:

Value at Risk = z √(W·C·Wᵀ) (3.1)

which they write as:

VaR = √(V·C·Vᵀ) (3.2)

In 3.2, C is the correlation matrix for the dimensions to which the reference portfolio is
mapped. The RiskMetrics Group provides this matrix, for a fixed set of risk dimensions.
The matrix is calculated using the return series of the risk dimensions. The vector V is the
set of weights for the risk dimensions, which represents the model portfolio, multiplied by
the dimension VaR. The vector is formed from an element-by-element multiplication of the elements, wi,
of the vector W in 3.1 and the corresponding elements zσi of a second vector Σ, supplied
by RMG. Σ contains the square roots of the diagonal elements of the covariance matrix C
from equation 3.1, multiplied by the quantile factor z that represents the desired
confidence limit.

V = [w1·zσ1, w2·zσ2, w3·zσ3, ...., wn·zσn]ᵀ (3.3)


Each element Vi of the vector V represents the exposure, wi, to a risk dimension Di in the
model portfolio, multiplied by the RMG one-day 95% VaR for the dimension, zσi.
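
The following sketch puts (3.2) and (3.3) into Python for a small illustrative portfolio; the exposures, volatilities and correlations are invented for the example and do not come from the RiskMetrics data set:

import numpy as np

z = 1.65                                            # 95% one-tailed quantile used by RiskMetrics
w = np.array([1_000_000, -500_000, 250_000])        # exposures to three risk dimensions
sigma = np.array([0.010, 0.007, 0.012])             # daily volatilities of the dimensions
C = np.array([[1.00, 0.60, 0.20],
              [0.60, 1.00, 0.35],
              [0.20, 0.35, 1.00]])                  # correlation matrix for the dimensions

V = w * z * sigma                                   # vector of weighted dimension VaRs, eq (3.3)
var_95 = np.sqrt(V @ C @ V)                         # eq (3.2)
print(var_95)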

3.8 Time horizon


For linear assets over short periods of time, the methodology assumes that price changes
do not contain serial correlations. This means that each successive movement has not been
influenced by any of the movements preceding it. For instance, if a rate or price goes up in
an observation period, it is equally likely to go up or down in the next observation period.
In this case, the standard deviation corresponding to t observation periods, each with
standard deviation σ, is simply σ√t. The RiskMetrics transform can then be expressed as a
Value at Risk (VaR) for a specified time horizon t as follows:

VaR = √t · √(V·C·Vᵀ) (3.4)

Time horizons of one day and ten days are both common. One day is a practical horizon
from the point of view of data collection, and is perfectly adequate for liquid markets and
normal market conditions, in which positions can be exited quickly. At a 95% confidence
level, a one-day VaR is a useful indicator for a risk manager looking to manage the risk
profile of the reference portfolio from day to day. For a measure that better reflects the
maximum likely loss of the firm, a ten day time horizon and higher confidence level are
appropriate, and indeed necessary if seeking regulatory approval for the VaR process. For
this situation, although the regulator does allow scaling with (3.4)25, RiskMetrics provides a
ten-day data set that may be used directly in (3.3) and (3.2).

3.9 RiskMetrics standard deviations


RiskMetrics makes four assumptions about the risk dimensions of the model portfolio:

1) standardised logarithmic returns on financial prices are distributed according to
the univariate normal distribution;

2) standardised logarithmic returns on pairs of financial prices are distributed
according to the multivariate normal distribution;

3) Returns are independent across time;

25 In this case, the regulator must be satisfied that option risk is not understated by this approximation (FSA, 1999, ch TV)


4) The expected logarithmic return on a financial asset is zero26.

These assumptions are fundamental to the framework that RiskMetrics uses to calculate
risk. The single period logarithmic return, rx,t, of a risk dimension x, for the period t, is
expressed in terms of the prices px,t and px,t-1 as follows:

rx,t = log(px,t) − log(px,t−1) (3.5)

The logarithmic return is assumed to follow a normal process with mean zero and time-
varying volatility σx,t:

rx,t = σx,t ε, ε ∼ N(0, 1) (3.6)

Logarithmic returns are thus assumed to be univariate conditional normal (conditional on
time).

The estimate of the standard deviation is calculated using exponentially weighted
observations of the returns. At time t, we may have a set of n historical returns Xt, Xt−1, ....,
Xt−j, ...., Xt−n. When calculating the variance estimator, the square of the jth return is
given a weight λ^j, 0 < λ < 1. This will give more weighting to the most recent observations,
with the weighting gradually declining as the observations get further away in time. The
formulation for the variance estimator at time t, σ̂²t, is as follows:

σ̂²t = [(1 − λ) / (1 − λ^(n+1))] Σj=0..n λ^j X²t−j (3.7)

where the mean of the returns has been assumed to be zero. This can be stated recursively
as:

σ̂²t = (1 − λ) X²t + λ σ̂²t−1 (3.8)

RiskMetrics uses 550 daily returns to produce the 1-day correlations and 1-day volatilities
that they provide. However, the choice of the exponential decay parameter λ means that
beyond the first 75 observations, the sum of all the weights is less than 1%, i.e.

26 This last assumption is intended to remove the possibility that any estimate of the mean based on time series data may
be largely inaccurate.


(1 − λ) Σj=l+1..∞ λ^j < 0.01, l = 75 (3.9)

The decay factor λ is chosen by minimising root mean square error (RMSE) between
estimated daily variance and subsequent observation of the daily squared return27. Original
research by JP Morgan showed that λ usually took a value in the interval [0.9, 1]. For
simplicity, they chose a single value of λ, a weighted average of the optimal λ across 480
time series, and this value is still used today. The value used is 0.94 for daily data sets, or
0.97 for monthly data sets.
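
A sketch of the recursive estimator (3.8), using the published daily decay factor λ = 0.94 and a synthetic return series of 550 observations; seeding the recursion with the first squared return is a simplification of my own:

import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted volatility, updated observation by observation as in eq (3.8)."""
    var = returns[0] ** 2                      # seed the recursion with the first squared return
    for x in returns[1:]:
        var = (1 - lam) * x ** 2 + lam * var
    return np.sqrt(var)

rng = np.random.default_rng(1)
log_returns = rng.normal(0, 0.01, 550)         # 550 synthetic daily log returns
print(ewma_volatility(log_returns))

# Normalised weight carried by observations beyond the first 75, cf. eq (3.9)
print(0.94 ** 76)                              # approximately 0.009, i.e. under 1%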

3.10 RiskMetrics Covariances and Correlations


To calculate correlations, RiskMetrics estimates covariances using exponential weighting.
The covariance estimate σ̂²xy,t at time t, between two market sensitivities x and y, with return
series Xt, Xt−1, ...., Xt−j, ...., Xt−n and Yt, Yt−1, ...., Yt−j, ...., Yt−n, is similar to the variance
estimate:

σ̂²xy,t = [(1 − λ) / (1 − λ^(n+1))] Σj=0..n λ^j Xt−j Yt−j (3.10)

where the same value of lambda has been used.

The RiskMetrics correlations are then given by

ρ̂xy,t = σ̂²xy,t / (σ̂x,t σ̂y,t) (3.11)
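
The covariance (3.10) and correlation (3.11) estimators can be written in the same way; a sketch with two synthetic, correlated return series follows, again with λ = 0.94 and a direct weighted sum rather than the recursive form:

import numpy as np

def ewma_cov(x, y, lam=0.94):
    """Exponentially weighted covariance of two return series, eq (3.10)."""
    n = len(x)
    weights = (1 - lam) * lam ** np.arange(n) / (1 - lam ** n)
    return np.sum(weights * x[::-1] * y[::-1])   # the most recent observation gets the largest weight

def ewma_corr(x, y, lam=0.94):
    """Exponentially weighted correlation, eq (3.11)."""
    return ewma_cov(x, y, lam) / np.sqrt(ewma_cov(x, x, lam) * ewma_cov(y, y, lam))

rng = np.random.default_rng(2)
x = rng.normal(0, 0.01, 550)
y = 0.5 * x + rng.normal(0, 0.008, 550)          # correlated synthetic series
print(ewma_corr(x, y))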

3.11 Data mapping


The vector V of weighted risk dimensions represents the model portfolio. It is derived by
mapping portfolio exposures to the risk dimensions in the RiskMetrics model. The
approach to mapping depends on the asset class of the portfolio exposure. The objective is
to create a synthetic position in the portfolio exposure, using only exposures in the
RiskMetrics risk dimensions. The synthetic position must have risk characteristics the same as
or similar to the original. This does not necessarily lead to a unique mapping, particularly
when mapping cash flows along the maturity points of the yield curve, when any number

27 [TD4: section 5.3.2].


of combinations of cashflows may be used at the maturity points to provide fixed or
floating risk exposure at a given date. The RiskMetrics methodology adds a constraint to
set the present value of the cashflows of the model portfolio to be the same as the
reference portfolio. This leads to a unique solution for cash mapping.

3.11.1 Fixed Income


The Fixed Income class of financial instruments covers interest rate products on which
regular coupon payments are received, such as interest rate swaps and bonds. To reduce
the complexity of the calculations, the risk dimensions in interest rate markets are restricted
to 14 maturity points, or vertices. The actual portfolio cashflows are mapped to these
vertices plus cash holdings. The method for mapping a cashflow which falls between two
vertices is subject to the following restrictions:

1) The net present value (npv) of the cashflows on the two vertices must be the same as
the npv of the original cashflow. The value of the synthetic position is thus the same as the
actual exposure.

2) The exposure of the synthetic position to changes in the interest rates at the vertices is
the same as the actual exposure.

3) The zero coupon interest rate on the original cashflow date is a linear combination of
the zero coupon interest rates on the two vertices.

zt = α zL + (1 − α) zR (3.12)

These conditions lead to the following equations for the present values of the weights:

WL = α (t / tL) Vt (3.13)

WR = (1 − α) (t / tR) Vt (3.14)

WC = −[(t − tL)(tR − t) / (tR tL)] Vt (3.15)


The cash holding bears no interest rate risk, but is required to maintain the sensitivity to
foreign exchange risk.
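
As an illustration of the mapping, the following sketch implements (3.13)-(3.15) for a single cashflow falling between two vertices. Taking α as a linear interpolation weight in maturity is my own reading of (3.12), and the present value and vertex maturities are invented; the three weights recover the original present value, as required by the first restriction.

def map_cashflow(pv, t, t_left, t_right):
    """Split the present value pv of a cashflow at maturity t (years) between the
    neighbouring vertices at t_left and t_right plus a cash amount, following (3.13)-(3.15)."""
    alpha = (t_right - t) / (t_right - t_left)                           # interpolation weight, cf. (3.12)
    w_left = alpha * (t / t_left) * pv                                   # eq (3.13)
    w_right = (1 - alpha) * (t / t_right) * pv                           # eq (3.14)
    w_cash = -((t - t_left) * (t_right - t) / (t_right * t_left)) * pv   # eq (3.15)
    return w_left, w_right, w_cash

# Illustrative example: a PV of 1,000,000 at 6 years, between the 5-year and 7-year vertices
wl, wr, wc = map_cashflow(1_000_000, 6.0, 5.0, 7.0)
print(wl, wr, wc, wl + wr + wc)    # the weights sum back to the original present value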

The issuer-specific element of the risk, which is independent of the yield curve, is ignored
in RiskMetrics. Cashflows can be priced from the corporate curves, but the corporate
curves are not included as risk dimensions. The only representation of issuer risk is a
spread risk between swap curves and government bonds, captured as the correlation
between these two price series.

3.11.2 Foreign Exchange


Foreign exchange instruments include spot and forward FX deals, FX swaps, cross-
currency interest rate swaps and FX options. They are mapped as their underlying
cashflows for interest rate risk. These cashflows are discounted to the spot date to
represent the spot exchange rate risk. Some other instruments will also have implicit
foreign exchange deals built into them – for instance a cross-currency convertible, which
converts from a bond in one currency (often dollars) to shares in another currency, at a
predefined price. In fact, all financial instruments will carry foreign exchange risk, with the
exception of those that have cashflows only in the base reporting currency. This is the
currency in which risk is quoted, usually the domestic currency of the ultimate domicile of
the institution, although it can be any currency, and a choice of dollars is also common for
institutions domiciled outside the US. The RiskMetrics data set supports only 21
currencies. Some currencies not covered are pegged to the dollar, so would be mapped to
the dollar risk dimension, if the base reporting currency is not itself dollars. Legacy euro
currencies such as the Deutschemark have been subsumed into the Euro market data.

3.11.3 Equity
Equity instruments include shares, share options and share indices. Other instruments,
such as convertible bonds, also carry equity risk. The RiskMetrics methodology assumes
that the issuer-specific element of equity risk is negligible. This is equivalent to stating that
the reference portfolio is well-diversified with respect to the index. Equity positions are
mapped to one of 32 country indices, using the equity beta value. The beta value represents
the responsiveness of the individual equity to movement in the respective index. It is a
feature of the Capital Asset Pricing Model of equities. If the equity price series is denoted
x={xi} and the corresponding index price series is denoted m={mi}, with i finite, the beta
for the price series is calculated as the ratio of the covariance σ²xm,t and the index variance
σ²m,t:

βx = σ²xm,t / σ²m,t (3.16)

although in practice these values are widely available through data providers such as
Bloomberg. This can be written equivalently in terms of the correlation and the volatilities:

βx = ρxm,t (σx,t / σm,t) (3.17)

The beta can thus be thought of as the correlation between the stock and the market,
scaled by the relative volatility of the stock. A riskless asset has a correlation with the
market of zero and therefore a beta of zero. The market index itself has a beta of one.
Stocks may have a beta greater than or less than one, and they can even be negative if the
correlation is negative, although this is rare in empirical data. The beta is used as an
additional weighting when mapping a position to an index. For instance, a position of
150,000 shares in the UK market with a current price of £4 and a beta of 1.2, when
mapped to the FTSE-100, will become a weighted notional risk of

(150,000 x £4) x 1.2 = £720,000 (3.18)
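
The beta estimate and the weighting of (3.18) can be sketched as follows. For simplicity the covariance here is the ordinary equally weighted sample estimate, whereas the RiskMetrics data would use the exponentially weighted estimators of section 3.10; the return series are synthetic.

import numpy as np

def beta(stock_returns, index_returns):
    """Ratio of the stock/index covariance to the index variance, eq (3.16)."""
    cov = np.cov(stock_returns, index_returns)
    return cov[0, 1] / cov[1, 1]

rng = np.random.default_rng(3)
index_r = rng.normal(0, 0.011, 250)
stock_r = 1.2 * index_r + rng.normal(0, 0.015, 250)    # a stock with a 'true' beta of 1.2

b = beta(stock_r, index_r)
position_value = 150_000 * 4.0                         # shares x price, as in (3.18)
print(b, position_value * b)                           # weighted notional mapped to the index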

3.11.4 Commodities
Commodities include precious and base metals such as gold, silver and copper, energy contracts for gas
or oil, and agricultural output such as orange juice, cotton and of course pork bellies. All
commodities are represented as standardised commodities futures contracts. Contracts
falling between the standard vertices are mapped analogously to interest rate cash flows.
However, the number of vertices is far smaller, with only 4 points used for a given
commodity.

3.12 Inclusion of non-linear instruments


Non-linear portfolios, for example with significant option positions, can be modelled in
one of three ways. For reasons of simplicity, the option can be represented as the delta-
weighted underlying. This will capture the linear component of the option’s sensitivity to
price changes in the underlying at the moment that the delta has been calculated. The delta-
weighted underlying is then mapped using the applicable method for the underlying
financial instrument of the option. Sensitivities of second or higher order are ignored using
this method. Exposure to changes in implied volatility is also ignored. For cases in which
these approximations are too crude, the covariance flavour of RiskMetrics also provides
for the entire portfolio return to be modelled using its moments. This second method is
described in detail in chapter 5.

The third method for modelling options in a portfolio is partial or full simulation. Full
simulation requires the option to be priced exactly for a series of market shifts, derived
from historic data. Partial simulation requires that the option be priced approximately,
using its delta and gamma. In both cases, the resultant portfolio loss distribution gives the
95% one-tailed loss directly. For a larger portfolio containing options, this loss must be
somehow merged back in to the portfolio VaR calculation, or else the simulation must be
run for the whole portfolio.
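
A sketch of the partial simulation idea: the option position is revalued approximately from its delta and gamma under simulated moves in the underlying, and the one-tailed loss is read off the resulting distribution. The greeks, the daily volatility and the assumption of normally distributed spot moves are all illustrative, and vega exposure is ignored, as noted above.

import numpy as np

def partial_simulation_var(spot, delta, gamma, vol_daily, n_scenarios=10_000,
                           confidence=0.95, seed=0):
    """Approximate one-day VaR of an option position from its delta and gamma,
    revaluing the P&L as delta*dS + 0.5*gamma*dS**2 under simulated spot moves."""
    rng = np.random.default_rng(seed)
    dS = spot * rng.normal(0, vol_daily, n_scenarios)    # simulated one-day price changes
    pnl = delta * dS + 0.5 * gamma * dS ** 2
    return -np.percentile(pnl, 100 * (1 - confidence))

# Illustrative greeks for a short, near-the-money call position
print(partial_simulation_var(spot=100.0, delta=-55.0, gamma=-4.0, vol_daily=0.012))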

3.13 Limitations
The limitations of RiskMetrics fall into two categories:

1) limitations owing to assumptions of the methodology, which may break down
under certain circumstances;

2) limitations owing to features of the methodology framework that fail to cover
certain classes of risk.

Each of these categories is expanded below.

3.14 Assumptions
Univariate Normality of standardised log returns
The assumption is that a log return, standardised by an appropriate measure (in this case,
the instantaneous volatility), will conform to a normal distribution with measurable
variance. JP Morgan has presented the results of its analysis of conditional28 normality in
the markets [TD4: Appendix A].

28 Conditional on time


Firstly, a series of Q charts for the major interest rate and FX markets29 shows the
standardised quantiles of the sampled standardised return distribution, versus the quantiles
of the normal distribution N[0, 1]. If the sampled distribution is also normal, and no
sampling error is present (i.e. a large sample has been taken) the quantile-quantile plot will
form a straight line. Visual inspection of the line often shows some linearity of the quantile-
quantile plot around the central peak of the distribution, but considerable deviation from
the straight line in the tails. Money market prices show more deviation than foreign
exchange rates. Using the Q chart, it is possible to derive a correlation coefficient for the
two series of quantiles, which can be used as a test statistic for the hypothesis that the
sampled distribution is normal.
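
The quantile correlation described above can be sketched as follows; the returns are standardised here by their sample standard deviation rather than an instantaneous volatility estimate, the data is synthetic, and the critical values used by JP Morgan are not reproduced:

import numpy as np
from scipy.stats import norm

def qq_correlation(returns):
    """Correlation between sample quantiles of standardised returns and N(0,1) quantiles."""
    std_returns = np.sort(returns / returns.std())
    n = len(std_returns)
    probs = (np.arange(1, n + 1) - 0.5) / n            # plotting positions
    normal_q = norm.ppf(probs)
    return np.corrcoef(std_returns, normal_q)[0, 1]

rng = np.random.default_rng(4)
print(qq_correlation(rng.normal(0, 1, 1000)))          # close to one for normal data
print(qq_correlation(rng.standard_t(3, 1000)))         # lower for a fat-tailed distribution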

The results show that short term interest rate markets are not conditionally normal. JP
Morgan’s intuition for this result is that the intervention of central banks introduces a jump
element in the return distribution. By comparison, foreign exchange markets are broadly
conditionally normal, although the standardised log returns display evidence of non-
normality in the tails.30 This is examined in detail in [TD4], which presents tables of the
first four moments observed in 48 time series of foreign exchange rates.

Departure from conditional normality may not be an issue when deriving VaR numbers at
the 95% confidence limit, but the divergence is greater at the 99% level31. This is a problem
when RiskMetrics is being used with the regulatory dataset, which is captured at the 99%
confidence level. For management reporting purposes, therefore, it is not a material issue,
but for regulatory purposes, it may be.

Multivariate Conditional Normality


If the distribution of the risk dimensions, when taken as a whole, is not multivariate
conditional normal, then it is not strictly valid to assume that the 95% confidence limit of
the loss will occur when all the risk dimensions are themselves at the 95% confidence limit
of their losses, when adjusted for correlation. If a single risk dimension is found not to be

29 In version three of the document, many markets were covered, but by version four this was limited to the dollar/mark
exchange rate and 3 month sterling offer rate.
30 A common misconception is that RiskMetrics does not take into account fat tails. In fact, the tails of the non-
standardised return distribution can be fat even if the standardised return distribution is normal.
31 Pafka & Kondor (2001) argue that the choice of confidence limit of 95% has been one of the reasons for RiskMetrics’
comparative success, since many fat-tailed distributions diverge from the normal significantly only at higher quantiles.


univariate conditional normal, then it follows that the combined distribution of all the risk
dimensions will not be multivariate normal. This will lead to an error in the VaR estimate.

Returns are serially uncorrelated


The assumption of no serial correlations allows the scaling of daily VaR by √t32. This
assumption means that each log return observation is independent of the previous one. A
counter-intuition to this might be that if a return is negative, then we expect the return for
the next period to be negative (i.e. we think it is more likely to be negative than positive).
JP Morgan demonstrated that there was little evidence of autocorrelation in the
USD/DEM foreign exchange markets or the US stock market [TD4: chapter 4]. However,
the variances of the returns were found to be autocorrelated. This is equivalent to an
intuition that if there is a large return of either sign, then the following day we can expect a
large return, either positive or negative. Under these circumstances, scaling by √ t may
underestimate the risk.

Alexander & Leigh (1997) analyse the performance of different volatility measures in
capturing the tail of a distribution. Using the RMSE, they show that the 1-day
exponentially weighted volatility estimate is generally a good measure of the centre of the
distribution. However, scaled33 as a 10-day estimator of the 99% limit of the distribution,
the measure performs poorly, underestimating VaR enough times to concern a regulator34.

3.15 Features
Option Risk
The delta-weighting approach to option portfolios could lead to potentially significant
underestimation of the risks in the portfolio. Gamma (curvature) and vega (implied
volatility) risks can be very significant. Gamma risk can be particularly relevant when an
option is close to expiry, for which a small change in the underlying can change the
probability of positive pay-off from close to one to close to zero. Vega risk will be
significant whenever the option has a significant value. These are sources of P&L
fluctuation that are commonly found in derivative portfolios, and should be captured by a

32 This also implies that volatility remains constant for the period of the scaling, although RiskMetrics does not assume
constant variance.
33 The one day estimate of volatility is scaled by a factor of √10 x 2.33.
34 The equally-w eighted moving average that is used for the RiskMetrics regulatory dataset, although academically less
appealing, performs much better.


VaR reporting framework. It is important to remember here that the genesis of VaR
reporting was the G30 report into derivatives trading. The RiskMetrics Group
recommends using simulation to model these exposures, which is an adequate response
where the required data and compute power is available. However, the RiskMetrics Group
does not make implied volatility data available for end users to simulate their volatility
exposure.

If speed and efficiency are important, then RiskMetrics proposes a way of modelling the
VaR directly from the portfolio moments, called the delta-gamma approach. The main
weakness of this approach is that the particular implementation35 is unique to RiskMetrics
– no other methodology has adopted a similar approach to measure option portfolio risk.
This technique is the subject of chapter 5.

Equities
A second weakness is the failure to account for specific equity risk. This is the component
of an equity price that is driven by market sentiment about the issuer of the equity, rather
than the stock market in general. RiskMetrics adopts an approach, common among VaR
practitioners, which assumes that these specific elements have a negligible effect on the risk
of the reference portfolio, mapping each equity to a single index in the country of issue.
Portfolio theory shows that specific risk sums to zero in a well-diversified portfolio36. A
RiskMetrics measure will therefore not be representative of the equity risk of a reference
portfolio, if that portfolio is not well diversified.

More sophisticated treatments may differentiate between market sentiment about the
issuer, the country of issue, the main industries in which the issuer operates, and the global
stock market37. Most financial institutions will not be trading a well-diversified portfolio,
choosing instead to take concentration risk of one form or another, with the intention of
earning a better return than from a diversified portfolio, such as the market index. VaR
measures can be enhanced to include these extra risk dimensions, provided that the
appropriate data is available. In fact, Reuters collects some variance data for the
CreditMetrics methodology, which relies on the breakdown of equity prices by sector.

35 Any method that makes a quadratic approximation to the option value using the option gamma can be termed a delta-
gamma approach. Mina & Ulmer (1999) compare four applications of the delta-gamma approach.
36 See Elton & Gruber (1995), chapter 13
37 See Elton & Gruber (1995), chapter 16


Chaumeton et al (1995) perform a four-factor analysis of equity returns, developing a
model with an explanatory power (measured by R2) of between 0.31 and 0.54. This tells us
that we ignore specific risk of equity returns at our peril – it may be up to 70% of our total
risk in an individual equity position.

Corporate Bonds
A third apparent weakness is the failure to account for corporate bond risk factors and
credit spread risk. Although appropriate yield curves can be used to value corporate risk
cash flows, they are mapped to either inter-bank or sovereign risk dimensions. No attempt
is made to model the volatility of the corporate bond price and the risk of the credit
spread.

The credit spread is a premium that the market demands for assuming a risk with less
certainty of repayment than sovereign debt in a first world economy. The market risk
manifests itself as a potential change in this premium through general market sentiment.
This risk can make corporate bond returns highly non-normal. The most extreme cases are
characterised by jumps, prompted by high profile defaults, such as Barings, the Russian
debt crisis, Enron and WorldCom. Unlike interest rate jumps, credit spread jumps are
asymmetric, and may be followed by a gradual decay as market confidence returns to
normal levels. However, credit spreads vary by smaller amounts on a daily basis.

To see the impact of this risk in more detail, we can consider a fixed income bond issued
by a low-rated institution, which has been swapped into a floating rate exposure with a
higher-rated institution. RiskMetrics would view this as a very low risk position, since the
fixed cash flows would net almost to zero, and the floating exposure bears risk only at the
first repayment date (the current fixing) and at maturity, which nets with the principal
repayment of the bond38. However, if there is an increase in the credit spread, the cost of
replacement of the bond cash flows reduces, and so the bond price reduces. The swap
value is unaffected by this change.

The RiskMetrics team have argued that the spread risk, being a credit risk, falls outside the
market risk framework39. However, the risk is not accounted for in the RiskMetrics
Group's own credit risk methodology, CreditMetrics, which considers only the specific

38 There would be residual risk at maturity, since the bond cashflow would be priced using the appropriate risky curve,
whereas the swap notional would be priced using LIBOR.
39 Private communication from Jacques Longerstaey.


counterparty-related risks of transition and default. This spread risk rightly sits within a
market risk framework, since it is a factor in the market price of the exposure. With the
growth of the credit derivatives market, in which credit exposure is bought and sold, activity
must eventually reach the point at which financial institutions acknowledge this requirement and amend
their risk reporting infrastructures. The only argument for omitting the risk would be if the
daily volatility of the spread was small and the frequency of larger jumps small enough that
it belonged in a stress scenario rather than a management report.

Even discounting the impact of the credit spread, there is no reason why corporate bond
returns should be conditionally normal, with the same volatility as an interest rate swap.
The market sentiment towards the bond issuer will directly influence the return volatility.
This is an issuer-specific risk, akin to the specific risk of an equity discussed in the previous
section. Despite the presence of this risk, research has shown across a wide range of
markets that movements in underlying interest rates largely explain many bond returns.
Chaumeton et al. (1995) show that bond returns can be explained in terms of yield curve
shift and twist, with an explanatory power (again measured by R2 ) ranging between 0.88
and 0.97, over periods as long as nine years. These values are much higher than the
equivalent measures for the four-factor equity model. Unfortunately, the research does not
record the breakdown of the bonds by credit rating, but the sample could have been biased
towards investment grade securities, for which such long periods of data would be
available. Such securities could be expected to show low levels of specific risk.


3.16 Conclusion
RiskMetrics is currently in its fourth edition, after being released six years ago. In this time,
many lesser financial institutions have adopted it as part or all of their risk management
methodology. This success has come in spite of significant shortfalls. Many of the
assumptions contained within the methodology do not
stand up to comparison with financial time series data. The methodology also has
significant omissions, particularly with reference to option risk, corporate assets and
equities. Some of these shortfalls have been addressed with expansion and revision. The
methodology has been expanded to take account of the non-linearity of options. The
ownership of the methodology, its stablemate, CreditMetrics, the RiskManager and
CreditManager software implementations of these methodologies, and the collection and
publication of data, has now been spun off into a separate entity, the RiskMetrics Group,
jointly owned by JP Morgan and Reuters. The RMG published an update to the
RiskMetrics technical documentation, Evolution of a Standard (Mina & Yi Xiao, 2001). This
made significant changes to the methodology, mainly to mirror changes to the
RiskManager software, which have come about as a direct result of feedback from
implementations. In the next chapter, we look at a theoretical implementation, at a
financial institution with a diversified trading portfolio, and we assess the features of
RiskMetrics that may still become problematic under such an implementation.

4 RISKMETRICS IMPLEMENTATION

4.1 Introduction
The RiskMetrics methodology has been implemented successfully at many institutions
around the world. Despite this success, some institutions will have found the methodology
provides an incomplete model of their reference portfolios. All risk management
implementations require a degree of compromise, regardless of the methodology. These
compromises range from convenient assumptions about return distributions, through
coping with incomplete data, to approximations when modelling the return of an asset.
RiskMetrics is no exception. These compromises are what make a methodology practical, a
key requirement for a successful methodology. However, a risk manager must go into an
implementation with open eyes, weighing up each compromise, assessing the impact for
the reference portfolio, evaluating the materiality of a mis-statement of risk. For a
methodology to be successful, it must be seen to represent the risks of the reference
portfolio over a broad range of circumstances: 246 days out of 250 to satisfy the
regulator40. The weaknesses of RiskMetrics that we identified in the previous chapter will
lead to compromises for impacted portfolios. The materiality of any risk mis-statement will
depend on the characteristics of the trading. The risk manager must decide if this mis-
statement is acceptable, and consider the options available to tailor the methodology to
include the risk. In this chapter we look in detail at the kinds of portfolios that will cause
problems. We also look at the solutions available to the risk manager, and the factors that
may influence deciding between different approaches.

40 A 99% one-tailed VaR estimate should be exceeded one day in 100, or 2.5 days in 250. The regulator allows 4
exceptions in 250 days before putting a VaR model into the yellow zone, incurring a higher capital charge on the VaR
number generated.

4.2 RiskMetrics in the trading room


Before reviewing the performance of RiskMetrics on typical portfolios at a bank, we review
some features of the RiskMetrics treatment that will challenge the risk manager, as
discussed in the previous chapter:

Feature | Not valid for
Standardised logarithmic returns on financial prices are distributed according to the univariate normal distribution. | Interest rate markets.
Standardised logarithmic returns on pairs of financial prices are distributed according to the multivariate normal distribution. | Interest rate markets.
Returns are independent across time. | Variances of returns in stock and foreign exchange time series.
RiskMetrics data sets cover money market, swaps and government bonds in 18 markets, 21 foreign exchange rates, 32 stock indices and eleven commodities. | Emerging market trading outside these data sets; volatility risk; corporate bonds.
RiskMetrics Delta-Gamma approach. | Untested.
Specific risk not measured for corporate liabilities. | Non-diversified portfolios.
Specific risk not measured for equity markets. | Non-diversified portfolios.

Table 13: Challenges of RiskMetrics

The following breakdown characterises the problems that a typical middle-tier bank would
encounter applying RiskMetrics.


4.3 Data
Gaps in the risk dimensions present problems in all markets. Whenever a deal is transacted
in a currency that is not contained within the data set, the risk manager must make a
decision on how that currency is to be treated. If the deal is one of only a few in that
currency, it is quite possible that the currency can be modelled using a proxy that is
contained within the data set. This becomes a straightforward decision if the currency is
tied to the proxy using a fixed exchange rate, although this raises the question of
devaluation risk 41. This method is the most common among practitioners.

An extension of this approach is to use a proxy, and then track the model risk involved in
using the proxy instead of the actual currency. This could be estimated, for instance, by
calculating a tracking error between the proxy currency forecast variance and the reference
currency return, and using this tracking error to estimate an add-on to the VaR calculation,
following the RiskMetrics processing. The procedure for incorporating a tracking error into
a VaR calculation is outlined in RiskMetrics Monitor (JP Morgan, 1996a: pp34-35)42.
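As a rough illustration of such an add-on (the function below and its inputs are my own sketch, not part of the RiskMetrics documentation), the tracking error can be estimated as the volatility of the difference between the reference and proxy currency returns, and combined with the proxy-based VaR as if it were an independent source of risk:

    import numpy as np

    def var_with_proxy_addon(proxy_var, proxy_returns, reference_returns,
                             position_value, confidence_z=1.645):
        """Sketch: add a tracking-error term to a VaR calculated with a proxy currency.

        proxy_var          VaR (positive, in money terms) calculated using the proxy
        proxy_returns      historic daily log returns of the proxy currency
        reference_returns  historic daily log returns of the actual currency
        position_value     money value of the position exposed to the tracking error
        """
        diff = np.asarray(reference_returns) - np.asarray(proxy_returns)
        tracking_error = np.std(diff)          # daily tracking error, in return terms
        addon = confidence_z * tracking_error * abs(position_value)
        # Treat the tracking error as independent of the proxy risk and combine
        # using the square root of the sum of squares
        return np.sqrt(proxy_var ** 2 + addon ** 2)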

If the bank is actively trading the currency, then the risk manager may decide to collate
historic data for the currency using the RiskMetrics methodology, if the data is available, or
else estimate the variance and covariance parameters in some other way, perhaps based on
a basket of similar currencies for which data is available. The cost of acquiring clean data
can be significant, so the risk manager should be sure that the risks being run justify the
cost of acquisition.

In the case that one or more proxy currencies have been chosen, the risk manager must
also look at the interest rate risk on the position. Again, the observed returns may exhibit
tracking errors compared to their proxies, which it may be important to capture if the
positions are large. Another case where proxies may be required is for stocks traded on
exchanges outside the data set. The risk manager must again assess the size and importance
of the business, and make a decision on whether to collect historical data from the
exchange, or use other exchanges in the same geographical region to approximate the risk.

41 In January 2002, the Argentine government ended a 10 year arrangement pegging the peso to the US Dollar at an
exchange rate of one dollar to one peso. By the end of May, the peso was trading at 3 peso to the dollar, despite an
interim dual currency system that made the official rate 1.4 peso to the dollar.
42 This deals with the slightly different case of a currency that is pegged to a basket of currencies in the dataset, but the
method can be used for the single currency case also.


4.4 Interest Rate Risk


Interest rate distributions include an element best explained as a jump process43. This is
required to capture the effect of central bank intervention in the interest rate markets, e.g.
raising the base rate by a quarter of a percent. The effect of this omission is that there is
more risk in the tails of the distribution than the conditional normal model implies. The
estimate of the 95th percentile of the distribution will therefore be incorrect. These errors
will combine across portfolio risk dimensions. This leads to an understatement of the
risk.44

What to do about this risk depends on the type of trading activity in the portfolio. For
many trading activities, interest rate risk is a by-product of imperfect hedging, and does not
represent a significant portion of the portfolio’s risk. This would be true, for instance, in
many equity trading operations. In this case, it would be safe to report the risk as if interest
rate log returns were conditional normal. However, if the objective is to trade interest rate
risk, as it is in a typical swaps trading operation, then it is likely that the understatement of
the risk is significant for the portfolio VaR, and the risk manager should consider using a
different distribution to the normal. An alternative model can be used in place of the
standard RiskMetrics covariance matrix calculations.

JP Morgan provides details of two alternative distributions for risk dimensions that are
observed to violate conditional normality: a normal mixture and a conditional generalised
error distribution [TD4: Appendix B]. The normal mixture is the RiskMetrics standardised
normal distribution, disturbed with a probability p < 1 by a sample from a second normal
distribution. The probability p represents the frequency with which a violation occurs. The
conditional generalised error distribution models returns rx,t in terms of the volatility σx,t as
follows:

rx,t = σx,tξ t, ξ t∼GED(ν) (4.1)

The volatility at time t is estimated using the normal RiskMetrics method, i.e. by calculating
an Exponentially Weighted Moving Average (EWMA) of historic log returns. The
parameter ν dictates the exact form that the generalised error distribution takes. If ν takes a

43 Hull & White (1993) describe the application of Merton’s Jump Diffusion model to option pricing.
44 The simulations in the following chapter adopt the RiskMetrics assumption of conditional normality without trying to
model the jump effect. This tends to make the RiskMetrics Delta-Gamma-Johnson approach look more reasonable
than it would be in reality.


value of 2, then this model reverts back to the normal distribution of standard RiskMetrics.
With parameter less than 2, the GED density function is more leptokurtotic than the
normal distribution, allowing greater likelihood of extreme returns.
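As a minimal sketch of this alternative (assuming scipy's generalised normal distribution, scipy.stats.gennorm, as the GED, and the standard RiskMetrics decay factor of 0.94; the function names and example inputs are illustrative only), the volatility forecast and the GED quantile can be combined as follows:

    import numpy as np
    from scipy.stats import gennorm
    from scipy.special import gamma as gamma_fn

    def ewma_volatility(returns, decay=0.94):
        """EWMA estimate of the one-day volatility from historic log returns."""
        var = returns[0] ** 2
        for r in returns[1:]:
            var = decay * var + (1.0 - decay) * r ** 2
        return np.sqrt(var)

    def ged_quantile(alpha, nu):
        """Quantile of a GED(nu) variable rescaled to unit variance.

        For nu = 2 this reverts to the standard normal quantile."""
        # gennorm with shape beta = nu has variance Gamma(3/nu)/Gamma(1/nu);
        # rescale so that the standardised residual has unit variance
        scale = np.sqrt(gamma_fn(1.0 / nu) / gamma_fn(3.0 / nu))
        return gennorm.ppf(alpha, nu, scale=scale)

    # Illustrative 95% one-day VaR in return terms for a unit position
    returns = np.random.normal(0.0, 0.01, 250)   # stand-in for observed log returns
    sigma = ewma_volatility(returns)
    var_95 = -ged_quantile(0.05, nu=1.4) * sigma  # nu < 2 gives fatter tails than normal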

JP Morgan applied both these models to a range of foreign exchange rates and equity
indices that might be thought of as emerging markets. Both models are shown to be a
significant improvement on the standard RiskMetrics conditional normal distribution.

Another alternative available to the risk manager is to use a multivariate t distribution to


estimate VaR for portfolios with heavy tailed risk dimensions (see for example Glasserman
et al (2000b)). Mina & Yi Xiao (2001) highlight the difficulty of using the multivariate t. If
different degrees of heavy tail are required for different risk dimensions of the portfolio,
the multivariate dependencies can no longer be parameterised using a single matrix. Both
Mina & Yi Xiao and Glasserman et al. outline the use of copula functions to model
dependency in this case.

Li (1999) derives an expression for the confidence limits of any portfolio that diverges
from the conditional normal assumption in terms of its first four moments, using
estimating functions. The expression for the lower limit is:

X_L = µ + (σ/2)·[ (γ₂ + 2)/γ₁ − √( ((γ₂ + 2)/γ₁)² + 4·( √( C_α²·(γ₂ + 2)·(γ₂ + 2 − γ₁²) )/γ₁ + 1 ) ) ] ,   γ₁ ≠ 0   (4.2)

where C_α is the corresponding confidence limit from the standard normal distribution
N(0,1), and µ, σ², γ₁ and γ₂ are the mean, variance, skewness and excess kurtosis of the
reference portfolio. Li applies the expression to foreign exchange rates to show that it is a
better model of the distribution tail than the standard RiskMetrics approach. The case
where γ₁ = 0 is not considered, since the expression requires division by the skewness; a
symmetric distribution would need to be treated separately.
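A short sketch evaluating expression (4.2), as reconstructed above, is shown below; the function name and argument names are mine and are not taken from Li's paper:

    import numpy as np

    def li_lower_limit(mu, sigma, skew, ex_kurt, c_alpha=-1.645):
        """Lower confidence limit from the first four moments, per expression (4.2).

        mu, sigma   portfolio mean and standard deviation
        skew        skewness coefficient (gamma_1), must be non-zero
        ex_kurt     excess kurtosis coefficient (gamma_2)
        c_alpha     corresponding limit from the standard normal, e.g. -1.645 for 95%
        """
        g1, g2 = skew, ex_kurt
        a = (g2 + 2.0) / g1
        inner = np.sqrt(c_alpha ** 2 * (g2 + 2.0) * (g2 + 2.0 - g1 ** 2)) / g1
        return mu + 0.5 * sigma * (a - np.sqrt(a ** 2 + 4.0 * (inner + 1.0)))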

4.5 Equity Concentration Risk


In the equities markets, the risk manager may find risks concentrated in certain industries
or issuers. The RiskMetrics model for each country in the data set is akin to the Capital
Asset Pricing Model with a diversified portfolio. A single risk dimension is recognised, with
equity specific risk diversified away. Once again, the risk manager must assess the extent of


the concentration and its significance to the portfolio risk as a whole, and make a
judgement on the need for action.

One option for the risk manager is to collect data for individual stocks and assume, as per
the CAPM, that these risk dimensions are independent of all other risk dimensions in the
portfolio, allowing the risk manager to calculate VaR as the square root of the sum of the
squares. A second approach, which is probably more justified, is to use Arbitrage Pricing
Theory. This models common risk dimensions across issuers in addition to the market
index. The dimensions can represent industry sectors or macroeconomic variables as
required by the risk manager. These dimensions can be included in a variance-covariance
approach, with a correlation of zero between the dimensions, since they are by
construction independent of each other. The risk manager may even decide that it is safe to
assume that the portfolio is diversified apart from the concentration in these dimensions,
allowing simplification of the VaR calculation and reporting.
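For example, if the concentrated dimensions are treated as independent of the market index and of each other, the combination is simply a square root of the sum of squares; a minimal sketch (variable names are illustrative only):

    import numpy as np

    def combined_var(market_var, specific_vars):
        """Combine index VaR with independent specific-risk VaRs.

        market_var     VaR from the diversified market-index treatment
        specific_vars  iterable of VaRs for the concentrated, independent dimensions
        """
        return np.sqrt(market_var ** 2 + sum(v ** 2 for v in specific_vars))

    # e.g. combined_var(1_000_000, [250_000, 400_000])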

Some trading strategies intentionally hedge out the market risk. Pairs trading is an example
of such a strategy. The purpose of a pair trade is to identify two similar stocks within an
index whose returns are normally highly correlated, but which are currently priced
abnormally relative to one another. The trader buys the relatively cheap stock and short sells the
relatively expensive stock. Provided that the stocks return to their previous return
distribution relationship, the trader can sell the previously cheap stock, and buy back the
previously expensive stock, realising a profit. The risk manager faces a significant challenge
with such a strategy. A plain vanilla RiskMetrics treatment will calculate that the risk of the
trade is zero, which is plainly not true. Even a diversified APT approach will probably
report zero risk. In this case, there are few options but to track the individual equity returns
or the specific components as risk dimensions.

4.6 Corporate Bonds


Corporate bonds do not exist as a separate risk dimension within RiskMetrics. They must
be mapped either as government bonds or inter-bank interest rate exposures. In reality,
corporate bond returns have embedded credit spreads, making the return on the bond
complex, and likely to be non-normal. Like equities, corporate bond prices contain an
element of variation that is specific to the issuer, rather than being dependent on the
market as a whole. The riskiness of the bonds leads to higher variances and lower
correlations when compared to risk-free debt, i.e. government bonds.


The risk manager with a portfolio of bonds must first see how the portfolio breaks down
by maturity and credit rating. If the portfolio is predominantly short-dated, high-grade debt
then the risk manager can use the results of Chaumeton et al. (1995) to justify ignoring the
specific risk. Depending on the size of the positions, it may still be worth applying a
tracking error adjustment based on the bond returns and the forecast variance for
government debt.

However, similar to the pairs trading situation in equities, this treatment may not be
enough if the position is hedged by a short position in treasury debt, or some other
structure that hedges out directional interest rate risk. In this case, the risk manager is
looking at the risk of the imperfect hedge between the treasury and the bond. The value of
this position is dependent on the spread between the treasury curve and the corporate
bond curve, which can change. The risk manager can try to track this spread as a separate
time series, although the historical data is not necessarily freely available. Some analysis
would also be required to justify a covariance treatment - it is not clear that this can be
assumed zero.

The risk manager may not find the trading portfolio weighted towards high-grade debt. In
this case, it may be beneficial to model multiple credit spread curves as correlated risk
dimensions, with each curve representing the credit premium for a range of debt ratings.
Each bond must be mapped both to the government debt curve and a credit spread curve.

In extreme cases, the debt may be so close to default (or even in default) that other
measures are required. The risk manager can use the intuition that such debt behaves like
equity in the firm and should be tracked as a specific risk using the square root of sum of
squares rule. However, it may also be that the portfolio holder has taken a reserve against
part or all of the position, in recognition of the difficulty of realising the full value from the
debt. In this case risk should only be reported on any remaining value after the reserve has
been subtracted.

4.7 Option Risk


A bank can run significant option positions across most exposure types, from FX options
through caps, floors and swaptions, to equity/index options and callable/convertible
bonds. The call schedules on bonds are embedded options on the bonds, which carry
significant non-linear risk. Convertible bonds that convert into equity are hybrid


instruments with embedded equity options. Variance-covariance methods do not capture


options risk well. Typically, options are mapped as delta equivalent cash flows, which only
models a small part of the risk. This ignores the option exposure to changes in implied
volatility, and also the curvature of the option pay-off curve itself.

The risk manager must once again examine the style of the trading. If the options are
relatively simple, intended to be held to expiry, and there are no short option positions
(written options) in the portfolio, then this treatment may be adequate. However, if the
portfolio exists for the purposes of buying and selling options, is delta-hedged, or contains
naked written options, the risk manager must consider a more sophisticated treatment.

Non-Linear Risks

Linear positions do not present a great challenge for the risk manager. The 95% confidence limit for the loss is calculated directly from the weight of the risk dimension and the 95% confidence limit for the change in price for the risk dimension.

Figure 3: Payoff for a linear instrument (payoff plotted against the forward price F of the underlying)

A simple option is typically presented as a right to buy (call) or sell (put) a particular commodity (the underlying), such as a stock, future or interest rate, at a given price (the strike). This right will be exercised by the buyer of the option, resulting in a cash flow or physical delivery from the writer to the buyer, if the option is in the money at expiry, meaning that the act of exercising the option, and then closing out the position created in the exercise, at the current market price, results in a net positive cash flow to the buyer. If the option is out of the money, it will be allowed to expire without exercise.

Figure 4: Option payoffs at maturity (bought and written calls and puts, plotted against the forward price F of the underlying, with strike K)


The option may be written by a financial institution, such as a bank, a buy-side institution,
such as a corporate treasury or local government agency, or it may be listed on an
exchange. The buyer of the option pays a premium to the writer or a margin to the
exchange. This premium is in exchange for the intrinsic value of the option at that time (if
the option is in the money) and the future value that the option may have (the time value).

The value of a standard option at any time is dependent on the current forward market price of the underlying, the market's estimate of the volatility of the forward price, the amount of time to expiry of the option, the strike price, and whether the option is European (exercisable only at the expiry date) or American (exercisable between now and the expiry date). Options with additional features are referred to as exotic. These are mostly designed to make the option more attractive to the buyer.

Figure 5: Option value curves (value before expiry of a bought at-the-money call against the forward price F, compared with a bought forward, showing the effect of increasing volatility and of decreasing time to expiry)

Option positions are traditionally troublesome for risk managers. The pay-off curve of the
option is highly non-linear, owing to the pay-off discontinuity that occurs between the
option being in and out of the money when the option expires. The pay-off curve also
drifts downwards with the passing of time (as the time value of the option decreases), and
jumps around as the volatility of the underlying changes. This behaviour is markedly
different to many of the instruments that a risk manager will deal with, for which the
relationship between price and value is actually or very nearly linear, and dependent on only
one source of price variance. For example, the present value of an interest rate swap will be
a linear function of the swap rate to maturity, with other small, linear dependencies on
intermediate rates prior to the maturity of the swap.

Generally, the harder it is to hedge a particular style of option, the harder it will be to
incorporate it within standard risk management procedures. This is because the problems
the trader faces when hedging a position are the same problems the risk manager faces


when mapping the position on to simpler instruments that are processed correctly by the
risk management methodology and systems.

The following table sets out some example exotic features that make hedging and risk
management complex:

Exotic feature | Description | Issues
Asian/Average | Option pays out on the difference between the strike and an average of the market rate over a set period, rather than the rate at exercise. The volatility of an average rate is less, so the option is cheaper. | Price is path dependent, so revaluation for simulated market scenarios is time consuming.
Lookback | Option pays out on the difference between the strike price and the highest or lowest market price in the time period. | Price is path dependent.
Barrier | Option pays up to a maximum of barrier − strike (call) or strike − barrier (put). | Linear approximation to value is incorrect if underlying has penetrated barrier.
Knock out/in | Option becomes inactive (knock out) or active (knock in) if the barrier is touched or crossed. | Linear approximation to value is incorrect if underlying has penetrated barrier.
Digital/binary | Option payout is a fixed value, provided that it is in the money, and not dependent on the difference between strike price and market rate. | Linear approximation to value does not apply.
Chooser | Option can be either a put or a call. | Sign of cash flow is uncertain.
Table 14: Exotic option features

4.7.1 The Risk Management Challenge


One of the early weaknesses of JP Morgan’s RiskMetrics, in common with other variance-
covariance approaches to Value at Risk, was its inadequate handling of option positions.
Variance-covariance approaches capture the linear price relationships of assets, rather than
the assets themselves, to build an efficient representation of the risk profile of the
portfolio. The motivation behind this is that large transaction volumes make it impractical
to deal with the transaction data itself. Past market behaviour is used to derive estimates of
the variances and covariances applicable to the asset price relationships. From these, the
Value at Risk is calculated.


Delta-weighting
The option delta represents the instantaneous hedge in the underlying that would be
required to insulate a portfolio containing the option and hedge from small changes in the
underlying price. More formally, it is the rate of change of the option value with respect to
the underlying price. A linear representation of an option in the variance-covariance
methodology uses the option delta to model price sensitivity to the underlying.
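As a simple illustration (an equity option example of my own, not taken from the RiskMetrics documentation), the delta-equivalent position is just the option delta scaled by the size of the holding and the price of the underlying:

    def delta_equivalent_exposure(delta, contracts, underlying_price):
        """Delta-equivalent cash position in the underlying for an option holding."""
        return delta * contracts * underlying_price

    # A holding of 100 calls with delta 0.6 on a stock trading at 50 maps to a cash
    # position of 0.6 * 100 * 50 = 3,000 in the underlying.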

Gamma Risk
The option gamma is the rate of change of the option delta with respect to change in the
underlying. As such, it is not truly a different risk to delta risk, but it captures at least part
of the model risk from using the delta as an approximation to the option payoff curve. The
variance-covariance approach assumes that instrument values are linear functions of prices,
so that a position can be represented simply by its sensitivity to one or more market rates.
The non-linear elements of the option return, attributable to the gamma (curvature) of the
option pay-off, do not fit into this model.

Vega Risk and Theta Risk


Vega is the change in value of the option due to a unit change (i.e. 100%) in the implied
volatility of the underlying. Theta is the change in value of the option due to a unit change
in time (i.e. one year). The elements of the option price that depend on volatility and
time cannot be captured by the linear correlations of the variance-covariance framework.
A correlation between two prices tells us that if one price moves, we expect a proportion
of the behaviour of the second price to be linked to it, tending to move either in the same
or in the opposite direction. Whilst there are relationships between the change in a price
and the change in its volatility, these are more subtle than can be captured by simple
correlations. Example relationships between prices and volatilities are: if prices are high,
then volatilities become low; if prices move dramatically, either up or down, then the
volatility will go up. Time cannot be correlated – we will always lose a day. So, the
variance-covariance method cannot be used to model these risks.

RiskMetrics offers two routes for risk managers who require more accurate treatment of
their options risk. The first is simulation. This requires a large number of simulations of the
joint distributions. Mina & Yi Xiao (2001) show how the joint distribution can be obtained
from a set of samples from the standard normal distribution and the Cholesky
decomposition of the covariance matrix. The Cholesky decomposition is a factorisation of


the matrix that contains elements only on the diagonal and above it. If the one-period
variance-covariance matrix is represented by Σ , the Cholesky decomposition A satisfies

Σ = AᵀA   (4.3)

The Cholesky decomposition of the covariance can be obtained from the following
recursive equations [TD4: Appendix E]:

a_ii = √( s_ii − Σ_{k=1}^{i−1} a_ik² )   (4.4)

and

a_ij = ( s_ij − Σ_{k=1}^{i−1} a_ik·a_jk ) / a_ii ,   j = i + 1, i + 2, …, N   (4.5)

where aij is the decomposition matrix element on the ith row and the jth column, and sij is the
corresponding element in the covariance matrix.

Given the matrix A, the transpose AT is used to map a set of random draws from the
standard normal distribution to a set of random draws from the joint distribution of all risk
dimensions as follows:

i. For each risk dimension Di, i = 1 to n, make a random draw from the standard
normal distribution Zi ~ N(0,1)

ii. Make an n × 1 matrix Z of the draws

iii. Perform a matrix multiplication with the transpose of the Cholesky decomposition:

Y = Z·Aᵀ   (4.6)

The resultant n x 1 matrix Y contains, for each risk dimension Di, a random draw Yi from
the n-variate normal return distribution with one-period covariance matrix Σ. This is
transformed back to a one-period simulated future price, P1i , using the assumption that log
returns are multivariate conditional normal:


P₁ᵢ = P₀ᵢ·e^(σᵢYᵢ)   (4.7)

A Singular Value Decomposition (SVD) or eigenvalue decomposition (ED) can also be


used in this way [TD4: Appendix E].

Once the simulated values Pt i of the risk dimensions Di are known, the reference portfolio

can be revalued, provided that the underlying prices of the portfolio can be determined
completely from the risk dimensions. This process is repeated a large number of times to
obtain the P&L distribution of the portfolio. The 95% 1-day VaR is simply the lower 5th
percentile of the P&L distribution.
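A compact sketch of this simulation loop is shown below; it assumes that Σ is the one-day covariance matrix of log returns and uses numpy's lower-triangular Cholesky factor (the transpose of the upper-triangular A above), with a revaluation function that the user must supply:

    import numpy as np

    def monte_carlo_var(p0, cov, revalue, n_sims=10_000, alpha=0.05, seed=0):
        """95% one-day VaR by full-revaluation Monte Carlo (alpha = 0.05).

        p0        current prices of the risk dimensions (length n)
        cov       one-day covariance matrix of log returns (n x n)
        revalue   function mapping a price vector to a portfolio value
        """
        rng = np.random.default_rng(seed)
        chol = np.linalg.cholesky(cov)          # lower-triangular L, cov = L @ L.T
        base_value = revalue(p0)
        pnl = np.empty(n_sims)
        for k in range(n_sims):
            z = rng.standard_normal(len(p0))    # independent standard normal draws
            log_ret = chol @ z                  # joint draw with covariance cov
            pnl[k] = revalue(p0 * np.exp(log_ret)) - base_value
        # VaR is the loss at the lower alpha percentile of the P&L distribution
        return -np.percentile(pnl, 100 * alpha)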

The main problem with the approach is the computational cost. Institutions may find that
the cost of simulating their entire option portfolio is just too great. Glasserman et al
demonstrate the benefits of some optimisation techniques for the simulation in both
normal (2000a) and heavy-tailed (2000b) simulations. JP Morgan proposes using delta,
gamma and theta to approximate the option revaluation, speeding up the revaluation
process for each simulation [TD4: chp 7]. Mina & Ulmer (1999) demonstrate the benefits
of using a delta-gamma approximation in a simulation to achieve computational
efficiencies. Pritsker (1997) describes two forms of grid Monte Carlo approach, in which
the pay-off curves of a portfolio of options are approximated by first revaluing the options
across a narrowly spaced grid of underlying prices. The revaluation of the portfolio after
each simulation is obtained by interpolating between the grid points.

The second alternative open to the risk manager, within the RiskMetrics methodology, is to
develop a model of the portfolio P&L distribution without resorting to simulation. This
can be achieved by fitting the moments of the portfolio distribution45 to one of a family of
curves, named Johnson curves, after the first author to use them in this way. This
procedure is explained in detail in the next chapter. The main problem with this approach
is that its current use within risk management is not extensive, and there is relatively little
information available to help risk managers implement the approach. Mina and Ulmer
(1999) refer to some practical difficulties in implementing the methodology. In chapter 5,
we will see how the approach can be used effectively to model the price risk of interest rate
caps.

45 The portfolio in this case would be the entire set of trades, whether linear or non-linear, for which the risk manager
wished to calculate a VaR measure.


Outside of the RiskMetrics methodology, there are other options open to the risk manager.
The findings of Li (1999) can also be used to model the confidence limit of a portfolio of
options, using its first four moments. The procedure for deriving the moments of a
portfolio of options is described in chapter 5.

Britten-Jones & Schaefer (1999) use the delta-gamma approximation to express the
multivariate distribution of the reference portfolio as a sum of non-central chi-squared
variables. They match the first three moments of the distribution to a single chi-squared
distribution, allowing the VaR to be read from chi-squared tables. They show that, for the
example option portfolio considered, the cumulative density function follows the form of
the full valuation result, but with slightly thicker tails.

Keating & Shadwick (2002) demonstrate that significant information about a portfolio’s
distribution may be contained in its higher moments. Their purpose is to question the use
of variance in stock performance measures, but the point applies equally to moment
matching a portfolio distribution. The essence of the argument is that ignoring a higher
moment is not equivalent to ignoring the higher orders in a polynomial expansion like a
Taylor series. The higher moments can make a significant difference to the shape of a
distribution. For this reason, it is preferable to estimate the VaR in such a way that the
whole distribution function is modelled, including these higher moments.

Mina and Ulmer (1999) use a Fourier inversion of the quadratic form of the moment
generating function to obtain the portfolio pdf numerically using Fast Fourier Transforms
(FFT). Their results show that this method outperforms Full Monte Carlo for large
portfolios, owing to the greater computational time required for full revaluation. However,
if Monte Carlo is performed using the delta and gamma to approximate the option value,
the performance is equivalent to FFT. Given the comparative ease of implementation of
the Monte Carlo approach, it is hard to justify using the more demanding Fourier
Transform approach in its place.

Pritsker (1997) evaluates the performance and accuracy of six methods for measuring the
risk on a range of portfolios of foreign exchange options, including delta weighted
underlyings, delta-gamma Monte Carlo, grid Monte Carlo and full Monte Carlo. He found
that the Monte Carlo methods performed better than the other methods tested on simple
portfolios of naked option positions, across a range of maturities and strike prices.
However, all methods except full Monte Carlo made large errors when used on a mixed


portfolio of long and short options over a range of currencies. A full Monte Carlo run took
nearly a thousand times as long as the delta-only run. The delta-gamma Monte Carlo run
took only 50 times as long, making it an attractive compromise between accuracy and
computational time for simple option portfolios. For more complex portfolios, it seems
that full Monte Carlo is the only acceptable method of calculating risk, among those tested.

4.8 Conclusion
The risk manager implementing RiskMetrics will face several challenges along the way. In
all cases, the first rule is to use a solution appropriate to the size of the problem. This
should be assessed both in terms of the volume of positions and the materiality of the risk.
The main issues stem from distributional assumptions with regard to financial time series
data and the treatment of options risk. Some asset classes, such as equities and, to a lesser
extent, corporate bonds, have such complex return series that RiskMetrics is omitting
important dimensions of risk. Other asset classes, such as interest rates, simply do not
behave in the way that they are assumed to.

If the risk manager determines that some tailoring is required, then several additional
reference points exist outside the methodology. When risk dimensions are omitted, the risk
manager faces more practical issues than theoretical ones. In theory, the framework of the
methodology can be extended to include the new risk that would otherwise be omitted.
This approach will also work with new instances of existing asset classes, such as currencies
or stock indices that are not contained within the RiskMetrics dataset. However, the risk
manager is now shouldering the burden of collecting the data and calculating joint
variance-covariance matrices for the new risk dimensions. From a practical point of view,
the risk manager may prefer to track the error between the RiskMetrics model and the real
portfolio and use this information to calculate a tracking error that forms the basis of a
VaR add-on.

5 USE OF JOHNSON TRANSFORMATION IN
RISKMETRICS
5.1 Introduction
In the previous chapter we saw that RiskMetrics offers three methodologies for measuring
option risk: delta equivalent cash flows, delta-gamma using the Johnson transformation,
and simulation. Of these three methodologies, the Johnson transformation approach has
attractive features that make it worthy of further assessment. It combines the speed of an
analytic solution for VaR with the flexibility required to model option risk. Mina & Ulmer
(1999) note that the limited set of shapes available for the probability density function
means that the Johnson approach is not robust with real life portfolios. In this chapter, we
examine whether or not the approach captures the risk of interest rate caps.

5.2 The Standard RiskMetrics Approach


A common compromise reached for parametric VaR methods is to map the option
position as its delta equivalent cash flows. The delta of the option represents the rate of
change of the option price with respect to a change in the underlying, but it is also the
proportion of the underlying that should be held to hedge the option against price
movements. If the holder of the option has delta-hedged the position, the institution
already has short positions in delta-equivalent cash flows, and this risk treatment will make
the overall position appear risk neutral. However, as we have seen, the pay-off of an option
diverges significantly from its delta equivalent cash flows. Risk neutrality will be far from
the truth. If this is a significant part of the risk within the reference portfolio, the risk
manager must model the other aspects of option risk.

5.3 Delta-Gamma
JP Morgan proposed an algebraic solution to the option problem, which it calls the delta-
gamma approach [TD4: §6.3.3]. The proposal uses a significantly different approach to
mapping the portfolio transactions. Using analytical methods, the methodology user
calculates the first four statistical moments of the portfolio’s distribution: mean, variance,
skewness and kurtosis. Linear positions within the portfolio will contribute only to the
mean (additively) and variance (using covariance coefficients). Option positions contribute
additionally to the skewness and kurtosis. These moments are used to fit one of a family of
curves to the portfolio return distribution. The family of curves has the property that, by

substitution, the curve can be transformed to a standardised normal curve. The


methodology user exploits this relationship to calculate VaR.

Published research on the methodology is relatively sparse. Mina & Ulmer (1999) note that
the methodology was difficult to fit to some of their test portfolios. Pritsker (1997)
mentions the methodology for the sake of completeness but does not evaluate it. No
research has provided data to accept or reject the hypothesis that the quality of the fit
derived from the first four portfolio moments is demonstrably better than the delta-normal
approach.

This chapter will look at the detailed mechanism of non-linear risk management in release
4 of the technical documentation, the current version (2002, Dec). The approach will be
explored using interest rate caps, a standard instrument within a financial institution’s
portfolio. Firstly, the chapter will provide an overview of the process and calculation of
non-linear risk in RiskMetrics, using the delta-gamma-Johnson approach. Secondly, it will
describe an experiment, to establish the accuracy of the approach. Thirdly, it will present
the results of the experiment. Finally, the chapter will present conclusions on the
applicability of the approach to the test portfolio.

5.4 Non-linear market risk in RiskMetrics


After Pritsker (1997), we will call the JP Morgan proposal the delta-gamma-Johnson
approach, to distinguish it from other delta-gamma approaches, including the delta-gamma
simulation approach also in RiskMetrics46. In so doing, we recognise that the delta-gamma-
Johnson approach is one of a family of approaches which use the asset delta (rate of
change of asset value with respect to the underlying) and gamma (rate of change of asset
delta with respect to the underlying) as an approximation to the change in value of the
asset given a change in the underlying. This is equivalent to stating that, when
approximating the change in value of the asset, the approach uses a Taylor series expansion
including a quadratic term in the underlying. Delta-gamma approaches can be
fundamentally different to each other, but they all have this in common47. This assumption
provides a simplification in comparison to the computational cost of full revaluation48.

46 Delta-gamma Monte Carlo in Pritsker’s nomenclature


47 Other delta-gamma approaches are described in Chapter 4
48 Comparative timings for portfolios of foreign exchange options are set out in Pritsker (1997)


Where options are held in large volumes, this provides great benefit, through the timely
delivery of a VaR measure.

Under the delta-gamma-Johnson approach, the reference portfolio return is modelled as a


non-normal distribution, using the option delta, gamma, vega (rate of change of value with
respect to implied volatility) and theta (rate of change of value with respect to the passing
of time) to estimate the mean, variance, skewness (the asymmetry of the distribution about
the mean) and kurtosis (the weight of the tails and sharpness of the peak of the distribution). Together,
these characteristics of the distribution are known as the first four moments about the
mean. The methodology provides a recipe for calculating the moments for a basic Black-
Scholes model. Using these moments, the distribution is fitted to one of a family of
functions, first identified by Johnson (1949, referenced in [TD4]). These functions all have
the property that, by substitution, they can be transformed to a standard normal
distribution, i.e.

r̂ₜ = γ + δ·f( (rₜ − ξ)/λ ) ,   r̂ₜ ~ N(0, 1)   (5.1)

This property is used to find the VaR of the reference portfolio, using the following
rationale:

If we have found a function f to satisfy (5.1), then the values X of rₜ are given by the
corresponding values Z of r̂ₜ:

X = λ·f⁻¹( (Z − γ)/δ ) + ξ   (5.2)

The required value of Z is obtained in the usual way, as the point below which the
probability density function for r̂ₜ integrates to α, where α is the desired confidence level
for the VaR estimate, i.e.

Z_α = Φ⁻¹(α)   (5.3)

where Φ(x) is the cumulative distribution function for the standard normal distribution. For
instance, for an estimate of 95% VaR, we have


Z₉₅ = Φ⁻¹(0.05) = −1.64   (5.4)

which, using (5.2), gives

X₉₅ = λ·f⁻¹( (−1.64 − γ)/δ ) + ξ   (5.5)

The next section looks at this procedure in greater detail.

Figure 6: Delta-Gamma-Johnson Method (positions, option greeks and implied volatilities feed the calculation of moments, followed by the curve fit and the derivation of returns)

5.5 Delta-Gamma-Johnson Method


The delta-gamma-Johnson method is principally the same whether performed on a single
option or a whole portfolio. It is composed of three steps: Calculation of Moments, Curve
Fit and Derivation of Returns.

5.5.1 Calculation of Moments


The moments of the portfolio are calculated analytically or numerically from the
constituent options. Following Mina & Ulmer (1999), we first write the portfolio value V in
terms of a non-linear function of the values of a set of n time series variables x i.

V(x) = f(x₁, x₂, …, xₙ)   (5.6)

The returns of x, r = dx/x, follow a multivariate normal distribution with covariance matrix
Σ. The change in portfolio value is given by a Taylor series expansion to second order:

dV = V ( x + dx) − V ( x) (5.7)


≈ Σᵢ₌₁ⁿ δᵢ·rᵢ + ½·Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Γᵢⱼ·rᵢ·rⱼ   (5.8)

where

δᵢ = xᵢ·∂V/∂xᵢ ,   and   Γᵢⱼ = xᵢ·xⱼ·∂²V/(∂xᵢ∂xⱼ)   (5.9)

The values δ i and Γij are simply aggregate option deltas and gammas expressed in terms of a
change in the portfolio value. These are obtained by scaling the theoretical deltas and
gammas to take account of the size of the position and the value of the underlying. They
can also be obtained approximately by performing a limited set of scenarios across the
market data set x. To the delta numbers, we also add any exposures from linear
instruments in the portfolio.

Now that we have an expression for the change in portfolio value, we use the results of
Mina & Ulmer for the portfolio moments about the mean, in matrix notation49:

µ₁ = ½·tr(ΓΣ)   (5.10)

µ₂ = δ′Σδ + ½·tr((ΓΣ)²)   (5.11)

µ₃ = 3δ′ΣΓΣδ + tr((ΓΣ)³)   (5.12)

µ₄ = 12δ′Σ(ΓΣ)²δ + 3·tr((ΓΣ)⁴) + 3µ₂²   (5.13)

We will see how these measures perform with the Johnson fitting process.

49 These expressions follow from the RiskMetrics assumption of normality


To perform a fit, we need the third and fourth moments expressed in coefficient form, i.e.

skewness coefficient = µ₃ / µ₂^(3/2)   (5.14)

kurtosis coefficient = µ₄ / µ₂²   (5.15)

This is an input requirement of the fitting algorithm.
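Expressed in matrix code, (5.10) to (5.15) amount to only a few lines; the sketch below assumes that delta is the vector of scaled deltas, gamma the matrix of scaled gammas and cov the covariance matrix Σ, and the function name is mine:

    import numpy as np

    def delta_gamma_moments(delta, gamma, cov):
        """Central moments of dV under the delta-gamma approximation (Mina & Ulmer)."""
        gs = gamma @ cov
        mu1 = 0.5 * np.trace(gs)
        mu2 = delta @ cov @ delta + 0.5 * np.trace(gs @ gs)
        mu3 = 3.0 * delta @ cov @ gs @ delta + np.trace(gs @ gs @ gs)
        mu4 = (12.0 * delta @ cov @ gs @ gs @ delta
               + 3.0 * np.trace(gs @ gs @ gs @ gs) + 3.0 * mu2 ** 2)
        skew_coeff = mu3 / mu2 ** 1.5        # input to the fitting algorithm, per (5.14)
        kurt_coeff = mu4 / mu2 ** 2          # input to the fitting algorithm, per (5.15)
        return mu1, mu2, skew_coeff, kurt_coeff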

5.5.2 Curve Fit


The next step is to fit a curve, for which we know the function, to the portfolio
distribution. This will allow us to perform manipulations of the portfolio distribution,
making it into something that we can deal with simply. The first four moments of the
portfolio return distribution are used to fit the distribution to a Johnson distribution with
identical moments. The distributions come in four families: normal, lognormal, unbounded
or bounded. We saw earlier that Johnson suggested these families as specific
implementations of a generic transformation:

r̂ₜ = γ + δ·f( (rₜ − ξ)/λ )   (5.1)

to map a given distribution onto the standard normal curve. The values of ξ, λ, γ and δ are
derived during the fitting process, as is the identity of the function f. The reason for using
these families is that the ability to transform the portfolio distribution to a standard normal
distribution leads to a very elegant statement of the VaR in terms of the inverse transform.

The forms that f(X) can take 50 are:

Normal f (X) = X (5.16)

Lognormal f ( X ) = ln( X − ξ ) , X ≥ ξ (5.17)

50 These are the forms that were suggested by Johnson (1949). There may be other forms, but this question has not been
considered for this thesis.


Bounded f(X) = ln( (X − ξ) / (ξ + λ − X) ) ,   ξ ≤ X ≤ ξ + λ   (5.18)

Unbounded f(X) = sinh⁻¹( (X − ξ)/λ )   (5.19)

These forms of f(X) lead to the following inverse transformations to obtain the cdf of X:

Normal f⁻¹(Z) = (Z − γ)/δ   (5.20)

Lognormal f⁻¹(Z) = e^((Z − γ)/δ) + ξ   (5.21)

Bounded f⁻¹(Z) = ξ + λ·e^((Z − γ)/δ) / ( 1 + e^((Z − γ)/δ) )   (5.22)

Unbounded f⁻¹(Z) = λ·sinh( (Z − γ)/δ ) + ξ   (5.23)
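Written out in code, the inverse transforms (5.20) to (5.23) are straightforward; the following sketch simply mirrors the four forms above and is not taken from any RiskMetrics implementation:

    import numpy as np

    def johnson_inverse(z, family, gamma, delta, lam=1.0, xi=0.0):
        """Map a standard normal value z back to the fitted variable X, per (5.20)-(5.23)."""
        u = (z - gamma) / delta
        if family == "normal":
            return u
        if family == "lognormal":
            return np.exp(u) + xi
        if family == "bounded":
            return xi + lam * np.exp(u) / (1.0 + np.exp(u))
        if family == "unbounded":
            return lam * np.sinh(u) + xi
        raise ValueError("unknown Johnson family: " + family)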

To perform the fit, we can use an algorithm documented in Hill et al. (1976, cited by
[TD4]): algorithm 99, implemented in FORTRAN. The outputs will determine the values
δ, γ, λ, ξ, and the function f. Although the detail of the algorithm is omitted from this
thesis, we can review the decision mechanism used by the algorithm to select a particular
fit.

Using the terminology of Hill et al., if we define β₂ as the kurtosis coefficient and β₁ as
the skewness coefficient, lognormal fits are all located on a curve in the (β₁, β₂) plane. Given a
value of β₁, the value of β₂ corresponding to a lognormal fit is known. If the input value of
β₂ is different, then this points to a bounded (within the curve) or unbounded (outside the
curve) fit. In each of these cases, an algebraic solution for the fit parameters is not
possible. The fitting algorithm uses numerical techniques to approximate the fit.

5.5.3 Return Distribution


As outlined in section 5.4, the VaR of the portfolio can be derived directly from the
Johnson fit and the appropriate VaR confidence limit. The Johnson mapping is inverted to


give the mapping from a standard normal curve to the portfolio return. In the generic case,
(5.1) is inverted to give

X = λ·f⁻¹( (Z − γ)/δ ) + ξ   (5.2)

If we want to measure 2 standard deviations of VaR, we simply enter a value for Z of -2 in


the inverted transform and calculate directly the return which this corresponds to, given
our fitted values of γ, δ, λ, ξ and f. Note that this is not a proportional return. We calculate
moments for the distribution of the change in portfolio value, and the inverse transform
gives us the cumulative density function for the change in portfolio value. The portfolio
VaR is simply given by:

VaR = max( −X_{Z=−2} , 0 )   (5.24)

5.6 Worked Example


Cap 1 in our reference portfolio is a short dated caplet, on the 3v6 FRA51. A caplet is an
option to take out interest rate risk protection at a particular pre-determined level. If the
holder of the option exercises the contract, they enter into the underlying FRA, the 3v6
FRA in this case. Series of caplets with the same strike price are normally traded as a Cap,
which is a convenient way of buying interest rate protection for a longer period of time.
The portfolio is observed from the perspective of a valuation date of September 30, 1996.
The notional is $1m and the caplet is struck at the money. The following table sets out the
summary characteristics of the trade, leading to an option valuation of 0.001034, in terms
of the underlying forward rate. 52

Today | Start date | End date | Strike rate
30-Sep-96 | 30-Dec-96 | 27-Mar-97 | 5.992%
Basis | Volatility | Discount factor at start | Discount factor at end
Actual/365 | 8.930% | 0.98545213 | 0.97157526
Table 15: Trade Data and Market Data for Cap 1

51 3v6 here indicates that the FRA (forward rate agreement) is for the forward rate between two dates, a start date three
months from now (more exactly spot) and an end date six months from now. The FRA is a contract to pay out the
interest value on a fixed rate loan between the start and end dates and receive the interest value of a floating rate loan
for the same period. The caplet is an option on the forward floating rate with strike price set to the fixed rate.
52 To get the cash value of the option, this number must be scaled by the interest period and the nominal, as with a
standard simple interest calculation, however these factors are not important for measuring the portfolio return. We
have in effect constructed a portfolio of caplets, each with notional value $1/t, with t the fraction of a year for which
the rate applies, in the appropriate basis.


To discover the option sensitivities to the RiskMetrics vertices, we run a series of scenarios
on the option. The set of prices Pi for the risk dimensions Di may be represented as the
vector P. The value of the portfolio, V(P), is calculated for 4 scenarios for each element in
the covariance matrix. Following market practice, the scenarios calculated on the diagonal
of the matrix will allow us to imply values for delta and gamma with respect to a single risk
dimension Di. The off-diagonal elements allow us to imply partial derivatives with respect
to two dimensions, Di and Dj.

We could derive the greeks analytically, but the scenario approach is generic enough to be
applied to any instrument for which you can derive a theoretical price, given a set of
market conditions. The only constraint is that, given the size of the RiskMetrics covariance
matrix (approximately 500 elements square), it would be necessary for the risk manager to
develop some notion of dependency for the trades in the reference portfolio, so that
revaluation only occurs when it is relevant to the trade.

The scenarios for which we value the option, to calculate the delta and the gamma of the
portfolio with respect to a single interest rate risk dimension Di with continuous
compounded value Pi are:

P_{i−5} = { P_0, P_1, P_2, …, P_i − 5bp, …, P_n }   (5.25)

P_{i−1} = { P_0, P_1, P_2, …, P_i − 1bp, …, P_n }   (5.26)

P_{i+1} = { P_0, P_1, P_2, …, P_i + 1bp, …, P_n }   (5.27)

P_{i+5} = { P_0, P_1, P_2, …, P_i + 5bp, …, P_n }   (5.28)

For an off-diagonal element, we derive the cross gamma between risk dimension Di and
risk dimension Dj in the portfolio. The four scenarios required to calculate the element are
as follows:

P_{i−1, j−1} = { P_0, P_1, P_2, …, P_i − 1bp, …, P_j − 1bp, …, P_n }   (5.29)

P_{i+1, j−1} = { P_0, P_1, P_2, …, P_i + 1bp, …, P_j − 1bp, …, P_n }   (5.30)

P_{i−1, j+1} = { P_0, P_1, P_2, …, P_i − 1bp, …, P_j + 1bp, …, P_n }   (5.31)

P_{i+1, j+1} = { P_0, P_1, P_2, …, P_i + 1bp, …, P_j + 1bp, …, P_n }   (5.32)


We calculate an average delta exposure to the dimension Di, a continuous compounded


interest rate Pi at time ti, as the difference between the portfolio valuation V(P) for a +1bp
shift and a -1bp shift in the rate Pi, divided by the total change in Pi (i.e. 2bp). The delta is
then calculated as:

Δᵢ = (1/tᵢ) · [ V(P_{i+1}) − V(P_{i−1}) ] / 2bp   (5.33)

The gamma exposure to the dimension Di (a diagonal element of the gamma matrix) is
given by the valuation difference between a +5bp shift in the rate and the current value,
stripping out the delta of the position53.

Γᵢ = (1/tᵢ²) · [ 2(V(P_{i+5}) − V(P)) − 5(V(P_{i+1}) − V(P_{i−1})) ] / (5bp)²   (5.34)

The gamma exposure to a combination of two dimensions Di and Dj is given by the
change in the delta with respect to i for a unit change in j. Using the definition of delta in
(5.33), but with the off-diagonal scenarios, we have:

Γᵢⱼ = (1/(tᵢ·tⱼ)) · [ (V(P_{i+1, j+1}) − V(P_{i+1, j−1})) − (V(P_{i−1, j+1}) − V(P_{i−1, j−1})) ] / (2bp)²   (5.35)
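A sketch of this finite-difference procedure for a single diagonal element and a single cross term is shown below; value() stands in for the full portfolio revaluation V(P), the function names are mine, and the shift sizes and 1/t scaling follow (5.33) to (5.35):

    import numpy as np

    BP = 0.0001  # one basis point, as a shift in the continuously compounded rate

    def shifted(p, i, bps):
        """Return a copy of the price vector p with dimension i shifted by bps basis points."""
        q = np.array(p, dtype=float)
        q[i] += bps * BP
        return q

    def delta_gamma_fd(value, p, i, t_i):
        """Delta and gamma to dimension i by finite differences, per (5.33) and (5.34)."""
        up1, dn1 = value(shifted(p, i, +1)), value(shifted(p, i, -1))
        up5 = value(shifted(p, i, +5))
        delta = (up1 - dn1) / (2 * BP) / t_i
        gamma = (2 * (up5 - value(p)) - 5 * (up1 - dn1)) / (5 * BP) ** 2 / t_i ** 2
        return delta, gamma

    def cross_gamma_fd(value, p, i, j, t_i, t_j):
        """Cross gamma between dimensions i and j, per (5.35)."""
        pp = value(shifted(shifted(p, i, +1), j, +1))
        pm = value(shifted(shifted(p, i, +1), j, -1))
        mp = value(shifted(shifted(p, i, -1), j, +1))
        mm = value(shifted(shifted(p, i, -1), j, -1))
        return ((pp - pm) - (mp - mm)) / (2 * BP) ** 2 / (t_i * t_j)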

For this Cap, following the method described above, we obtain the following values for the
delta and gamma matrices


3M -2.08
6M 2.08
Table 16: Delta Exposure for Cap 1

Γ 3M 6M
3M 1,151 -0.114
6M -0.114 1,132
Table 17: Gamma Exposure for Cap 1

53 This is the up gamma – it is also possible to calculate the down gamma. This may be more appropriate for some
portfolios.


Here, the construction of the cap to start and end exactly on RiskMetrics vertices has led to
a strong dominance of the diagonal elements of the gamma exposure matrix. The next step
towards calculating the moments of the cap is to know the covariance matrix for these
rates. In this case, we are using a covariance matrix as of November 1996, which is
representative of the matrix on September 30. The full matrix must be used for the
moment calculations, or at least all non-zero covariances with these rates must be included.

1M 3M 6M 1Y 2Y 3Y 4Y 5Y 7Y 10Y
1M 3.16E-11 1.19E-10 2.58E-10 6.31976E-10 7.36E-10 1.09E-09 1.14E-09 1.3E-09 1.5E-09 1.71E-09
3M 1.19E-10 8.54E-10 1.87E-09 5.41968E-09 2.81E-09 5.74E-09 6.1E-09 6.83E-09 8.48E-09 1.06E-08
6M 2.58E-10 1.87E-09 6.99E-09 2.10603E-08 1.16E-08 2.26E-08 2.81E-08 3.74E-08 4.79E-08 6.67E-08
1Y 6.32E-10 5.42E-09 2.11E-08 8.22112E-08 2.79E-08 6.85E-08 9.6E-08 1.21E-07 1.65E-07 2.39E-07
2Y 7.36E-10 2.81E-09 1.16E-08 2.7917E-08 4.01E-07 5.65E-07 7.27E-07 9.11E-07 1.11E-06 1.48E-06
3Y 1.09E-09 5.74E-09 2.26E-08 6.8464E-08 5.65E-07 8.38E-07 1.09E-06 1.36E-06 1.68E-06 2.26E-06
4Y 1.14E-09 6.1E-09 2.81E-08 9.59938E-08 7.27E-07 1.09E-06 1.48E-06 1.81E-06 2.26E-06 3.05E-06
5Y 1.3E-09 6.83E-09 3.74E-08 1.20673E-07 9.11E-07 1.36E-06 1.81E-06 2.29E-06 2.85E-06 3.9E-06
7Y 1.5E-09 8.48E-09 4.79E-08 1.65232E-07 1.11E-06 1.68E-06 2.26E-06 2.85E-06 3.62E-06 4.93E-06
10Y 1.71E-09 1.06E-08 6.67E-08 2.3883E-07 1.48E-06 2.26E-06 3.05E-06 3.9E-06 4.93E-06 7.05E-06
Table 18: Covariance matrix for USD tenors, November 1996 (Price source: Reuters)

We then use (5.10) – (5.15) directly to calculate the moments of the Cap as

µ1 4.45E-06
µ2 1.34E-04
µ3/σ3 0.169
µ4/σ4 3.04
Table 19: Mina & Ulmer calculated moments for Cap 1

We can verify these calculated moments by simulating the underlying and calculating the
option price for the caplet. We calculate two revaluations for each scenario, using the full
revaluation and the delta-gamma approximation to the option value. For each of these
cases, we use the simulation results to calculate moments of the distribution. These are
shown below:

Simulation DG sim
µ1 2.35E-06 5.5E-06
µ2 1.34E-04 1.35E-04
µ3/σ3 0.106 0.246
µ4/σ4 3.03 3.10
Table 20: Simulated Moments for Cap 1 using full revaluation and Delta-Gamma approximation
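The cross-check of the analytic moments against the simulated values is straightforward with scipy; a sketch, with pnl standing for the vector of simulated changes in portfolio value:

    import numpy as np
    from scipy.stats import skew, kurtosis

    def sample_moments(pnl):
        """First four moments of a simulated P&L sample, in the form of Tables 19-20."""
        pnl = np.asarray(pnl)
        return (pnl.mean(),                      # mu_1
                pnl.var(),                       # mu_2
                skew(pnl),                       # mu_3 / sigma^3
                kurtosis(pnl, fisher=False))     # mu_4 / sigma^4 (not excess kurtosis)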

We would expect the analytic delta-gamma moments to be close to the moments
calculated from the delta-gamma approximation in the simulation. However, the moments


calculated using Mina & Ulmer are closer to the moments calculated using full revaluation
on the simulated data. We should expect that these moments will perform better at fitting a
transform to represent the simulated distribution.

The next step is to use the fitting algorithm to obtain values for δ, γ, λ and ξ. In this case,
we obtain values of

γ δ λ ξ
Mina & Ulmer 73.12335 11.23915 1 -0.0015
Table 21: Johnson fit parameters for Cap 1

The fit type obtained from the algorithm of Hill et al. is lognormal, indicating that the
portfolio kurtosis coefficient was consistent with a lognormal fit, given the portfolio skew
coefficient. In this case, the value of lambda is not required for the fit process.

Using (5.2), and the parameter values above, we can transform a simulated standard normal
variable into a simulation of the portfolio value. This allows us to calculate the moments of
the simulated portfolio value and check the quality of the fit.
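
As a sketch of this step (not the exact thesis implementation), if (5.2) takes the usual S_L form z = γ + δ ln((x − ξ)/λ), then the inverse transform is x = ξ + λ exp((z − γ)/δ), so simulated standard normal draws map directly to simulated portfolio returns and the VaR can be read off at the required normal quantile. The parameter values below are those of Table 21.

```python
import numpy as np

# Johnson SL (lognormal) parameters from Table 21
gamma_, delta_, lam, xi = 73.12335, 11.23915, 1.0, -0.0015

def johnson_sl_inverse(z, gamma_, delta_, lam, xi):
    """Map standard normal z to a portfolio return x, assuming the SL form
    z = gamma + delta * ln((x - xi)/lam), i.e. x = xi + lam * exp((z - gamma)/delta)."""
    return xi + lam * np.exp((z - gamma_) / delta_)

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)
x = johnson_sl_inverse(z, gamma_, delta_, lam, xi)

# Check the moments of the transformed sample against Table 22
print(x.mean(), x.std())

# 95% VaR read off at the 5th percentile of the normal (reported as a positive loss)
print(-johnson_sl_inverse(-1.645, gamma_, delta_, lam, xi))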


Mina
µ1 4.45E-06
µ2 1.34E-04
µ3/σ3 0.164
µ4/σ4 2.99
Table 22: Moments of pdf for Cap 1 using Johnson Transform

The standard normal sample used for this validation itself has a slight negative excess kurtosis, which explains the failure to match the fourth moment exactly. Apart from this difference, the fit has reproduced the input moments reasonably well.

Once we are satisfied that the method has produced a good fit, we can calculate the VaR by substituting into (5.2) the standard normal quantile appropriate to the level of confidence we require for the VaR estimate.

5.7 Full test


To examine the performance of the delta-gamma-Johnson methodology, we use the
following test for a range of interest rate caps, with varying strike and expiry dates.

For a single date, given the price history: how well does the estimated return distribution of a range of interest rate caps, derived using the RiskMetrics methodology, match the return distribution obtained by revaluing the caps across the estimated probability density function of the underlying prices?

Null hypothesis: the distribution of the return of a range of interest rate caps, derived using the RiskMetrics methodology, is the same as the return distribution obtained by simulating the cap returns using the underlying price as the simulation variable.

If the model performs well, then institutions can be confident when using RiskMetrics for
portfolios with high option content. If not, then institutions must use a simulation
approach for these portfolios.


Figure 7: Approach for test procedure. (Flow diagram: yield curve data feeds option pricing and the moment calculation, then the fitting algorithm (Algorithm 99) and the Johnson function to give the model return distribution; in parallel, rate data and option volatilities drive a simulation and option repricing to give simulated returns; the two distributions are cross-checked and their differences assessed with the Smirnov test.)

5.8 Test Statistic for Null Hypothesis


The process for assessing the quality of the fitting process is as follows:

1. Calculate the Johnson parameters using the steps described in the previous section:

   i) Calculate the reference portfolio first and second order sensitivities to movements in the yield curve

   ii) Calculate the moments of the reference portfolio using Mina & Ulmer

   iii) Fit a curve to the moments, using the Hill, Hill & Holder algorithm

2. Simulate the reference portfolio return as follows:

   a) Simulate 10,000 returns of the underlying

   b) Price the portfolio for the simulated data

   c) Order the portfolio price data points

3. Calculate the moments for the simulated data set

4. Cross-check the Mina & Ulmer moments against the calculated moments

5. Invert the Johnson function to obtain the Johnson transform of a standard normal variable

6. Simulate the reference portfolio return as the Johnson transform of 10,000 simulated standard normal variables

7. Plot the cumulative probability for the simulated return and the Johnson model return

8. Perform the Kolmogorov-Smirnov two-sample test as follows:

   a) For each point on the model return plot, interpolate the corresponding cumulative probability from the simulated data set.

   b) Subtract the simulated cumulative probability from the cumulative probability on the Johnson curve.

   c) The test statistic is the largest unsigned difference between the two cumulative probabilities.

   d) Compare the test statistic to a lookup table (e.g. Table 16 in Kendall & Stuart (1977)). In the limit, the lookup value tends to z/√n, where z is the number of standard deviations for the confidence limit and n the number of observations.

5.9 Test Details


For the experiment, a range of caplets was tested. See Appendix 1 for details of the
example trades. The caplets were in the money, at the money, and out of the money, and
varied between three months and one year in maturity. The first step was to calculate the
Johnson parameters using the moments of the caplets.

The caplet prices and sensitivities were derived using a Black 76 model (see Hull & White (1993)). The discount factors were calculated using a straightforward cash/swaps bootstrap.
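
For reference, a minimal Black 76 caplet pricer of the kind used here might look like the sketch below. The strike and volatility are those of deal 1 in Appendix 1; the forward rate, discount factor and day-count handling are simplified assumptions rather than the thesis bootstrap.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def black76_caplet(forward, strike, vol, expiry, accrual, discount, notional=1.0):
    """Black 76 price of a caplet paying accrual * max(L - K, 0) at the period end,
    discounted by the factor to the payment date."""
    d1 = (log(forward / strike) + 0.5 * vol**2 * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    return notional * discount * accrual * (forward * N(d1) - strike * N(d2))

# Deal 1 from Appendix 1: strike 5.992%, forward volatility 9%, roughly a 3M caplet
# starting in 3M. The forward rate and discount factor below are assumed values.
print(black76_caplet(forward=0.0575, strike=0.05992, vol=0.09,
                     expiry=0.25, accrual=87 / 365, discount=0.985))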

Mina & Ulmer’s equations were used to calculate the moments of each cap.

Simulation
The simulation followed the Monte Carlo method set out in chapter 7 of the RiskMetrics
Technical Document. The underlying rates of the US Dollar interest curve, corresponding
to a range of maturities, were sampled using a Monte Carlo engine. The engine used a
variance-covariance matrix, representative of USD interest rates in 1996, to generate joint
distributions for the rates. This matrix is the same one reproduced for the worked example
in section 5.6.
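
A minimal sketch of this scenario generation, assuming correlated normal returns produced from the Cholesky factor of the covariance matrix (the 3M/6M block of Table 18 is used here and the vector of current vertex values is assumed; whether the returns are applied to rates or to vertex prices follows the RiskMetrics convention and is simplified below):

```python
import numpy as np

rng = np.random.default_rng(42)

# 3M/6M block of the Table 18 covariance matrix (one-day return covariances)
cov = np.array([[8.54e-10, 1.87e-9],
                [1.87e-9,  6.99e-9]])
today = np.array([0.055, 0.057])          # assumed current values of the two vertices

# Correlated normal returns via the Cholesky factor: cov = L @ L.T
L = np.linalg.cholesky(cov)
z = rng.standard_normal((10_000, 2))
returns = z @ L.T

# Lognormal evolution of the vertices, as in the RiskMetrics Monte Carlo method;
# each scenario would then be fed to the option pricer to build the return cdf
scenarios = today * np.exp(returns)
print(scenarios[:3])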

Kolmogorov-Smirnov test
This test will take as input two sampled distributions. The first input, in this case, is the
option distribution calculated from the Monte Carlo simulations of the US Dollar yield
curve. The second is the return distribution from the inverted Johnson transformation of
the simulated standard normal variable54. All 10,000 data points on the Johnson

54 The algorithm for the inverse transform is exactly as Algorithm 100 in Hill (1985)


transformation are compared to the simulated option return distribution. For the sections
of the distributions that overlap, this return was matched to a cumulative probability from
the simulation results, using a linear interpolation function on the simulation table55. The
maximum difference between this probability and the original percentile is the
Kolmogorov-Smirnov 2-sample statistic (k-statistic). This is a test that compares two
sample distributions, in this case the simulated option return and the transformed
distribution. Cumulative frequency charts for the example distributions are given at the end
of this chapter. Table 16 in Kendall & Stuart (1977) gives the confidence limit of the
statistic for various sample sizes. If the multiplier for the confidence limit is z, and the number of observations is n, then in the limit the lookup value tends to z/√n. In these results, a significance level of 95% was used and n is 10,000, so the limit was calculated as 1.92/√10000, i.e. 0.0192. Hence, we reject the hypothesis that the distributions are the same if there is any value of the return for which the cumulative probability of the simulated distribution and the cumulative probability of the inverted Johnson transform differ by more than 0.0192.
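
A short sketch of this comparison, assuming the two return samples are held in arrays (`sim_returns` from the full-revaluation simulation, `model_returns` from the Johnson transform):

```python
import numpy as np

def ks_two_sample(sim_returns, model_returns):
    """Largest unsigned difference between the two empirical cdfs, evaluating the
    simulated cdf at each model return point by linear interpolation."""
    sim_sorted = np.sort(sim_returns)
    model_sorted = np.sort(model_returns)
    model_cdf = np.arange(1, len(model_sorted) + 1) / len(model_sorted)
    sim_cdf_at_model = np.interp(model_sorted, sim_sorted,
                                 np.arange(1, len(sim_sorted) + 1) / len(sim_sorted))
    return np.max(np.abs(model_cdf - sim_cdf_at_model))

# Reject H0 at 95% if the statistic exceeds 1.92 / sqrt(n), i.e. 0.0192 for n = 10,000
# k = ks_two_sample(sim_returns, model_returns)
# reject = k > 1.92 / np.sqrt(10_000)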

5.10 Results of Experiment


Sixteen caps were processed. In addition, all the caps are combined together in a portfolio,
with each cap given a weighting of 1/t, where t is the period for which the strike rate
applies, expressed using the basis of the strike rate, so that the theoretical option returns
can be summed to provide a total return for the portfolio. The Johnson fit process is
applied to the return of the portfolio. Using the Mina & Ulmer approach to calculate
moments, 10 of the 16 caps passed the significance test. The method also produced a
transform for the portfolio that passed the significance test. The results of the fitting
process for the individual caps and the portfolio are presented in Appendix 2.

The JP Morgan calculated moments, µ, σ, √β1 and β2, correspond to the mean, standard
deviation, skewness coefficient and kurtosis coefficient, calculated for each cap using the
method outlined in section 5.6. The corresponding moments from the simulation are
calculated using Excel worksheet functions. If the JP Morgan results are accurate, and the
simulation runs without bias or discrepancy, these moments should match.
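
The cross-check amounts to computing sample moments of the simulated returns; a sketch is given below, with scipy's `skew` and `kurtosis` standing in for the Excel worksheet functions (`fisher=False` reports kurtosis on the same β2 scale as the JP Morgan moments).

```python
import numpy as np
from scipy.stats import kurtosis, skew

def sample_moments(returns):
    """mu, sigma, sqrt(beta1), beta2 of a simulated return sample."""
    r = np.asarray(returns)
    return (r.mean(),
            r.std(ddof=1),
            skew(r),
            kurtosis(r, fisher=False))   # beta2 rather than excess kurtosis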

55 The use of 10,000 data points for the simulation makes it possible to justify a linear interpolation method


The Johnson fit parameters, δ, γ, λ, ξ are outputs from the Hill et al. algorithm. λ is a redundant parameter for the lognormal fit, and is set to ±1 by the fitting algorithm (negative for the negatively skewed caps).

5.11 Analysis
Caps 1-3 form a strip of short-dated, at the money caps. These caps should have quite high
gamma, and so we would expect the non-linear approximation to be stretched, per the
previous research, and the results to reflect this. We would expect the quality of fit to
improve as the maturity increased.

Caps 4-6 are a strip of short-dated, out of the money caps. The pay-off curve would
essentially be linear for these caps. We would expect the fitting process to perform better
for these caps than the at the money caps. The higher volatilities for out of the money caps
should not make any difference, since the options are so far out of the money, the volatility
has little chance to have an effect.

Caps 7-9, 10, 12 and 13-15 represent strips of out of the money caplets, struck at a
common level, less deeply out of the money than Caps 4-6. We would expect to see a
gradual worsening in the quality of fit as the caps get more in the money for a given
maturity. As maturity varies on the strip the relationship becomes more complex. We
expect long maturity caps to have less gamma, but the yield curve rises with maturity, so
the long maturity caps are closer to being in the money and may have higher gamma than a
shorter dated cap, more out of the money. In this case, maturity is the dominant factor and
the longer-dated caps have less gamma. We should expect the Johnson fit to perform
better for these caps.

Cap 16 is a short-dated cap, which we expect to be the greatest challenge for the
methodology. Slightly in the money, the cap has the highest gamma risk of any of the test
caps, and would represent the biggest practical problem for the risk manager.

Cap 17 is a long dated cap, with maturity of over two years. Although it is currently just in
the money, the gamma is moderate. We would expect the Johnson fit to provide good
results with this cap.


Figure 8: K-statistic vs Option Strike/Underlying Forward (scatter of the K-statistic, 0 to 0.2, against the strike ratio, 0 to 2, for the Mina & Ulmer moments; the 'Reject Ho' threshold is marked and Cap 16 stands out as an outlier)

The first assertion we have made is that options that are near the money will perform
poorly in the fitting process. All of the options that are at or near the money have
moderate to high levels of gamma. Figure 8 depicts the relationship between moneyness,
measured as the strike ratio (strike price over forward price) and fit quality (k-statistic).
Contrary to our expectation, the best fits seem to be when the option is at the money. The
exception is Cap 16, for which the choice of a normal distribution has led to a poor fit.
Good fits are also obtained with the Mina & Ulmer moments when options are well out of
the money. These options are among those with the lowest gamma.


Figure 9: K-statistic vs Option Expiry (scatter of the K-statistic, 0 to 0.2, against expiry in years, 0 to 2.5, for the Mina & Ulmer moments; the 'Reject Ho' threshold is marked and Cap 16 stands out as an outlier)

Our second assertion is that maturity is dominant in influencing the gamma, and will thus
be dominant in determining the quality of the fit. Figure 9 depicts the relationship between
maturity and fit quality. We can see that maturity is not dominant in determining the fit
quality. For each maturity, we have had variable success in matching the simulated curve.
Indeed, were we to plot the quality of fit against gamma itself, we would still get a rather
confused picture.


Figure 10: K-statistic vs Reference Portfolio Skew (scatter of the K-statistic, 0 to 0.2, against the skew of the simulated distribution, -0.6 to 1.2, for the Mina & Ulmer moments; the 'Reject Ho' threshold is marked and Cap 16 stands out as an outlier)

To address this confusion, we look at a more immediate source of variance: the skew of
the simulated distribution. Figure 10 shows fit quality as a function of the skew calculated
from the option simulation. For the moments calculated using the Mina approach, there is
some evidence that a quadratic fit could be performed.


Figure 11: K-statistic vs calculated portfolio skew (scatter of the K-statistic, 0 to 0.2, against the skew calculated from the Mina & Ulmer moments, -0.3 to 0.8; the 'Reject Ho' threshold is marked and Cap 16 stands out as an outlier)

The skew of the simulation is not the only skew we have calculated in the fitting process.
We have also derived a skew for the option from the Mina equations, and we can also
show the relationship between the quality of fit and that skew. Figure 11 shows the
relationship between quality of fit and the calculated skew used for the fit. Again we see the
quadratic relationship for the Mina calculated moments. A similar relationship exists for
the calculated kurtosis.


Figure 12: K-statistic vs transformation type (scatter of the K-statistic, 0 to 0.2, against the Johnson fit type, x-axis 0 to 4, for the Mina & Ulmer moments; the 'Reject Ho' threshold is marked and Cap 16 stands out as an outlier)

We suspect that the actual family of fit chosen may be more important to the success of
the fit than any of the other factors. Figure 12 shows the quality of fit versus the type of fit
chosen. From this, we can see that type 1 (lognormal) fits are, for this set of options, more
successful than type 3 (bounded) or type 4 (normal).

A linear regression analysis shows some evidence of a complex relationship between the K-statistic and these candidate explanatory variables. The table below shows the results of the regression analysis.

                Coefficient   Standard Error   t Stat
Intercept        -0.25961      0.017243        -15.0562
Strike ratio     -0.02322      0.008218         -2.82566
Expiry (t)        0.145665     0.015057          9.673938
Volatility (σ)    3.306792     0.342236          9.662311
σt               -2.6288       0.314332         -8.36315
Gamma             0.000505     2.01E-05         25.08081
Kurtosis          0.057186     0.006235          9.172198
Table 23: Linear regression of the Kolmogorov-Smirnov statistic on the candidate explanatory variables

These values produce an R2 for the regression of 0.61. The 95% confidence intervals for the coefficients of the explanatory variables are wide, as the standard errors are relatively large. Taken with the low R2, we conclude that we should not rely on this analysis as a fundamental explanation of the dependencies of the fit. We can see from Figures 8-12 that Cap 16 in any case dominates the regression.
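
For completeness, the regression in Table 23 can be reproduced with a short ordinary least squares calculation of the kind sketched below; the design matrix `X` would hold a constant column plus the strike ratio, expiry, volatility, σt, gamma and kurtosis of each cap, and the arrays here are placeholders rather than the thesis data.

```python
import numpy as np

def ols(y, X):
    """OLS coefficients, standard errors, t-statistics and R^2 for y = X b + e."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = resid @ resid
    tss = ((y - y.mean()) ** 2).sum()
    s2 = rss / (n - p)                          # residual variance
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    return beta, se, beta / se, 1.0 - rss / tss

# y: K-statistics of the caps; X: column of ones plus the explanatory variables
# beta, se, t_stat, r2 = ols(k_stats, X)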


Figure 13: Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 7

Figures 13 through 16 show the simulated and RiskMetrics method calculated S-curves.
Figure 13 shows Cap 7, a short dated cap near maturity, which we expected to perform
poorly, due to its high gamma. In fact, the plots are almost on top of one another, with just
a small deviation in the tails, which is the source of the K-statistic of 0.012 using the Mina
and Ulmer moments.


Figure 14: Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 16

Figure 14 shows the S-curves for Cap 16, the short dated cap with high gamma. In this
case, the fit has chosen a normal parameterisation, probably because the low skew and
kurtosis coefficients are below the tolerance settings in the fitting algorithm. However, we
see from the S-curves that the choice of normal fit type leads to a very poor fit with the
simulated distribution and a k-statistic of 0.189.


Figure 15: Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 10

In figure 15 we see the S-curves for Cap 10, for which a bounded fit has been obtained.
In this case, the deviation is not so marked, and the fit, although irregular, roughly
follows the shape of the simulated returns cdf. Although the fit has failed the
significance test, the option value is not very volatile, so the failure does not matter for
the risk of the option itself. The option has a value of 12bp, but the VaR numbers are
approximately 0.2bp using the Mina & Ulmer approach or the simulation.


Figure 16: Cumulative frequency of returns using delta-gamma-Johnson and simulation: portfolio

Figure 16 shows the S-curves for the portfolio of options as a whole. We can see that,
although the method has failed to fit individual options within the portfolio, it has led to a
good fit to the distribution of the portfolio when taken as a whole.

5.12 Conclusion
This chapter set out to investigate the effectiveness of the Johnson fitting process in
managing non-linear risk. For a range of interest rate options, most with significant gamma
values, we applied the RiskMetrics methodology to obtain the option cdf. By comparing
this to a cdf obtained by simulating the returns directly, we were able to test the quality of
the fitting process. We saw that, for many of the options, the fitting process results in a
cdf that is not significantly different to the simulated return. However, for some options,
particularly those with a higher skew, or those with a very small skew, or those with a
kurtosis out of line with the skew (forcing a bounded fit), the distribution is significantly
different. The most extreme example in our test data was Cap 16. We expected this cap to
be difficult to manage, owing to its short maturity of ten days and the nearness of the
underlying forward price to the strike price, which in combination leads to a high gamma
value. In the fit process, it appears that the calculated skew was so small that the fitting
algorithm chose a normal distribution, even though this was inappropriate for the levels of kurtosis that the option return distribution possesses. The result was a poor fit to the tails
of the distribution.

We also looked at the effectiveness of the RiskMetrics methodology in measuring the risk of the entire portfolio of options. For our portfolio, the effects of skew and kurtosis, which had caused problems with individual positions, became less severe, and the method produced a good fit to the simulated return cdf. In this case, the risk manager would have been justified in using the RiskMetrics delta-gamma-Johnson method to model the portfolio risk. It is likely that there are many practical portfolios for which this is true. However, it would be straightforward to construct a multiple-position portfolio from our sample options, or similar trades, that leads to a type 4 (normal) fit. In this case, the VaR of the portfolio could be overstated, possibly leading to a higher regulatory charge.

For the risk manager with a non-linear portfolio, clearly it is wise to look at the
composition of the portfolio before using the RiskMetrics method. Some analysis, similar
to the analysis in this chapter, would establish whether the approach would suit the current
make-up of the portfolio. By adding stress trades, the risk manager could determine the
limits of the approach in the context of the portfolio, and thereby establish limits on
trading activity that reflect the risk management constraint.

6 CONCLUSIONS
Since the G30 report on the trading of derivatives, Value at Risk has grown in popularity as
a way of assessing the risks within a trading activity. The Value at Risk concept is now
enshrined in company reports. RiskMetrics has become the dominant off the shelf market
risk management methodology. Risk managers actively use the RiskMetrics methodology
to estimate the Value at Risk for the portfolios they oversee. More complex portfolios,
such as those containing options, require a significant investment in computer hardware to obtain the daily VaR estimate needed.

RiskMetrics contains a mechanism for estimating the VaR of these portfolios, without
carrying out a large number of simulations. The implication is that VaR estimates can be
obtained more cheaply, or in a shorter amount of time. However, there is little information
publicly available that either supports or cautions against the use of this approach. There is
some research that suggests that it is difficult to use reliably, but no results to support this
claim.

In the course of this thesis, we have developed a procedure that can be used to assess the
quality of the methodology in capturing the non-linear exposure to the underlying rates and
prices, for any portfolio of assets that can be priced, for which we can obtain a covariance
matrix. These conditions are also required to implement the methodology successfully. The
procedure uses a standard statistical test to validate the methodology at any desired level of
granularity. We applied the procedure to a very limited set of test cases, with mixed results.
Many of the test cases are captured very well by the RiskMetrics approach. Others are not
captured well, but the cash value of any discrepancy is likely to be small in the context of
portfolio VaR.

The outcome of the research is that there are some non-linear portfolios for which the
methodology is adequate, and some for which the methodology will overstate risk. The test
cases are too limited to make any statement about the proportion of real life portfolios that
the methodology may cover. The outline of further research below considers other test
cases, which would allow a more comprehensive statement to be made.

The research has not attempted to determine the sufficient characteristics that a portfolio
must have for the VaR to be significantly overstated. The research does show that
excessive skew or kurtosis may tend to lead to a bad fit, but did not derive any simple tests that can be used by the risk manager in preference to following the full test procedure for
the portfolio. The outline of further work below describes two approaches that may
achieve this goal.

The research has not focussed on the VaR values themselves. Commonly, the quality of a
VaR methodology is assessed using back testing. This requires the predicted VaR to be
plotted against the portfolio P&L. The outline of further work includes more detail on this
procedure.

6.1 Further work


This research can be extended along several lines. Firstly, the test methodology can be
applied to a wider range of circumstances, including those set out below. Secondly, other
areas of research will address some outstanding questions of interest.

Structured trades
The portfolio in our test case was entirely made up of long call options. For some financial
institutions, this may be a sufficient test, but most have more complex trading portfolios
made up of long and short positions in calls and puts, with exotic features like barriers.
Often these trades have further structure to them, which makes the overall payoff curve of
the portfolio quite irregular. Further research would investigate whether these types of
transactions are suitable for the RiskMetrics Delta-Gamma-Johnson approach, either at the
structure or the portfolio level. The research would apply the existing methodology and
data to these trades.

Other underlyings
The test portfolio was dependent only on USD interest rates. The research could be
extended to cover additional asset types, such as other currency interest rates, FX rates,
stock prices and commodities. For this research, the approach would be largely as set out
above, with the additional step of obtaining market data and a covariance matrix to cover
all the assets in the portfolio. Covariance matrices are available without cost on the
RiskMetrics website, with a six month lag. The price process for other markets may be
different to USD (less normal, for instance). The researcher could study the effect that this
has on the test portfolio. This could lead the risk manager to mandate different levels of
trading in different types of underlying.


Vega and theta risks


The RiskMetrics methodology could be extended to cover vega and theta risks. Further research
could determine the form of the moment equations to include these risks. The test process
would also need to be extended to incorporate the additional risk factors in the simulation
of the portfolio, both when obtaining the sensitivities required for the moment calculation,
and when obtaining the cdf of the test portfolio through full revaluation (and delta-gamma-
vega-theta approximation). The test methodology developed in this thesis assumes that
valid covariances can be obtained between all pairs of risk dimensions. If implied volatility
of an underlying is a risk dimension, the covariance treatment is less clear, and some
thought would have to go in to the production of the joint simulation results and the use
of the covariance matrix in the moment calculation.

Hybrid instruments
Hybrid instruments, such as convertible bonds that can be exchanged into stocks
according to the terms and conditions of the bond, are traditionally the most difficult to
price and risk manage. We would expect such a product to be a challenge for the
RiskMetrics methodology. New research could extend the framework established in Other
underlyings to include a test of convertible bonds. Market data would be required for both
stock prices underlying the convertibles and interest rates. Convertible pricing itself is a
significant task, so the existing Excel environment for the pricing would probably be
undesirable, unless a very powerful PC was being used (or fewer simulations were
undertaken for the Kolmogorov test). If results from Vega and Theta Risks are available,
these could be incorporated into the test methodology. The researcher would have to
decide whether or not to add the complication of using the RiskMetrics one-factor model
for stock prices. This reduces the requirement for market data, but may introduce an
additional source of discrepancy.

Regulatory VaR analysis


One of the big motivations for a risk manager is whether the regulator will accept a risk
management approach for the portfolio. In the UK, the FSA mandates performance bands
for VaR models, based on the number of times that the actual loss exceeds the VaR estimate.
These bands influence the capital charge that results from the VaR model, and even
determine whether or not the model can be used. For any of the test portfolios, further
research could establish which band the VaR methodology falls into.


To carry out this research, time series data is required for all rates and prices used to value the portfolio. Two years and three months of data are required to follow the recommendations of the regulator. The research would establish the VaR estimate for the portfolio on each day of the two-year period, and would also value the portfolio on the day following each VaR estimate. The number of losses in excess of the VaR estimate dictates the banding of the methodology.
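
The banding test itself reduces to a simple exceedance count. A sketch follows, assuming the daily VaR estimates and next-day P&L are held in two aligned arrays and using the Basel traffic-light thresholds (up to 4 exceptions green, 5-9 yellow, 10 or more red, per 250 business days) as a stand-in for the FSA bands.

```python
import numpy as np

def backtest_band(var_estimates, next_day_pnl):
    """Count days on which the loss exceeds the VaR estimate and map the count to a
    traffic-light band (Basel thresholds per 250 days, used here as an assumption)."""
    exceptions = int(np.sum(-next_day_pnl > var_estimates))   # losses beyond VaR
    if exceptions <= 4:
        band = "green"
    elif exceptions <= 9:
        band = "yellow"
    else:
        band = "red"
    return exceptions, band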

Forcing the fit process


For our test data, lognormal fits performed better than normal fits or bounded fits. Further
research could establish whether the fitting process can be tailored to always return a
lognormal fit, and whether the results obtained are superior to the normal or bounded fits.
The lognormal fit can be forced by adjusting the input kurtosis – effectively we are fitting
to the first three moments only of the distribution. The test procedure is otherwise
identical to the procedure we have used in Chapter 5. The results will show how important
kurtosis is to the quality of the fit.
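
One way to force the lognormal fit is to replace the input kurtosis with the value that lies exactly on the lognormal line for the given skew. For a lognormal variable with ω = exp(δ⁻²), β1 = (ω − 1)(ω + 2)² and β2 = ω⁴ + 2ω³ + 3ω² − 3, so ω can be solved from the skew and the consistent kurtosis recovered; a sketch (assuming non-zero skew):

```python
import numpy as np
from scipy.optimize import brentq

def lognormal_line_kurtosis(sqrt_beta1):
    """Kurtosis beta2 on the lognormal (S_L) line for a given skew sqrt(beta1):
    solve (omega - 1)(omega + 2)^2 = beta1 for omega >= 1, then
    beta2 = omega^4 + 2*omega^3 + 3*omega^2 - 3."""
    beta1 = sqrt_beta1 ** 2
    omega = brentq(lambda w: (w - 1.0) * (w + 2.0) ** 2 - beta1, 1.0 + 1e-12, 10.0)
    return omega**4 + 2 * omega**3 + 3 * omega**2 - 3

# e.g. for Cap 1's calculated skew of 0.169 (compare with the beta2 of roughly 3.04 in Table 19)
print(lognormal_line_kurtosis(0.169))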

Fitting the tail


Bounded fits for our test data deviated most significantly in the tails of the distribution.
However, it is the tails we are interested in, in particular the lower tail. This was the basis of
a previous treatment of non-linear risk in RiskMetrics, the Cornish-Fisher expansion. The
expansion fits a polynomial to the tail of the distribution. The Cornish-Fisher expansion
has some undesirable properties that make it of limited use 56, but it is possible that we
could modify the Johnson fit algorithm so that the fit for the tail of the distribution is
good. The starting point for this research would be to look at the different types of fit, and
to understand more closely how the parameters obtained for the fit relate to the shape of
the resultant cdf.

56 See Mina & Ulmer (1999)


Further analysis on fit quality


We have attempted in this thesis to show some relationships between portfolio
characteristics and the quality of the fit. A useful tool for the risk manager would be a quick
test that can be carried out on the portfolio, without the need to follow our full test
procedure. For example, if there was a threshold level of gamma, beyond which the fit will
not work, the risk manager will know immediately that there is no option but to use full
simulation. In fact, this research found that the relationship between gamma and fit quality
was more complex than this, and it is likely that several other variables may be involved.
One approach would be to return to the portfolio and the moment calculations, and
selectively stress portfolio parameters until a good fit goes to bad, or vice versa.

6.2 Summary
The test methodology described in this research requires some effort on the part of the risk
manager, firstly to collect market data, then to determine covariance matrices, and finally to
use this information to perform the Johnson fit and model the portfolio. While this effort
is not without its challenges, these would in most cases have to be overcome in
implementing the risk management procedure. The test provides the option of validating
the methodology, before significant systems costs have been incurred implementing the
methodology.

The risk manager using RiskMetrics thus has a tool with which to analyse the suitability of
portfolios for the non-linear RiskMetrics Delta-Gamma-Johnson approach. While there are
still practical problems in applying this tool to large portfolios, many of these problems
have to be solved in any case in the course of the RiskMetrics implementation.

APPENDICES
APPENDIX 1

Deal Data

    Caplet start date   Caplet end date   Strike rate   Forward volatility (annualised)

1. 30-Dec-96 27-Mar-97 5.992% 9.000%
2. 27-Mar-97 30-Jun-97 6.133% 12.000%
3. 30-Jun-97 30-Sep-97 6.364% 15.000%
4. 30-Dec-96 27-Mar-97 8.000% 12.000%
5. 27-Mar-97 30-Jun-97 10.000% 15.000%
6. 30-Jun-97 30-Sep-97 12.000% 20.000%
7. 30-Dec-96 27-Mar-97 6.500% 10.000%
8. 27-Mar-97 30-Jun-97 7.000% 13.000%
9. 30-Jun-97 30-Sep-97 8.000% 18.000%
10. 30-Dec-96 27-Mar-97 7.000% 10.500%
12. 30-Jun-97 30-Sep-97 7.000% 16.000%
13. 30-Dec-96 27-Mar-97 7.500% 11.500%
14. 27-Mar-97 30-Jun-97 7.500% 13.500%
15. 30-Jun-97 30-Sep-97 7.500% 17.000%
16. 10-Oct-96 10-Jan-97 6.350% 9.000%

All rates are quoted on Actual/365 basis

APPENDIX 2

Results of Experiment
(Mina & Ulmer Methodology)

JP Morgan Calculated Moments Simulation Johnson fit Kolmogorov-Smirnov Test


Test no µ σ √β1 β2 µ σ √β1 β2 Type δ γ λ ξ K-stat Accept 95%
CAP001 4.45E-06 0.000134 0.168677 3.042739 2.35E-06 0.000134 0.106381 0.03214 1 107.6635 17.81835 1 -0.00238 0.008535 FALSE 0.005709
CAP002 4.98E-06 0.000183 0.15981 3.034573 3.4E-06 0.000183 0.112585 0.034693 1 106.7262 18.80347 1 -0.00343 0.007514 FALSE 0.003782
CAP003 1.2E-05 0.000241 0.267613 3.103771 5.38E-06 0.000241 0.135385 0.046873 1 66.59544 11.26197 1 -0.0027 0.010895 FALSE 0.007803
CAP004 7.41E-07 2.1E-05 0.180273 3.04868 3.64E-07 2.1E-05 0.104831 0.02311 1 132.7446 16.67641 1 -0.00035 0.008766 FALSE 0.006323
CAP005 -2.7E-07 1.04E-05 -0.15461 3.032361 -1.9E-07 1.04E-05 -0.1086 0.03418 1 165.3402 19.43369 -1 0.000202 0.007409 FALSE 0.004607
CAP006 -1.3E-07 1.99E-06 -0.36246 3.190305 -5.9E-08 1.99E-06 -0.18143 0.070451 1 91.94394 8.346689 -1 1.64E-05 0.013115 FALSE 0.013061
CAP007 3.58E-06 6.15E-05 0.295338 3.130892 1.86E-06 6.14E-05 0.183019 0.065887 1 75.3904 10.21497 1 -0.00062 0.011529 FALSE 0.009209
CAP008 3.38E-06 5.01E-05 0.39426 3.2106 2.25E-06 5.01E-05 0.272722 0.126029 1 60.51545 7.685014 1 -0.00038 0.010489 FALSE 0.008575
CAP009 4.64E-06 3.83E-05 0.648916 3.610137 1.99E-06 3.8E-05 0.320132 0.183648 1 41.0317 4.74564 1 -0.00018 0.025581 TRUE 0.023907
CAP010 2.43E-06 1.49E-05 0.823563 4.018369 1.24E-06 1.46E-05 0.513235 0.393325 3 4.75221 2.538332 0.000314 -4.1E-05 0.030919 TRUE 0.030326
CAP012 9.88E-06 0.000126 0.419762 3.25519 4.34E-06 0.000126 0.208986 0.088399 1 50.67778 7.227504 1 -0.0009 0.015493 FALSE 0.014319
CAP013 1.46E-06 8.25E-06 0.889017 4.191655 7.38E-07 8.06E-06 0.54717 0.394444 3 4.579653 2.381001 0.000169 -2.1E-05 0.034605 TRUE 0.033846
CAP014 1.99E-06 1.16E-05 0.992361 4.343714 1.31E-06 1.14E-05 0.695483 0.691313 3 3.26887 1.875473 0.000162 -2.4E-05 0.042427 TRUE 0.03195
CAP015 6.99E-06 7.1E-05 0.528818 3.405033 3.04E-06 7.06E-05 0.261532 0.129027 1 45.16004 5.77386 1 -0.0004 0.020133 TRUE 0.018603
CAP016 1.69E-07 0.000116 0.004451 3.000051 2.03E-07 5.14E-05 0.022745 -0.08266 4 -0.00146 8611.528 0 0 0.188826 TRUE 0.099207
CAP017 -2.7E-06 0.000128 -0.09092 3.015039 -5.5E-07 0.000129 -0.02655 0.110568 1 180.3849 33.01622 -1 0.004238 0.009694 FALSE 0.007333

REFERENCES
Alexander, C. (1996), Evaluating The Use of RiskMetrics As A Risk Measurement Tool For Your
Operation: What Are Its Advantages And Limitations?, Derivatives: Use, Trading and
Regulation 2:3 pp277-285
Alexander, C. & C. Leigh (1997), On the Covariance Matrices used in Value-At-Risk Models,
Journal of Derivatives 4:3 pp50-62
Allen, M. (1994), Building a Role Model, Risk 7 (September), 73-80
Arthur Andersen (2001), Risk Management: An Enterprise Perspective (FEI Research Foundation
Andersen Survey)
Artzner P., F. Delbaen, J. Eber & D. Heath (1999) Coherent Measures of Risk , Mathematical
Finance 9, 203-208
Bank of England (1995), Draft Regulations to Implement the Investment Services and Capital
Adequacy Directives.
Basle Committee on Banking Supervision (1988), International Convergence of Capital
Measurement and Capital Standards (Basel Committee Publications no. 4),
http://www.bis.org/publ
Basle Committee on Banking Supervision (1993), Measurement of Bank’s Exposure to Interest
Rate Risk (Basel Committee Publications no. 11), Bank for International Settlements
Basle Committee on Banking Supervision (1996), Amendment to the Capital Accord to
Incorporate Market Risks (Basel Committee Publications no. 24), http://www.bis.org/publ
Bishop, M. (1996), A survey of Corporate Risk Management: Too hot to handle? – A new nightmare
in the boardroom, The Economist
Britten-Jones, M. & S. Schaefer (1999) Non-Linear Value at Risk, European Finance Review
22
Cardenas J., E. Fruchard, E. Koehler, C. Michel & I. Thomazeau (1997), VaR: One Step
Beyond, Risk 10 10 pp 72-75
Chaumeton, L., G. Connor & R. Curds (1995), Worldwide Factors in Stock and Bond Returns,
BARRA International
Chew, E. (1994), Shock Treatment, Risk 7 9, 63-70
Collins, B., The Whence, How And Why Of OTC Equity Derivatives,
http://www.wcsu.ctstateu.edu/finance/newsletter/nlfall98.htm
Crnkovic, C. & J. Drachman (1996), Quality Control, Risk 9 9, 138-143
Dunbar, N. (2000), Inventing Money, Wiley
Dupire B. (Ed.) (1999) Monte Carlo Simulation: Methodologies and Applications for Pricing and Risk
Management, Risk Publications
Elton, E. & M. Gruber (1995), Modern Portfolio Theory & Investment Analysis (5th Ed), Wiley.
Fabozzi, F. (1993), Fixed Income Mathematics: Analytical and Statistical Techniques. Probus
Falloon, W. (1995), 2020 Visions, Risk 8 10, 43-45
FSA (1998, 1999), Guide to Banking Supervisory Policy, Financial Services Authority.
Garman, M. (1996) Improving on VaR, Risk 9 5 61-63
Glass, G. (1996), untitled conference paper, Risk 96.
Glasserman, P., P. Heidelberger & P. Shahabuddin (2000a), Efficient Monte Carlo Methods for
Value-at-Risk , IBM Research Division
Glasserman, P., P. Heidelberger & P. Shahabuddin (2000b), Portfolio Value-at-Risk with
Heavy-Tailed Risk Factors, IBM Research Division
Global Derivatives Study Group (1993), Derivatives: Practices and Principles, Group of Thirty
Griffiths, P. & I. Hill (Eds.) (1985), Applied Statistics Algorithms, Royal Statistical Society
Hendricks, D. (1996), Evaluation of Value at Risk Models Using Historical Data, Federal
Reserve Bank of New York Economic Policy Review 2 (April), 39-70

Hull, J. and A. White (1993), Options, Futures & Other Derivative Securities (2nd Ed.), Prentice-
Hall
Jorion, P. (1997) Value at Risk: The new benchmark for controlling market risk , Irwin Professional
JP Morgan (1994), RiskMetrics Technical Document v2, Morgan Guaranty Trust Company of
New York, New York
JP Morgan (1995), RiskMetrics Technical Document (3rd Ed), Morgan Guaranty Trust
Company of New York, New York
JP Morgan (1996), RiskMetrics Technical Document (4th Ed), Morgan Guaranty Trust
Company of New York
JP Morgan (1996 a), RiskMetrics Monitor, Q3 1996
JP Morgan (1997), Introduction to CreditMetrics, Morgan Guaranty Trust Company of New
York
Keating, C. & W. Shadwick (2002), A Universal Portfolio Measure, The Financial
Development Centre, London
Kendall, Sir M. & A. Stuart (1977), The Advanced Theory of Statistics, Vol 1 (4th Ed.), London:
Griffin
Laubsch, A. & A. Ulmer (1999), Risk Management: A Practical Guide, RiskMetrics Group
Lawrence, C. & G. Robinson (1995), How Safe is RiskMetrics?, Risk 8 (January), 26-29
Li, D. (1999), Value at Risk Based on the Volatility, Skewness and Kurtosis, RiskMetrics Group
Mina, J. & A. Ulmer (1999), Delta-Gamma Four Ways, RiskMetrics Group
Mina, J. & J. Yi Xiao (2001), The Evolution of a Standard, RiskMetrics Group
Pafka, S. & I. Kondor (2001), Evaluating the RiskMetrics Methodology in Measuring Volatility and
Value-at-Risk in Financial Markets, working paper
Pritsker, M. (1997) Evaluating Value at Risk Methodologies: Accuracy versus Computational Time,
working paper
SFA (2000), Board Notice 545, The Securities and Futures Authority
Turner, C. (1996), VaR as an Industrial Tool, Risk 9 (March), 38-40
Wilson, T. (1996), Calculating Risk Capital, Handbook of Risk Management and Analysis
(ed. C Alexander), Wiley
Wonnacott, T. & R. Wonnacott (1990), Introductory Statistics for Business & Economics (4th
Ed), Wiley

Note that JP Morgan (1996) is referenced so frequently in the thesis that it is abbreviated to [TD4].

