
The hidden dangers of Historical Simulation

Introduction

Nowadays Historical Simulation is a well-known method, used by many companies and financial institutions to calculate the VaR of their portfolios. However, this paper tries to show that people do not really know the properties of this method, which also has many limitations.

Value-at-Risk

VaR allows these financial institutions to estimate the maximum loss that a portfolio is likely to suffer over a given time horizon, at a given confidence level. Because of its simplicity, this risk measure has been adopted for regulatory purposes, and financial institutions are therefore very interested in calculating it. But in practice computing VaR is not always easy, because it depends on the joint distribution of all of the instruments in the portfolio. That is why one of the most important assumptions needed to compute it is the choice of the distribution of the risk factors. One well-known method for doing this is historical simulation.

The defining assumption of this method is that the distribution of changes in the value of today's portfolio can be simulated by making draws from the historical time series of past changes.
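
As an illustration, here is a minimal sketch of historical simulation VaR in Python. All names and parameters are illustrative assumptions, and portfolio revaluation is simplified to applying past changes directly:

```python
import numpy as np

def historical_simulation_var(past_changes, confidence=0.99):
    """1-day VaR as an empirical quantile of past portfolio changes.

    Each of the N past changes is treated as an equally likely
    (weight 1/N) draw for tomorrow's change in portfolio value.
    """
    losses = -np.asarray(past_changes)      # losses are negative changes
    return np.quantile(losses, confidence)  # loss exceeded (1-c)% of the time

# Illustrative usage with simulated "historical" changes.
rng = np.random.default_rng(0)
past_changes = rng.normal(0.0, 1.0, size=250)
print(historical_simulation_var(past_changes))
```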

There are two main limitations of this method, which can explain why there was almost no response in VaR to the crash of October 19th, 1987.

Asymmetric changes in risk: the method only measures risk increases when the portfolio has large losses; it does not behave the same way when the portfolio earns large gains. Yet there is an asymmetric relationship between conditional volatility and equity index returns: conditional volatility increases more when index returns fall than when they rise. (A more detailed explanation follows in the next section.)

Conditional volatility: a limitation in one of the method's assumptions. It assumes that the historically simulated returns are i.i.d. through time, because the empirical cdf assigns an equal probability weight of 1/N to each day's return. In reality, the volatility of asset returns changes through time.

BRW Method

This method, by comparison, is less unrealistic than Historical Simulation, because it assigns a higher amount of probability weight to returns from the recent past through an exponential decay factor (the weights must sum to 1). Historical Simulation can thus be seen as a special case of the BRW (Boudoukh, Richardson, and Whitelaw) approach. Thanks to this, the BRW method shows a substantial increase in VaR after the crash of October 19th.
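
A minimal sketch of the BRW weighting scheme, under the assumption that past changes are ordered from oldest to newest; with decay = 1 it collapses back to the equal 1/N weights of plain historical simulation:

```python
import numpy as np

def brw_var(past_changes, confidence=0.99, decay=0.97):
    """BRW VaR: exponentially decaying weights on past changes.

    The most recent observation gets the largest weight, and the
    weights are normalized so that they sum to 1.
    """
    losses = -np.asarray(past_changes)               # oldest first, newest last
    weights = decay ** np.arange(len(losses))[::-1]  # newest gets decay**0 = 1
    weights /= weights.sum()                         # weights must sum to 1
    order = np.argsort(losses)                       # sort losses ascending
    cum = np.cumsum(weights[order])
    idx = min(np.searchsorted(cum, confidence), len(losses) - 1)
    return losses[order][idx]                        # weighted loss quantile
```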

But here there is also a big limitation: for the same crash, the increase occurs only for a long portfolio; for a short portfolio VaR does not increase at all. This is because the method focuses on non-parametrically estimating the lower tail of the P&L distribution. Both methods assume that whatever happens in the upper tail of the distribution, such as a large increase in P&L, contains no information about the lower tail of P&L. This means that large profits are never associated with an increase in the riskiness of returns under either method. In the case of the crash, the short portfolio happened to make a huge amount of money on the day of the crash. So neither method can associate increases in P&L with increases in risk, even though large positive or negative returns can both indicate an important increase in the risk of the portfolio. Both methods are, as was said before, under-responsive to changes in conditional risk.

Main Properties

In order to show how the VaR estimates respond to changes in the underlying riskiness of the environment, the author chooses an estimator with the same properties that is easier to analyze. It is not exactly the BRW method, because the latter smooths the discrete distribution in the above approaches to create a continuous probability distribution, and VaR is then computed from the continuous distribution.

The first property shows that the BRW VaR estimate for t+1 will not be greater than the estimate at time t if the loss at time t is not bigger than the BRW VaR at time t.

When returns follow a GARCH(1,1) process as in the following equations:

(1) $r_t = \sqrt{h_t}\,\epsilon_t$, with $\epsilon_t$ i.i.d. $N(0,1)$

(2) $h_{t+1} = \omega + \alpha r_t^2 + \beta h_t$

and $h_t$ is at its long-run mean $\bar{h} = \omega/(1-\alpha-\beta)$, then

$P(\mathrm{VaR}_{t+1} > \mathrm{VaR}_t) = 2\,\Phi(-1) \approx 0.32$,

where $\Phi(x)$ is the probability that a standard normal random variable is less than $x$.
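
The 32% figure can be checked with a quick Monte Carlo sketch; the GARCH parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.stats import norm

# Illustrative GARCH(1,1) parameters.
omega, alpha, beta = 1e-6, 0.05, 0.90
h_bar = omega / (1 - alpha - beta)  # long-run variance

rng = np.random.default_rng(1)
eps = rng.standard_normal(1_000_000)
r = np.sqrt(h_bar) * eps                      # equation (1), with h_t = h_bar
h_next = omega + alpha * r**2 + beta * h_bar  # equation (2)

# VaR is proportional to sqrt(h), so VaR rises exactly when h rises,
# which happens iff |eps| > 1 when h_t is at its long-run mean.
print((h_next > h_bar).mean())  # close to 0.3173
print(2 * norm.cdf(-1.0))       # 2 * Phi(-1) = 0.3173...
```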

These first two propositions together show that if the VaR estimate is near the long-run average value of VaR, then VaR should increase about 32% of the time, but in practice the estimate will increase only c% of the time. So how badly BRW fails to respond to certain increases in VaR depends on how much VaR is likely to have increased over a single time period without being detected.

When returns follow a GARCH(1,1) process, $h_t$ is at its long-run mean, and $y(c,t)$ is the VaR estimate for confidence level $c$, then the probability that VaR at time t+1 is at least x% greater than at time t, but the increase is not detected at time t+1 using the historical simulation or BRW methods, is:

$\Phi(-k(x)) + \max\{0,\ \Phi(-k(x)) - \Phi(-y(c,t)/\sqrt{h_t})\}$, where $k(x) = \sqrt{1 + ((1+x)^2 - 1)/\alpha}$.

Through a complementary study of 10 different currencies, the author shows that for those currencies whose variance is near its long-run mean, there is a parameter that determines how much of the increase in true VaR goes undetected. There is a substantial probability, around 31%, that increases in VaR will go undetected, although most of those increases will be modest. There is also a probability of about 4% that large increases in VaR will go undetected.
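
A sketch that evaluates the probability from the proposition above; the ARCH coefficient alpha = 0.05 is an illustrative assumption, and with it the probability of any undetected increase for 1% VaR comes out near the 31% figure:

```python
from math import sqrt
from scipy.stats import norm

def prob_undetected_increase(x, alpha, var_quantile):
    """P(true VaR rises by at least x, but HS/BRW do not detect it),
    with h_t at its long-run mean and standard normal innovations.

    x: minimum relative VaR increase (0.0 means any increase)
    alpha: GARCH ARCH coefficient (illustrative assumption)
    var_quantile: y(c, t) / sqrt(h_t), e.g. 2.326 for 1% VaR
    """
    k = sqrt(1.0 + ((1.0 + x) ** 2 - 1.0) / alpha)
    return norm.cdf(-k) + max(0.0, norm.cdf(-k) - norm.cdf(-var_quantile))

print(prob_undetected_increase(0.0, 0.05, norm.ppf(0.99)))  # ~0.31
```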

Also, a VaR increase can go undetected for many days in a row, and the errors accumulate and become much larger.

Simulation performance

The results of the simulation are quite close to the theoretical ones, but there are also important differences.

The correlation of the VaR estimates with true VaR is higher for the BRW methods than for the Historical Simulation methods, which confirms that the methods move with true VaR in the long run. However, the correlations of changes in the VaR estimates with changes in true VaR are quite low. So both are slow to respond to changes in risk, and the average Root Mean Squared Error (RMSE) across the different currencies is approximately 25% of true VaR. It also appears that both methods are conservative on average; however, the risks when VaR is understated are substantial.

Examining how the VaR estimates track true VaR in the British pound example, true VaR and the VaR estimated using the historical simulation and BRW methods tend to move together over the long run, but true VaR changes more often than the estimates, and all three VaR methods respond slowly to the changes. The result is that true VaR can sometimes exceed estimated VaR by large amounts and for long periods of time. Comparing true VaR against the VaR estimates, the errors persist for long periods and sometimes build up to become quite large.

Backtesting

In this paper the author defends the idea that backtesting methods do not have enough power to detect the problems with historical simulation methods. To show this, the author uses a backtest of unconditional coverage, i.e., whether losses exceed VaR at the k percent confidence level more frequently than k percent of the time. Both methods seem to perform well when measured by the percentage of times that losses are worse than predicted by the VaR estimate. So the author's conclusion is that, given the poor performance of these methods shown earlier, unconditional coverage tests have very low power to detect flawed VaR methodologies.

The paper also examines whether the VaR estimates are conditionally correct, that is, whether the event that losses exceeded today's VaR estimate has any predictive power for whether losses will exceed VaR in the future. A VaR exceedance is the event that losses exceed VaR, so correct conditional coverage is often tested by examining whether the time series of VaR exceedances is autocorrelated.
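
A minimal sketch of this exceedance-based idea, with illustrative simulated data and a plain lag-1 sample autocorrelation as the diagnostic:

```python
import numpy as np

def exceedances(losses, var_estimates):
    """Indicator series: 1 on days when the loss exceeded the VaR estimate."""
    return (np.asarray(losses) > np.asarray(var_estimates)).astype(float)

def autocorr(x, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(x[lag:] * x[:-lag]) / np.sum(x * x))

# Illustrative usage: i.i.d. losses against a static 1% VaR estimate.
rng = np.random.default_rng(2)
losses = rng.standard_normal(1000)
exc = exceedances(losses, np.full(1000, 2.326))
print(exc.mean(), autocorr(exc))  # coverage and lag-1 autocorrelation
```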

Computing the autocorrelation of the actual VaR errors, and of the series of VaR exceedances, for 1-day 1% VaR and 1-day 5% VaR, the results show that both methods are slow to adjust to changes in risk. The autocorrelation of the VaR errors at a 1-day lag is about 0.95 for all three methods, and after 50 days it still remains around 0.1 for the best method.

In conclusion, there is high autocorrelation in the VaR errors but much lower autocorrelation in the exceedances. So the power of tests for correct conditional coverage, when based on exceedances, is very low.

Because both alternatives fail, and backtesting is a really important way to check a method, the author proposes a new backtesting option: compare a method's estimates of VaR against true VaR in simulations where true VaR is known.

Variance-covariance method

The author compares the variance-covariance method with exponentially declining weights and with equal weights. Neither this method nor Historical Simulation does a good job of capturing conditional risk, but in comparison the variance-covariance method performs better, because the probability that increases in VaR go undetected is generally low, and the correlation of its measures with true VaR and with changes in true VaR is high. Also, the variance-covariance methods recognize changes in conditional risk whether the portfolio makes or loses money.
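
A minimal sketch of the exponentially weighted variant for a single position (a RiskMetrics-style EWMA variance with an assumed normal distribution; the decay value and seeding are illustrative):

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, confidence=0.99, decay=0.94):
    """Variance-covariance VaR with exponentially declining weights.

    The variance reacts to large squared returns of either sign,
    so gains and losses both move the risk estimate.
    """
    variance = float(np.var(returns[:30]))  # seed the recursion
    for r in returns[30:]:
        variance = decay * variance + (1 - decay) * r ** 2
    return norm.ppf(confidence) * np.sqrt(variance)
```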

But because the variance-covariance method assumes normally distributed returns, BRW and Historical Simulation appear to perform better in comparison in that respect.

Filtered Historical Simulation

This is a new method that captures both the conditional heteroskedasticity of the risk factors, in contrast to the Historical Simulation methods, and their non-normality, in contrast to the variance-covariance method. It is a Monte Carlo based approach which is very similar to computing VaR using fully parametric Monte Carlo.

For a single risk factor, FHS assumes the risk factor r_t is described by the GARCH(1,1) model of equations (1) and (2), with h_{t+1} the conditional volatility of tomorrow's returns. Drawing an innovation ε_{t+1} with mean 0 and variance 1, i.i.d. from the empirical distribution of past standardized residuals, and plugging it into the model, we get r_{t+1}. Using this and h_{t+1} we generate h_{t+2}, and by repeating this 10-day path generation thousands of times we obtain a simulated distribution of 10-day returns conditional on h_t.
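
A sketch of the single-factor FHS recursion, assuming the GARCH(1,1) parameters and the standardized residuals r_t / sqrt(h_t) have already been estimated (all names are illustrative):

```python
import numpy as np

def fhs_paths(omega, alpha, beta, h_next, std_resid,
              n_paths=10_000, horizon=10, seed=0):
    """Filtered Historical Simulation: bootstrap multi-day returns.

    h_next: tomorrow's conditional variance from the fitted GARCH(1,1)
    std_resid: past standardized residuals, i.e. the empirical,
    possibly non-normal, innovation distribution
    """
    rng = np.random.default_rng(seed)
    total = np.zeros(n_paths)
    h = np.full(n_paths, h_next)
    for _ in range(horizon):
        eps = rng.choice(std_resid, size=n_paths)  # empirical innovations
        r = np.sqrt(h) * eps                       # equation (1)
        total += r
        h = omega + alpha * r ** 2 + beta * h      # equation (2)
    return total  # simulated distribution of 10-day returns

# 10-day 1% VaR from the simulated distribution (illustrative usage):
# var_10d = -np.quantile(fhs_paths(1e-6, 0.05, 0.90, 2e-5, resid), 0.01)
```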

This model can also be extended to multiple risk factors: assume N risk factors that follow GARCH processes in which each factor's conditional volatility is a function of lags of the factor and of lags of the factor's conditional volatility, and assume that ε_t, the N-vector of innovations, is distributed i.i.d. through time. As in the single-factor case, the ε_t are identified by estimating the GARCH models for each risk factor, and draws from the empirical distribution of ε_t are made by randomly drawing a date and using the realization of ε_t for that date.
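
The key detail in the multi-factor case is that a whole date's innovation vector is drawn at once, which preserves the factors' comovement without estimating a correlation matrix; a minimal sketch:

```python
import numpy as np

def draw_innovation_vectors(std_resid_matrix, n_draws, seed=0):
    """std_resid_matrix: T x N matrix of standardized residuals,
    where row t holds the N factors' innovations on date t.

    Drawing whole rows (random dates) keeps the cross-correlations.
    """
    rng = np.random.default_rng(seed)
    dates = rng.integers(0, len(std_resid_matrix), size=n_draws)
    return std_resid_matrix[dates]  # each draw is one date's N-vector
```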

Even though it is a multivariate case, it is a simple extension which has two main good properties:

Simple volatility models (no need to estimate a multivariate GARCH model to implement them).

No need to estimate the correlation matrix of the factors; the correlation is modeled through the assumption that ε_t is i.i.d.

But there are also some limitations:

The assumption that volatility depends only on a risk factor's own past lags and its own past lagged volatility can be unrealistic.

The assumption that ε_t is i.i.d. implies that the conditional correlation of the risk factors is fixed through time, which in practice is not the case.

However, the author defends the method against these limitations, saying that they can be fixed by making the modeling more elaborate.

In this paper a preliminary analysis of this method is carried out, and two main problems are found:

The number of sample paths in the FHS can be too small.

The size of the historical sample used for the bootstrap can be too small (two years of historical data is not enough).

The new method has many good properties: it allows for time-varying volatility while the distribution of the risk factor innovations is modeled nonparametrically. But it still needs further development to address its limitations in two different ways:

Continue studying time-varying correlation.

Establish the number of years of historical data needed to produce accurate VaR estimates, because 500 days of daily data (two years) may not be enough to compute VaR at a 10-day horizon (too short to contain enough extreme observations).

Limitations and Review

For the empirical analysis of the Historical Simulation methods, the true value of VaR is unknown, so, as the author says, this analysis is weak because performance can only be measured indirectly, and it is not easy to calculate the errors either.

In the case of the artificial data, even though the true VaR is known, the analysis is still really limited, because it only examines the VaR of simple spot positions in underlying stock indices. The only error is therefore associated with the distributional assumptions; there is no possibility of choosing incorrect risk factors, or of approximating the nonlinear relationship between instrument prices and the factors incorrectly.

Also, the artificial data are generated from models estimated on empirical series, and sometimes such data cannot reflect reality, so the setup may not be entirely realistic. As we have just seen, there are some important limitations in the analysis of this paper, even though it can still demonstrate some of the limitations of the Historical Simulation methods, for example the probability of around 31 percent that increases in VaR will go undetected.

But the paper also offers a new approach which seems to be a good method. As the author says, it needs more development in order to control the limitations that have already been described, but it could still be a good substitute for the method most used now, Historical Simulation. As the paper has shown, that method has many limitations which many people ignore, for example the percentage of VaR increases that go undetected and the asymmetric changes in risk, and these make it a really limited and weak method for calculating VaR.
