Long Term Policy Making for Operational Risk Management in the Basel II Framework
University of Neuchatel, Room ALG, Session 16
Duc Pham-Hi, Head of IT Dept., Professor of Computational Finance
31/03/2014
[Figure: two loss distributions, probability vs. loss occurrences (frequency) and probability vs. total loss (severity), with the mean and a threshold marked]
"Business losses" are frequent, small, and rather regular, both positive and negative. "Catastrophic losses" are rare, very large, and unpredictable, and only negative (the "no free lunch" effect).
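As a hedged illustration of this taxonomy (all parameter values below are invented), the two loss types can be simulated separately: a Gaussian stream for the frequent business losses and a rare, strictly negative jump stream for the catastrophic ones.

```python
import random

random.seed(0)

def business_losses(n, mu=0.0, sigma=1.0):
    """Frequent, small, fairly regular outcomes; both positive and negative."""
    return [random.gauss(mu, sigma) for _ in range(n)]

def catastrophic_losses(n, daily_prob=0.005, scale=50.0):
    """Rare, very large, unpredictable, and only negative (no free lunch)."""
    return [-random.expovariate(1.0 / scale) if random.random() < daily_prob
            else 0.0
            for _ in range(n)]

biz = business_losses(10_000)
cat = catastrophic_losses(10_000)
n_events = sum(1 for x in cat if x < 0)   # only a handful of catastrophes
```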
The two processes are modeled as
\[
dX_{1,t} = \mu X_{1,t}\,dt + \sigma X_{1,t}\,dB_t,
\qquad
X_{2,t} = X_{2,0}\exp(L_t),
\]
so that
\[
dX_{2,t} = X_{2,t^-}\left[\int_{\mathbb{R}^*}\left(e^{z}-1-z\,\mathbf{1}_{|z|\le 1}\right)\nu(dz)\,dt
+ \int_{\mathbb{R}^*}\left(e^{z}-1\right)\tilde N(dt,dz)\right]
\]
after application of Ito's formula, where \(\tilde N(dt,dz) = N(dt,dz) - dt\,\nu(dz)\) is the compensated Poisson random measure and \(\nu(dz)\) is the Lévy measure, with the condition
\[
\int_{|z|\ge 1} e^{z}\,\nu(dz) < \infty .
\]
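A minimal simulation sketch of these two dynamics, with an Euler scheme for \(X_1\) and compound Poisson jumps standing in for the Lévy process \(L_t\) (all parameter values are assumptions):

```python
import math
import random

random.seed(42)

def simulate(T=1.0, n=1000, mu=0.05, sigma=0.2,
             jump_rate=3.0, jump_mu=-0.4, jump_sd=0.1,
             x1_0=100.0, x2_0=100.0):
    """Euler scheme for dX1 = mu X1 dt + sigma X1 dB_t, and
    X2_t = X2_0 exp(L_t) with L_t a compound Poisson process (toy Levy)."""
    dt = T / n
    x1, L = x1_0, 0.0
    path1, path2 = [x1], [x2_0]
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))
        x1 += mu * x1 * dt + sigma * x1 * dB      # geometric Brownian motion
        if random.random() < jump_rate * dt:      # a jump of L in this step
            L += random.gauss(jump_mu, jump_sd)   # negative-mean jump sizes
        path1.append(x1)
        path2.append(x2_0 * math.exp(L))
    return path1, path2

p1, p2 = simulate()
```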
Here \(X\) gathers all operational losses, \(R\) is the revenue income process, and wealth \(W\) is the resultant of losses and gains:
\[
dX_t = dX_{1,t} + dX_{2,t},
\qquad
dW_t = dR_t - dX_t .
\]
The catastrophic losses arrive as jumps \(x_j\), so the cumulative loss at time \(t\) is
\[
X_t = \sum_{j=1}^{N(t)} x_j .
\]
Two controls act on the losses:
- reducing frequent losses by the loss reduction factor \(\theta_t\), whose cost is an expense \(G(L_t)\) (more fraud detection, personnel, etc.);
- reducing the impact of catastrophic events through insurance or recovery plans, at cost \(K(x_j)\): each loss \(x_j\) is replaced by \(H(\kappa)\,x_j\), where \(0 \le H(\kappa) \le 1\) and \(\kappa \ge 0\) is a fixed amount.
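To make the catastrophic-loss control concrete, here is a toy sketch; the functional forms of H and K below are illustrative assumptions, not the lecture's:

```python
def H(kappa):
    """Retained fraction of a catastrophic loss at insurance level kappa >= 0.
    Toy choice decreasing from 1 (no cover) toward 0, so 0 <= H(kappa) <= 1."""
    return 1.0 / (1.0 + kappa)

def K(x_j, premium_rate=0.02):
    """Toy cost of covering one jump loss x_j (proportional premium)."""
    return premium_rate * x_j

def borne_by_firm(x_j, kappa):
    """After insurance the firm bears H(kappa)*x_j plus the cover cost K(x_j)."""
    return H(kappa) * x_j + K(x_j)

cost = borne_by_firm(100.0, kappa=4.0)   # 1/(1+4)*100 + 0.02*100 = 22.0
```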
Wealth evolution is:
\[
dW_t = \big(R_t - X_1\,\mu(\theta_t)\big)\,dt + X_1\,\sigma(\theta_t)\,dB_t - \sum_{j=1}^{N(t)} K(x_j),
\]
written \(dW(t, \pi)\), where the control \(\pi\) is the given of a pair \(\big(\theta(t), \kappa(t)\big)\).
Objective is to maximise:
\[
V = \max_{\pi}\; E\left[\int_0^{\infty} \exp(-rt)\, U\big(W(t)\big)\,dt\right].
\]
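A Monte Carlo estimate of this objective for one fixed strategy, as a sketch: the arithmetic wealth drift, the log utility, and every number below are assumptions.

```python
import math
import random

random.seed(1)

def sample_discounted_utility(r=0.05, T=10.0, n=200,
                              w0=100.0, drift=2.0, sigma=5.0):
    """One draw of integral_0^T exp(-r t) U(W_t) dt with U = log,
    on a toy arithmetic-Brownian wealth path."""
    dt = T / n
    w, total = w0, 0.0
    for k in range(n):
        total += math.exp(-r * k * dt) * math.log(max(w, 1e-9)) * dt
        w += drift * dt + sigma * random.gauss(0.0, math.sqrt(dt))
    return total

V_hat = sum(sample_discounted_utility() for _ in range(500)) / 500
```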
Outline:
- Introducing processes: optimal control gives the big picture
- Modeling processes: process decomposition, equation for risk genesis
- Introducing Value: optimal control equations
- Solving for strategies
- Solving for price of risk
- Feature based reasoning
Let \(a_t = \pi(x_t)\). Then the expected reward is \(R(x, a) = E\big[r(x, a)\big]\).
Taking a null terminal value, the value function is the total of what can be expected in the future (here no discount is present):
\[
V\big(x, \pi(x_t), t\big) = \int_{t \ge 0} r\big(x, \pi(x_t), t\big)\,dt,
\qquad
V(x) = E\left[\sum_{t \ge 0} r\big(x_t, \pi(x_t)\big) \,\middle|\, x_0 = x\right].
\]
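On a toy two-state chain under a fixed policy (all numbers invented, with a finite horizon standing in for the null terminal value), this expectation can be estimated by rolling the chain forward:

```python
import random

random.seed(7)

# transitions under the fixed policy pi; rewards r(x, pi(x)) collapsed to r(x)
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.5), (1, 0.5)]}
r = {0: 1.0, 1: -2.0}

def rollout(x0, horizon=50):
    """One sample of sum_t r(x_t, pi(x_t)), null value at the horizon."""
    x, total = x0, 0.0
    for _ in range(horizon):
        total += r[x]
        u, acc = random.random(), 0.0
        for y, p in P[x]:
            acc += p
            if u < acc:
                x = y
                break
    return total

V0_hat = sum(rollout(0) for _ in range(2000)) / 2000
```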
The optimal value minimises the accumulated cost \(r(x_t, \pi(x_t))\) over the set \(A\) of admissible strategies:
\[
V^*(x_0) = \min_{\pi \in A} V^{\pi}(x_0).
\]
Value for a given strategy is the sum of the immediate reward and the discounted flow of possible future rewards, depending on the transition:
\[
V_{t+1}(x) = R\big(x, \pi_{t+1}\big) + \gamma \sum_{y} P_{x,y}\, V_t(y).
\]
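For a fixed strategy this recursion converges to the fixed point V = R + γPV; a minimal sketch on a two-state example (all numbers invented):

```python
# iterate V_{t+1}(x) = R(x) + gamma * sum_y P[x][y] * V_t(y) to its fixed point
gamma = 0.9
P = [[0.9, 0.1],
     [0.5, 0.5]]
R = [1.0, -2.0]

V = [0.0, 0.0]                     # start from the null terminal value
for _ in range(500):
    V = [R[x] + gamma * sum(P[x][y] * V[y] for y in range(2))
         for x in range(2)]
```

Because the update is a γ-contraction, 500 sweeps are far more than enough for convergence on two states.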
by reasoning in terms of discrete time. Alternately, in terms of discrete states \(y\), as possible outcomes of state \(x\), and introducing action \(a_t\):
\[
V^*(x) = \min_{a}\left[ r(a, x) + \gamma \sum_{y} P(x, y)\,V(y) \right],
\]
with \(\sum_{y} P(x, y) = 1\) and \(P(x, y) \ge 0\) for all \(x, y\).
We iterate on \(V\) since the problem is linear. Let \(\Phi_t\) be the proxy for \(V\) at time \(t\); we iterate thus:
\[
V_{t+1}(x) = \min_{\pi}\left[ r(x, \pi) + \gamma \sum_{y} P(x, y)\,\Phi_t(y) \right].
\]
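This is standard value iteration; a minimal sketch with two states and two actions (all costs and transition probabilities invented):

```python
# V_{t+1}(x) = min_a [ r(x,a) + gamma * sum_y P_a[x][y] * Phi_t(y) ]
gamma = 0.9
P = {"safe":  [[0.95, 0.05], [0.6, 0.4]],
     "cheap": [[0.7, 0.3],   [0.3, 0.7]]}
r = {"safe":  [0.5, 3.0],
     "cheap": [0.1, 2.0]}

phi = [0.0, 0.0]                   # Phi_t, the proxy for V
for _ in range(500):
    phi = [min(r[a][x] + gamma * sum(P[a][x][y] * phi[y] for y in range(2))
               for a in P)
           for x in range(2)]

# greedy strategy read off the converged values
policy = [min(P, key=lambda a: r[a][x] + gamma *
              sum(P[a][x][y] * phi[y] for y in range(2)))
          for x in range(2)]
```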
The same iteration can be written on Q-values:
\[
Q_{t+1}(s, a) = g(s, a) + \gamma \sum_{y} p(s, y)\, \min_{a'} Q_t(y, a').
\]
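A matching sketch on Q-values; the action-dependent transitions \(p_a\) and the min over next actions are standard Q-value-iteration conventions, filled in here as assumptions, and the numbers are invented:

```python
# Q_{t+1}(s,a) = g(s,a) + gamma * sum_y p_a(s,y) * min_{a'} Q_t(y,a')
gamma = 0.9
p = {"a0": [[0.9, 0.1], [0.5, 0.5]],
     "a1": [[0.6, 0.4], [0.2, 0.8]]}
g = {"a0": [1.0, 2.5],
     "a1": [0.2, 1.5]}

Q = {a: [0.0, 0.0] for a in p}
for _ in range(500):
    # the comprehension reads the old Q before the new dict is bound
    Q = {a: [g[a][s] + gamma * sum(p[a][s][y] * min(Q[b][y] for b in Q)
                                   for y in range(2))
             for s in range(2)]
         for a in p}

V = [min(Q[a][s] for a in Q) for s in range(2)]   # recovered state values
greedy = [min(Q, key=lambda a: Q[a][s]) for s in range(2)]
```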