Académique Documents
Professionnel Documents
Culture Documents
with Solutions and Code, supporting the 6-day intensive course ARPM Bootcamp
Attilio Meucci
attilio.meucci@arpm.co
Contents
0.1
1
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
ix
Univariate statistics
E 1 Pdf of an invertible transformation of a univariate random variable (www.1.1) . . . . . .
E 2 Cdf of an invertible transformation of a univariate random variable (www.1.1) . . . . . .
E 3 Quantile of an invertible transformation of a random variable (www.1.1) . . . . . . . . .
E 4 Pdf of a positive affine transformation of a univariate random variable (www.1.2) . . . .
E 5 Cdf of a positive affine transformation of a univariate random variable (www.1.2) . . . .
E 6 Quantile of a positive affine transformation of a univariate random variable (www.1.2) .
E 7 Characteristic function of a positive affine transformation of a univariate random variable
(www.1.2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 8 Pdf of an exponential transformation of a univariate random variable (www.1.3) . . . . .
E 9 Cdf of an exponential transformation of a univariate random variable (www.1.3)
. . . .
E 10 Characteristic function of an exponential transformation of a univariate random variable
(www.1.3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 11 Affine equivariance of the expected value (www.1.4) . . . . . . . . . . . . . . . . . . .
E 12 Affine equivariance of the median (www.1.4) . . . . . . . . . . . . . . . . . . . . . . .
E 13 Affine equivariance of the range (www.1.4) . . . . . . . . . . . . . . . . . . . . . . . .
E 14 Affine equivariance of the mode (www.1.4) . . . . . . . . . . . . . . . . . . . . . . . .
E 15 Expected value vs. median of symmetrical distributions (www.1.5) . . . . . . . . . . . .
E 16 Raw moments to central moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 17 Relation between the characteristic function and the moments (www.1.6) . . . . . . . .
E 18 First four central moments (www.1.6) . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 19 Central moments of a normal random variable . . . . . . . . . . . . . . . . . . . . . .
E 20 Histogram vs. pdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 21 Sum of random variables via the characteristic function . . . . . . . . . . . . . . . . .
E 22 Sum of random variables via simulation
. . . . . . . . . . . . . . . . . . . . . . . . .
E 23 Simulation of univariate random normal variable
. . . . . . . . . . . . . . . . . . . .
E 24 Simulation of a Student t random variable
. . . . . . . . . . . . . . . . . . . . . . . .
E 25 Simulation of a lognormal random variable . . . . . . . . . . . . . . . . . . . . . . . .
E 26 Raw moments of a lognormal random variable . . . . . . . . . . . . . . . . . . . . . .
E 27 Comparison of the gamma and chi-square distributions . . . . . . . . . . . . . . . . .
1
1
2
2
3
3
3
5
5
5
5
6
6
7
8
8
9
11
11
11
12
12
13
14
14
Multivariate statistics
E 28 Distribution of the grades (www.2.1) . . . . . . . . . . . . . . . . . . . . . . .
E 29 Simulation of random variables by inversion (www.2.1) . . . . . . . . . . . . .
E 30 Pdf of an invertible function of a multivariate random variable (www.2.2) . . . .
E 31 Cdf of an invertible function of a multivariate random variable (www.2.2)
. . .
15
15
15
16
17
ii
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
4
4
4
CONTENTS
E 32 Pdf of a copula (www.2.3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 33 Pdf of the normal copula
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 34 Cdf of a copula (www.2.3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 35 Cdf of the normal copula
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 36 Cdf of the lognormal copula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 37 Invariance of a copula (www.2.3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 38 Normal copula and given marginals . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 39 FX copula-marginal factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 40 Pdf of an affine transformation of a multivariate random variable (www.2.4) . . . . . .
E 41 Characteristic function of an affine transformation of a multivariate random variable
(www.2.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 42 Pdf of a non-invertible affine transformation of a multivariate random variable (www.2.4)
E 43 Characteristic function of a non-invertible affine transformation of a multivariate random
variable (www.2.4)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 44 Affine equivariance of the mode (www.2.5) . . . . . . . . . . . . . . . . . . . . . . . .
E 45 Affine equivariance of the modal dispersion (www.2.5) . . . . . . . . . . . . . . . . . .
E 46 Modal dispersion and scatter matrix (www.2.5) . . . . . . . . . . . . . . . . . . . . . .
E 47 Affine equivariance of the expected value (www.2.6) . . . . . . . . . . . . . . . . . . .
E 48 Affine equivariance of the covariance (www.2.6) . . . . . . . . . . . . . . . . . . . . .
E 49 Covariance and scatter matrix (www.2.6) . . . . . . . . . . . . . . . . . . . . . . . . .
E 50 Regularized call option payoff (www.2.7) . . . . . . . . . . . . . . . . . . . . . . . . .
E 51 Regularized put option payoff (www.2.7) . . . . . . . . . . . . . . . . . . . . . . . . .
E 52 Location-dispersion ellipsoid and geometry . . . . . . . . . . . . . . . . . . . . . . . .
E 53 Location-dispersion ellipsoid and statistics . . . . . . . . . . . . . . . . . . . . . . . .
E 54 The align of the enshrouding rectangle (www.2.8)
. . . . . . . . . . . . . . . . . . . .
E 55 The Chebyshevs inequality (www.2.9)
. . . . . . . . . . . . . . . . . . . . . . . . . .
E 56 Relation between the characteristic function and the moments (www.2.10)
. . . . . . .
E 57 Expected value and covariance matrix as raw moments
. . . . . . . . . . . . . . . . .
E 58 Pdf of a uniform random variable on the ellipsoid (www.2.11) . . . . . . . . . . . . . .
E 59 Characteristic function of a uniform random variable on the ellipsoid (www.2.11) . . .
E 60 Moments of a uniform random variable on the ellipsoid (www.2.11) . . . . . . . . . . .
E 61 Marginal distribution of a uniform random variable on the unit sphere (www.2.11) . . .
E 62 Characteristic function of a multivariate normal random variable (www.2.12)
. . . . .
E 63 Characteristic function of a multivariate normal random variable . . . . . . . . . . . .
E 64 Simulation of a multivariate normal random variable with matching moments
. . . . .
E 65 Pdf of the copula of a bivariate normal random variable (www.2.12) . . . . . . . . . .
E 66 Lognormal random variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 67 Pdf of the matrix-valued normal random variable (www.2.13) . . . . . . . . . . . . . .
E 68 Covariance of a matrix-valued normal random variable (www.2.13) . . . . . . . . . . .
E 69 Limit of the Student t distribution (www.2.14) . . . . . . . . . . . . . . . . . . . . . . .
E 70 Mode of a Cauchy random variable (www.2.15)
. . . . . . . . . . . . . . . . . . . . .
E 71 Modal dispersion of a Cauchy random variable (www.2.15) . . . . . . . . . . . . . . .
E 72 Pdf of a log-variable (www.2.16) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 73 Raw moments of a log-variable (www.2.16) . . . . . . . . . . . . . . . . . . . . . . . .
E 74 Relation between the Wishart and the gamma distributions (www.2.17) . . . . . . . . .
E 75 Simulation of a Wishart random variable . . . . . . . . . . . . . . . . . . . . . . . . .
E 76 Pdf of an inverse-Wishart random variable (www.2.17) . . . . . . . . . . . . . . . . . .
E 77 Characteristic function of the empirical distribution . . . . . . . . . . . . . . . . . . .
E 78 Order statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
iii
17
18
18
18
19
19
19
20
20
21
21
22
23
24
24
25
26
26
26
27
28
28
29
30
32
32
33
33
34
35
35
36
37
39
40
40
41
42
44
44
45
46
47
47
48
49
49
. .
.
. .
. .
. .
. .
. .
. .
. .
. .
. .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
49
50
53
53
54
54
54
55
56
57
57
58
58
59
60
61
61
63
63
64
66
67
67
68
69
70
70
73
73
73
74
76
78
78
79
79
80
81
81
83
83
84
84
85
85
86
87
CONTENTS
E 125 Correlation factors-residual: normal example . . . . . . .
E 126 Factors on demand: horizon effect . . . . . . . . . . . . .
E 127 Factors on demand: no-Greek hedging
. . . . . . . . . .
E 128 Factors on demand: selection heuristics . . . . . . . . . .
E 129 Spectral basis in the continuum (www.3.6) . . . . . . . . .
E 130 Eigenvectors for Toeplitz structure . . . . . . . . . . . . .
E 131 Numerical market projection . . . . . . . . . . . . . . . .
E 132 Simulation of a jump-diffusion process . . . . . . . . . . .
E 133 Simulation of a Ornstein-Uhlenbeck process . . . . . . . .
E 134 Simulation of a GARCH process . . . . . . . . . . . . . .
E 135 Equity market: quest for invariance . . . . . . . . . . . .
E 136 Equity market: multivariate GARCH process . . . . . . .
E 137 Equity market: linear vs. compounded returns projection I
E 138 Equity market: linear vs. compounded returns projection II
E 139 Fixed-income market: quest for invariance
. . . . . . . .
E 140 Fixed-income market: projection of normal invariants
. .
E 141 Fixed-income market: projection of Student t invariants
.
E 142 Derivatives market: quest for invariance
. . . . . . . . .
E 143 Derivatives market: projection of invariants . . . . . . . .
E 144 Statistical arbitrage: co-integration trading . . . . . . . .
4
v
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
87
87
89
90
91
94
94
97
97
98
98
98
99
100
100
100
102
104
104
105
106
106
107
108
109
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
111
112
113
113
114
115
115
116
117
118
119
120
121
122
123
125
127
129
130
130
130
132
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
132
132
133
134
136
137
138
140
141
143
144
144
145
145
146
146
147
147
148
149
149
150
150
Evaluating allocations
E 194 Gamma approximation of the investors objective (www.5.1) . . . . . . . . . . . . . .
E 195 Moments of the approximation of the investors objective (www.5.1) . . . . . . . . . .
E 196 Estimability and sensibility imply consistence with weak dominance (www.5.2) . . . .
E 197 Translation invariance and positive homogeneity imply constancy (www.5.2)
. . . . .
E 198 Consistence with weak dominance (www.5.3) . . . . . . . . . . . . . . . . . . . . . .
E 199 Positive homogeneity of the certainty-equivalent and utility functions (www.5.3) . . . .
E 200 Translation invariance of the certainty-equivalent and utility functions (www.5.3) . . .
E 201 Risk aversion/propensity of the certainty-equivalent and utility functions (www.5.3) . .
E 202 Risk premium in the case of small bets (www.5.3) . . . . . . . . . . . . . . . . . . . .
E 203 Dependence on allocation: approximation in terms of the moments of the objective
(www.5.3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 204 First-order sensitivity analysis of the certainty-equivalent I (www.5.3) . . . . . . . . .
E 205 First-order sensitivity analysis of the certainty-equivalent II . . . . . . . . . . . . . .
E 206 Second-order sensitivity analysis of the certainty-equivalent (www.5.3)
. . . . . . . .
E 207 Interpretation of the certainty-equivalent
. . . . . . . . . . . . . . . . . . . . . . . .
E 208 Certainty-equivalent computation I
. . . . . . . . . . . . . . . . . . . . . . . . . . .
E 209 Certainty-equivalent computation II . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 210 Constancy of the quantile-based index of satisfaction (www.5.4) . . . . . . . . . . . .
E 211 Homogeneity of the quantile-based index of satisfaction (www.5.4) . . . . . . . . . . .
E 212 Translation invariance of the quantile-based index of satisfaction (www.5.4) . . . . . .
E 213 Example of strong dominance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 214 Example of weak dominance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 215 Additive co-monotonicity of the quantile-based index of satisfaction (www.5.4) . . . .
E 216 Cornish-Fisher approximation of the quantile-based index of satisfaction (www.5.4)
.
151
151
155
157
158
159
160
160
162
162
163
164
164
166
167
167
168
169
169
170
170
171
171
171
CONTENTS
vii
172
173
176
176
177
178
179
179
181
181
181
182
183
184
185
186
187
187
188
188
189
Optimizing allocations
E 238 Feasible set of the mean-variance efficient frontier (www.6.1)
. . . . . . . . . . . . .
E 239 Maximum achievable certainty-equivalent with exponential utility I (www.6.1)
. . . .
E 240 Maximum achievable certainty-equivalent with exponential utility II (www.6.1) . . . .
E 241 Results on constrained optimization: QCQP as special case of SOCP (www.6.2)
. . .
E 242 Feasible set of the mean-variance problem in the space of moments (www.6.3) . . . . .
E 243 Reformulation of the efficient frontier with affine constraints (www.6.3)
. . . . . . . .
E 244 Least-possible variance allocation (www.6.3) . . . . . . . . . . . . . . . . . . . . . .
E 245 Highest-possible Sharpe ratio allocation (www.6.3) . . . . . . . . . . . . . . . . . . .
E 246 Geometry of the mean-variance efficient frontier (www.6.3) . . . . . . . . . . . . . . .
E 247 Reformulation of the efficient frontier with linear constraints (www.6.3) . . . . . . . .
E 248 Effect of correlation on the mean-variance efficient frontier: total correlation case
(www.6.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 249 Effect of correlation on the mean-variance efficient frontier: total anti-correlation case
(www.6.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 250 Total return efficient allocations in the plane of relative coordinates (www.6.5)
. . . .
E 251 Benchmark-relative efficient allocation in the plane of absolute coordinates (www.6.5)
E 252 Formulation of mean-variance in terms of returns I (www.6.6) . . . . . . . . . . . . .
E 253 Formulation of mean-variance in terms of returns II (www.6.6) . . . . . . . . . . . . .
E 254 Mean-variance pitfalls: two-step approach I
. . . . . . . . . . . . . . . . . . . . . .
E 255 Mean-variance pitfalls: two-step approach II . . . . . . . . . . . . . . . . . . . . . .
E 256 Mean-variance pitfalls: horizon effect . . . . . . . . . . . . . . . . . . . . . . . . . .
E 257 Benchmark driven allocation I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 258 Benchmark driven allocation II
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 259 Mean-variance for derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 260 Dynamic strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E 261 Buy and hold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
191
191
191
192
193
194
198
198
198
199
199
199
201
202
204
205
205
206
206
207
207
209
209
209
211
. . .
. . .
. .
. . .
. . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
211
213
213
214
215
217
217
219
221
223
224
225
227
228
232
234
235
237
238
239
240
241
Evaluating allocations
243
E 283 Optimal allocation as function of invariant parameters (www.8.1) . . . . . . . . . . . 243
E 284 Statistical significance of sample allocation (www.8.2) . . . . . . . . . . . . . . . . . 244
E 285 Estimation risk and opportunity cost . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Optimizing allocations
E 286 Allocation of the resampled allocation (www.9.1) . . . . . . . . . .
E 287 Probability bounds for the sample mean (www.9.2) . . . . . . . . .
E 288 Bayes rule (www.9.3) . . . . . . . . . . . . . . . . . . . . . . . .
E 289 Black-Litterman posterior distribution (www.9.3) . . . . . . . . . .
E 290 Black-Litterman conditional distribution (www.9.4) . . . . . . . . .
E 291 Black-Litterman conditional expectation (www.9.4) . . . . . . . . .
E 292 Black-Litterman conditional covariance (www.9.4) . . . . . . . . .
E 293 Computations for the robust version of the leading example (www.9.5)
E 294 Computations for the robust mean-variance problem I (www.9.6) . .
E 295 Computations for the robust mean-variance problem II (www.9.6) .
E 296 Restating the robust mean-variance problem in SeDuMi format
. .
E 297 Normal predictive distribution (www.9.7) . . . . . . . . . . . . . .
E 298 The robustness uncertainty set for the mean vector (www.9.8)
. . .
E 299 The robustness uncertainty set for the covariance matrix (www.9.8)
E 300 Robust Bayesian mean-variance problem (www.9.8)
. . . . . . . .
E 301 Robust mean-variance for derivatives . . . . . . . . . . . . . . . .
E 302 Black-Litterman and beyond I . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
246
246
246
247
248
251
252
253
253
255
257
258
259
262
264
267
267
268
CONTENTS
ix
270
0.1
Preface
This exercise book supports the review sessions of the 6-day intensive course Advanced Risk and Portfolio
R
Management (ARPM) Bootcamp
. The latest version of this exercise book is available at www.symmys.
com/node/170.
This exercise book complements the Textbook Risk and Asset Allocation - Springer, by Attilio Meucci
(Meucci, 2005). Each chapter of this exercise book refers to the respective chapter in the textbook. Icons
indicate if an exercise is theoretical or code-based. The number of stars corresponds to the difficulty of
an exercise.
R
The MATLAB
files and this exercise book are provided "as is": no claim of accuracy is made and no
R
responsibility is taken for possible errors. Both this exercise book and the MATLAB
scripts can and
must be used and distributed freely. Please quote the author and the source: "Attilio Meucci, ARPM Advanced Risk and Portfolio Management".
Any feedback is highly appreciated, please contact the author at .
Attilio Meucci is grateful to David Ardia for his help editing and consolidating this exercise book.
Chapter 1
Univariate statistics
E 1 Pdf of an invertible transformation of a univariate random variable (www.1.1)
Consider the following transformation of the generic random variable X:
X 7 Y g(X) ,
(1.1)
fY (y) =
fX (g 1 (y))
.
|g 0 (g 1 (y))|
(1.2)
Solution of E 1
By the definition (1.3, AM 2005) of the pdf fY we have:
fY (y)dy = P{Y [y, y + dy]}
= P{g(X) [y, y + dy]}
= P{X [g 1 (y), g 1 (y + dy)]}
Z g1 (y+dy)
=
fX (x)dx ,
(1.3)
g 1 (y)
where the third equality follows from the invertibility of the function g. On the other hand, from a Taylor
expansion we obtain:
g 1 (y + dy) = g 1 (y) +
1
dy .
g 0 (g 1 (y))
(1.4)
1
g 1 (y)+ g0 (g1
dy
(y))
g 1 (y)
fX (x)dx = fX (g 1 (y))
dy ,
0
1
g (g (y))
1
(1.5)
(1.6)
Solution of E 2
By the definition (1.7, AM 2005) of the cdf FY we have:
FY (y) P {Y y}
= P {g(X) y}
= P X g 1 (y)
(1.7)
= FX (g 1 (y)) ,
where the third equality follows from the invertibility of the function g, under the assumption that g is an
increasing function of its argument.
Note. In case g is a decreasing function of its argument we obtain:
FY (y) P{Y y}
= P{g(X) y}
= P{X g 1 (y)} = 1 P{X g 1 (y)}
(1.8)
= 1 FX (g 1 (y)) .
(1.9)
Solution of E 3
Consider the following series of identities that follow from the definition (1.7, AM 2005) of the cdf FY :
FY (g(QX (p))) P{Y g(QX (p))} = P{X QX (p)} = p ,
(1.10)
where the second equality follows from the invertibility of the function g, under the assumption that g
is an increasing function of its argument. By applying the FY1 to the first and last terms and using the
definition (1.17, AM 2005) of the quantile QY we obtain the desired result.
Note. In the case where g is a decreasing function of its argument we have:
QY (p) = g(QX (1 p)) .
(1.11)
(1.12)
ym
s
.
(1.13)
Solution of E 4
We use the fact that:
g 1 (y) =
ym
,
s
g 0 (x) = s ,
(1.14)
in (1.2).
ym
s
.
(1.15)
Solution of E 5
We use the fact that:
g 1 (y) =
ym
,
s
g 0 (x) = s ,
(1.16)
in (1.6).
(1.17)
Solution of E 6
We use the fact that:
g 1 (y) =
in (1.9).
ym
,
s
g 0 (x) = s ,
(1.18)
(1.19)
Solution of E 7
We use the definition (1.12, AM 2005) of the characteristic function:
o
n
Y () E{eiY } = E ei(m+sX) = eim E eisX .
(1.20)
(1.21)
show that:
fY (y) =
1
fX (ln(y)) .
y
(1.22)
Solution of E 8
We use the fact that:
g 1 (y) = ln(y) ,
g 0 (x) = ex ,
(1.23)
in (1.2).
(1.24)
Solution of E 9
We use the fact that:
g 1 (y) = ln(y) ,
in (1.6).
g 0 (x) = ex ,
(1.25)
(1.26)
Solution of E 10
We use the fact that:
g 1 (y) = ln(y) ,
g 0 (x) = ex ,
(1.27)
in (1.9).
Z
(m + s x)fX (x)dx = m
Z
fX (x)dx + s
x fX (x)dx
R
(1.28)
m + s E {X} .
1
1
= m + s QX
2
2
(1.29)
m + s Med {X} .
(1.30)
(1.31)
s Ran {X} .
yR
= argmax fX
yR
ym
s
1
fX
s
ym
s
= m + s argmax {fX (x)}
(1.32)
xR
m + s Mod{X} .
x
e
fX (x)dx +
fX (x)dx = 1 .
(1.33)
x
e
If x
e is the symmetry point, then both terms in the left-hand side are equal:
Z
2
x
e
fX (x)dx = 1 ,
(1.34)
and therefore:
Z
FX (e
x)
x
e
fX (x)dx =
1
.
2
(1.35)
Z
E{X}
x fX (x)dx
Z
=x
e + (x x
e)fX (x)dx
ZR
=x
e + u fX (e
x + u)du = x
e.
R
(1.36)
where the last integral in (1.36) is null since due to (1.28, AM 2005) we have:
Z
u fX (e
x u)du = 0 .
u fX (e
x + u)du =
(1.37)
Med{X} QX
1
1
1
= FX
=x
e.
2
2
(1.38)
n = 1, 2, . . . ,
(1.39)
n = 2, 3, . . . .
(1.40)
Determine how to map the first n raw moments into the first n central moments, and how to map the first
R
n central moments into the first n raw moments. Write two MATLAB
functions which implement the
respective mappings.
Solution of E 16
Consider first the mapping of the raw moments to the central moments. For n > 1, from the definition of
central moment (1.40) and the binomial expansion we obtain:
n
CMX
n E{(X E{X}) }
(n1
)
X
k
n
=E
(1)nk CMX
nk X + X
k=0
n1
X
k
(1)nk CMX
+ E {X n }
nk E X
k=0
n1
X
X
X
(1)nk CMX
nk RMk + RMn .
k=0
(1.41)
Now, for the mapping of the central moments to the raw moments, we have from (1.41) the following
recursive formula:
X
RMX
n = CMn +
n1
X
X
(1)nk+1 CMX
nk RMk ,
(1.42)
k=0
X
R
which is initialized with RMX
functions Raw2Central and Central2Raw for
1 = CM1 . See the MATLAB
the implementations of the mappings.
(i)k
RMX
k + ,
k!
(1.43)
dk X ()
.
d k =0
(1.44)
e
f
(x)dx
X
X
d k
d k
R
Z
= ik
eix xk fX (x)dx .
(1.45)
Therefore:
Z
dk X ()
k
=i
xk fX (x)dx = ik E X k ,
k
d
R
=0
(1.46)
and substituting this in (1.44) concludes the solution. The expected value is obtained as a special case
with k = 1.
CMX
k =
k
X
k!(1)kj
j=0
j!(k j)!
X kj
RMX
,
j (RM1 )
(1.47)
see e.g. Abramowitz and Stegun (1974), derive the following first four central moments:
X 2
X
CMX
2 = (RM1 ) + RM2
X 3
X
X
X
CMX
3 = 2(RM1 ) 3(RM1 )(RM2 ) + RM3
CMX
4
4
3(RMX
1 )
X
2
6(RMX
1 ) (RM2 )
4 RMX
1
(1.48)
RMX
3
+ RMX
4
and show that these expressions in turn allow to easily compute variance, standard deviation, skewness
and kurtosis.
Solution of E 18
Simply express formula (1.47) for k = 2, 3, 4. Expressions for the variance, standard deviation, skewness
and kurtosis follow from their definitions:
Var{X} E{(X E{X})2 } = CMX
2
q
p
X
Sd{X} Var{X} = CM2
(1.49)
(1.50)
Sk{X}
E{(X E{X})3 }
CMX
3
=
3/2
(Sd{X})3
CMX
2
(1.51)
Ku{X}
E{(X E{X})4 }
CMX
4
=
.
X 2
(Sd{X})4
CM2
(1.52)
(1.53)
X N(, 2 ) ,
(1.54)
of a normal distribution:
ezx fX (x)dx ,
(1.55)
(1.56)
is the raw moment. This follows from explicitly applying D on both sides of (1.55). The moment
generating function is the characteristic function X () defined in (1.12, AM 2005) evaluated at z/i:
MX (z) = X (z/i) .
(1.57)
Solution of E 19
First we focus on the raw moments of the standard normal distribution:
Y N(0, 1) .
1
(1.58)
2
From (1.69, AM 2005) and (1.57) we obtain MY (z) e 2 z . Computing the derivatives:
1
D0 MY (z) = e 2 z
D1 MY (z) = ze 2 z
D2 MY (z) = e 2 z + z 2 e 2 z
2
D3 MY (z) = z 3 e 2 z + 3ze 2 z
2
D4 MY (z) = 3e 2 z + 6z 2 e 2 z + z 4 e 2 z
1
(1.59)
RMYn
=
0
(n 1)!!
if n is odd
if n is even ,
(1.60)
(1.61)
(1.62)
X
n
because X = Y . Hence CMX
and:
n = RMn
CMX
n
=
0
n (n 1)!!
if n is odd
if n is even .
(1.63)
11
Xt = X,
t = 1, . . . T ,
(1.64)
and their realizations iT {x1 , . . . , xT }. Consider the histogram of the empirical pdf Em(iT ) stemming
from the realization iT , as defined in (1.119, AM 2005), where the width of all the bins is . Show that
the histogram represents a regularized version of the true pdf, rescaled by the factor T .
Solution of E 20
The Glivenko-Cantelli theorem (4.34, AM 2005) states that, under a few mild conditions, the empirical
distribution converges to the true distribution of X as the number of observations T goes to infinity. In
terms of the pdf, the Glivenko-Cantelli theorem reads:
f iT
T
1 X (xt )
fX .
T
T t=1
(1.65)
Denoting #
i the number of points included in the generic i-th bin, the following relation holds:
#
i
xi +
2
T
xi
2
(1.66)
X =Y +Z,
(1.67)
where Y and Z are independent. Compute the characteristic function X of X from the characteristic
functions Y of Y and Z of Z.
Solution of E 21
n
o
X () E eiX = E ei(Y +Z) = E eiY eiZ
= E eiY E eiZ Y ()Z () .
(1.68)
(1.69)
(1.70)
(1.71)
where 0.1 and 2 0.2. Consider the random variable defined as:
Z X +Y .
(1.72)
R
Write a MATLAB
script which you:
Generate a sample of 10,000 draws X from the Student t above, a sample Y of equal size from the
lognormal, sum them term by term (do not use loops) and obtain a sample Z of their sum;
Plot the sample Z. Do not join the observations (use the plot option . as in a scatter plot);
Plot the histogram of Z. Use hist and choose the number of bins appropriately;
Plot the empirical cdf of Z. Use [f,z] = ecdf(Z) and plot(z, f);
Plot the empirical quantile of Z. Use prctile.
Solution of E 22
R
See the MATLAB
script S_NonAnalytical.
R
Hint. Notice that the MATLAB
built-in functions take and 2 as inputs.
Solution of E 23
From (1.71, AM 2005) we have E {X} = and from (1.72, AM 2005) Var {X} = 2 . For the
R
implementation, see the MATLAB
script S_NormalSample.
(1.73)
Knowing that 2 6, determine and such that E {X} 2 and Var {X} 7. Then write a
R
MATLAB
script in which you:
Generate a sample X_a from (1.73) using the built-in Student t number generator;
Generate a sample X_b from (1.73) using the normal number generator, the chi-square number
generator and the following result:
Y
d
X =+ p
,
Z/
(1.74)
Z 2 ;
(1.75)
13
Generate a sample X_c of observations from (1.73) using the uniform generator number, tinv and
(2.27, AM 2005);
In a figure, subplot the histogram of the simulations of X_a, subplot the histogram of the simulations
of X_b and subplot the histogram of the simulations of X_c;
Compute the empirical quantile functions of the three simulations corresponding to the confidence
grid G {0.01, 0.02, . . . , 0.99};
In a separate figure superimpose the plots of the above empirical quantiles, which should coincide.
Use different colors.
q
2 .
Note. There is a typo in (1.90, AM 2005), which should be Sd{X} = 2
Solution of E 24
From (1.89, AM 2005) E{X} = and Var{X} =
script S_StudentTSample for the implementation.
2
2 ,
R
which leads to = 14. See the MATLAB
(1.76)
R
Write a MATLAB
function which determine and 2 from E{X} and Var{X}, and use it to determine
R
2
and such that E {X} 3 and Var {X} 5. Then write a MATLAB
script in which you:
Generate a large sample X from this distribution using lognrnd;
Plot the sample. Do not join the observations (use the plot option . as in a scatterplot);
Plot the histogram. Use hist and choose the number of bins appropriately;
Plot the empirical cdf. Use [f,x] = ecdf(X) and plot(x, f);
Superimpose (use hold on) the exact cdf as computed by logncdf. Use a different color;
Plot the empirical quantile. Use prctile;
Superimpose (use hold on) the exact quantile as computed by logninv. Use a different color.
R
Note. The MATLAB
built-in functions take and 2 as inputs.
Solution of E 25
From (1.98, AM 2005)-(1.99, AM 2005) we need to solve for and 2 the following system:
E = e+
2
2
V = e2+ (e 1) ,
(1.77)
or:
2 ln(E) = 2 + 2
2
ln(V ) = 2 + 2 + ln(e 1) .
(1.78)
Therefore:
ln
or:
V
E2
= ln(e 1) ,
(1.79)
V
2 = ln 1 + 2 .
E
(1.80)
1
V
= ln(E) ln 1 + 2 .
2
E
(1.81)
R
See the MATLAB
function LognormalMoments2Parameters and the script S_LognormalSample for the implementation.
(1.82)
n
Compute the raw moments RMX
n E {X } for all n = 1, 2, . . ..
Solution of E 26
From (1.94, AM 2005) we have:
d
X n = enY ,
(1.83)
(1.84)
n+n
RMX
n =e
2 /2
(1.85)
(1.86)
Determine for which values of , and 2 this distribution coincides with the chi-square distribution
with ten degrees of freedom?
Hint. We recall that such variable is defined in distribution as follows:
d
X = Y12 + + Y2 ,
d
(1.87)
Chapter 2
Multivariate statistics
E 28 Distribution of the grades (www.2.1)
Prove that the grade of X, U FX (X), is uniformly distributed on [0, 1]:
U U([0, 1]) .
(2.1)
Solution of E 28
From the standard uniform distribution defined in (1.54, AM 2005), we have to show that:
0
u
P {U u} =
if
if
if
u0
u [0, 1]
u 1.
(2.2)
We first observe that by the definition of the cdf (1.7, AM 2005) the variable U always lies in the interval
[0, 1], therefore:
P {U u} =
0
1
if
if
u0
u 1.
(2.3)
As for the remaining cases, from the definition of the quantile function (1.17, AM 2005) we obtain:
P {U u} = P {FX (X) u}
= P {X QX (u)}
(2.4)
= FX (QX (u)) = u .
QZ (U ) = Z ,
d
(2.5)
Solution of E 29
P {QZ (U ) z} = P {U FZ (z)}
= P {FZ (Z) FZ (z)}
(2.6)
= P {Z z} .
(2.7)
meaning that each entry yn gn (x) is a non-decreasing function of any of the arguments (x1 , . . . , xN ).
Show that:
fY (y) =
fX (g1 (y))
,
|Jg (g1 (y))|
(2.8)
gm (x)
.
xn
(2.9)
Solution of E 30
From the definition of the pdf (2.4, AM 2005) we can write:
fy (y)dy P {g(X) [y, y + dy]}
= P X g1 (y), g1 (y + dy)
Z
=
fX (x)dx .
(2.10)
(2.11)
Therefore:
Z
fY (y)dy =
fX (x)dx
[g1 (y),g1 (y)+[Jg (g1 (y))]1 dy]
1
= fX (g1 (y)) Jg (g1 (y)) dy ,
(2.12)
where the determinant accounts for the difference in volume between the infinitesimal parallelotope with
sides dy and the infinitesimal parallelotope with sides dx, see (A.34, AM 2005). Using (A.83, AM 2005)
we obtain the desired result.
Note. To compute the pdf of the variable Y, we do not need to assume that the function g is increasing.
Indeed, as long as g is invertible, it suffices to replace the absolute value of the determinant in (2.12).
Thus in this slightly more general case we obtain:
fX (g1 (y))
fY (y) = q
.
2
|Jg (g1 (y))|
17
(2.13)
(2.14)
Solution of E 31
From the definition (2.9, AM 2005) of the cdf FY we have:
FY (y) P {Y y} = P {g(X) y} = P X g1 (y) = FX (g1 (y)) .
(2.15)
(2.16)
where g is defined component-wise in terms of the cdf FXn of the the generic n-th component Xn :
gn (x1 , . . . , xN ) FXn (xn ) .
(2.17)
This is an invertible increasing transformation and we can use (2.8). From (1.17, AM 2005) the inverse
of this transformation is the component-wise quantile:
gn1 (u1 , . . . , uN ) QXn (un ) ,
(2.18)
By definition, the copula of X is the distribution of U. Since the pdf is the derivative of the cdf, the
Jacobian (2.9) of the transformation reads:
J = diag(fX1 , . . . , fXN ) ,
(2.19)
(2.20)
which yields:
fU (u1 , . . . , uN ) =
(2.21)
(2.22)
Pick and of your choice and plot the ensuing surface using surf.
Hint. See (2.30, AM 2005). Since is a generic N 1 vector and is a generic symmetric and positive
N N matrix, you need the multivariate normal distribution function. Use mvnpdf and norminv. For the
display, calculate the pdf value on each grid point, which gives you a 19 19 matrix.
Solution of E 33
R
script S_DisplayNormalCopulaPdf.
See the MATLAB
(2.23)
Solution of E 34
Use (2.14) with gn1 (u1 , . . . , un ) QXn (un ).
0
0
,
1
1
.
(2.24)
Pick as you please, but make sure to play around with the values 0.99, 0.99 and 0.
R
Write a MATLAB
script which evaluates the copula cdf at a select grid of bivariate values:
u G [0.05 : 0.05 : 0.95] [0.05 : 0.05 : 0.95] .
(2.25)
Do not call functions from within the script. In a separate figure, plot the ensuing surface using surf.
Hint. Calculate the cdf value on each grid point, which gives you a 19 19 matrix. Use (2.31, AM 2005)
and the built-in function mvncdf.
19
Solution of E 35
R
See the MATLAB
script S_DisplayNormalCopulaCdf.
(2.26)
(2.27)
(2.28)
On the other hand, the invariance property of the quantile (1.9), reads in this context:
QYn (un ) = hn (QXn (un )) .
(2.29)
X1 Ga(1 , 12 )
X2 LogN(2 , 22 ) ,
where 1 9, 12 2, 2 0 and 22 0.04;
(2.30)
In a separate figure, subplot the histogram of the simulations for X1 and subplot the histogram of
the simulations of X2 ;
Comment on how these histograms, which represent the marginal pdfs of X1 and X2 , change as
the correlation r of the normal distribution varies;
Scatter-plot the simulations of X1 against the respective simulations of X2 ;
Use hist3 to plot the respective 3D-histogram to visualize the joint pdf of X1 and X2 ;
Plot the histogram of the grade of X1 and subplot the histogram of the grade of X2 ;
Scatter-plot the simulations of the grade of X1 against the respective simulations of the grade of
X2 .
Hint. You are asked to generate a bivariate sample, which has a marginal gamma distribution and a lognormal distribution but with a copula which is the same as the copula from a bivariate normal distribution.
You will notice that the correlation of this normal distribution is r, but no other information is provided
on the expected values or the standard deviations. Why? See (2.38, AM 2005). Therefore, first generate a
bivariate normal distribution sample with correlation r; then calculate its copula using (2.28, AM 2005);
finally remap it to the bivariate distribution you want using (2.34, AM 2005).
Solution of E 38
R
See the MATLAB
script S_BivariateSample.
E 39 FX copula-marginal factorization
R
script in which you:
Write a MATLAB
Load from DB_FX the daily observations of the foreign exchange rates USD/EUR, USD/GBP and
USD/JPY. Define as variables the daily log-changes of the rates;
Represent the marginal distribution of the three variables and display the respective histograms;
Represent the copula of the three variables and display the scatter-plot of the copula of all pairs of
variables.
Hint. Applying the marginal cdf to the simulations of a random variable is equivalent to sorting.
Solution of E 39
R
See the MATLAB
script S_FxCopulaMarginal.
(2.31)
fY (y) =
fX (B1 (y m))
p
.
|BB0 |
(2.32)
Jg B .
(2.33)
Solution of E 40
In this case the Jacobian (2.9) is:
21
(2.34)
fY (y) =
fX (B1 (y m))
p
.
|BB0 |
(2.35)
E 41 Characteristic function of an affine transformation of a multivariate random variable (www.2.4) (see E 40)
Consider the same setup than in E 40. Show that:
0
Y () = ei m X (B0 ) .
(2.36)
Solution of E 41
From the definition (2.13, AM 2005) of the characteristic function:
n 0 o
Y () E ei Y
n 0
o
= E ei (m+BX)
n
o
0
0
0
= ei m E ei(B ) X
(2.37)
= ei m X (B0 ) .
(2.38)
rank(B) 6= N dim(X) .
(2.39)
where:
(2.40)
Solution of E 42
The distribution of is the marginal distribution of any invertible affine transformation that extends
(2.40):
Y2
..
.
BX .
(2.41)
YN
For example, we can extend (2.40) defining B as follows:
b1
0N 1
(b2 , . . . , bN )
IN 1
,
(2.42)
f () =
RN 1
(2.43)
Nevertheless, it is in general very difficult to perform this last step, as it involves a multiple integration.
For instance, if b1 6= 0 we can choose the extension B according to (2.42) we obtain:
B1 =
1
b1
0N 1
N)
(b2 ,...,b
b1
IN 1
,
(2.44)
Z
fX
RN 1
b2
bN
y2
yN , y2 , . . . , yN
b1
b1
b1
dy2 dyN .
(2.45)
E 43 Characteristic function of a non-invertible affine transformation of a multivariate random variable (www.2.4) (see E 42)
Consider the same setup than in E 42. Determine the expression for the characteristic function .
Solution of E 43
The characteristic function of (2.40) is obtained by setting to zero in (2.36) the dependence on the ancillary variables (2.41) as in (2.24, AM 2005):
() = Y (, 0N 1 )
= X B0
.
0N 1
(2.46)
23
For instance, if we choose the extension B according to (2.42) we obtain from (2.46) that the characteristic
function of (2.40) reads:
() = X (b) .
(2.47)
(2.48)
of the N -dimensional random variable X. Prove that the mode is affine equivariant, i.e. it satisfies (2.51,
AM 2005), which in this context reads:
Mod {a + BX} = a + B Mod {X} .
(2.49)
Hint. From (2.32) we derive the vector of the first order derivatives of the pdf of Y in terms of the pdf of
X:
fY (y)
(B0 )1 fX
=p
.
y
|BB0 | x x=B1 (ya)
(2.50)
(2.51)
Solution of E 44
By its definition (2.52, AM 2005), the mode Mod {X} is the maximum. Thus it is determined by the
following first order condition:
fX
= 0.
x x=Mod{X}
(2.52)
fY
(B0 )1 fX
p
=
= 0.
y y=a+B Mod{X}
|BB0 | x x=Mod{X}
(2.53)
(2.54)
MDis {X}
!1
2 ln fX
xx0 x=Mod{X}
!1
1 fX
=
x fX x0 x=Mod{X}
!1
1 2 fX
1 fX fX
=
2
fX xx0 x=Mod{X} fX
x x0 x=Mod{X}
!1
2 fX
.
= fX (Mod {X})
xx0 x=Mod{X}
(2.55)
= fY (Mod {Y})
= fY (Mod {Y})
!1
2 fY
yy0 y=Mod{Y}
!1
(B0 )1 2 fX
1
p
B
|BB0 | xx0 x=B1 (Mod{Y}a)
!1
(B0 )1 2 fX
1
p
B
.
|BB0 | xx0 x=Mod{X}
(2.56)
fX (Mod {X})
MDis {Y} = p
|BB0 |
(2.57)
(2.58)
25
Solution of E 46
It is immediate to check from the definition (2.65, AM 2005) that the modal dispersion is a symmetric
matrix. Furthermore, the mode is a maximum for the log-pdf, and therefore the matrix of the second
derivatives of the log-pdf at the mode is negative definite. Therefore, the modal dispersion is positive
definite. Affine equivariance, symmetry and positivity make the modal dispersion a scatter matrix.
(2.59)
(2.60)
e and (N K) elements to a
e.
Hint. Consider adding (N K) non-collinear rows to B
Solution of E 47
e (N K) elements a to a
e and denoting Y a set of (N K)
Adding (N K) non-collinear rows B to B,
ancillary random variables as follows:
Y
e
Y
Y
,
e
a
a
,
e
B
B
,
(2.61)
(2.62)
(2.63)
fX (x)
e p
(e
a + Bx)
|B| dx
|BB0 |
RN
Z
e
e+B
=a
xfX (x)dx
(2.64)
RN
e E {X} .
e+B
=a
(2.65)
Moreover, prove that the covariance matrix is a scatter matrix, i.e. it is affine equivariant, symmetric and
positive definite.
Solution of E 48
From the definition of covariance (2.67, AM 2005) and the equivariance of the expected value (2.60) we
obtain:
n
o
n
o
n
o0
e
e E a
e
e E a
e
e + BX
e + BX
e + BX
e + BX
e + BX
Cov a
E
a
a
n
o
e
e0
= E B(X
E {X})(X E {X})0 B
(2.66)
e Cov {X} B
e0 ,
=B
where the last equality follows from the linearity of the expectation operator (B.56, AM 2005).
(2.67)
which proves the positiveness of the covariance matrix. Affine equivariance, symmetry and positivity
make the covariance matrix a scatter matrix.
C (x) =
(x K)
2
2
1
xK
1 + erf
+ e 22 (xK) .
2
2
2
(2.68)
27
i
h
C (x) C (0) (x)
Z +
2
1
1
=
max(y K, 0)e 22 (xy) dy
2
Z +
2
1
1
(y K)e 22 (yx) dy
=
2 K
Z +
2
1
1
(u + x K)e 22 u du
=
2 Kx
Z +
Z +
2
2
1
1
1
1
ue 22 u du +
e 22 u du
(x K)
=
2 Kx
2
Kx
Z +
Z +
2
d h 2 12 u2 i
1
(x K) 2
e 2
ez dz ,
=
du +
Kx
2
2 Kx du
2
(2.69)
2
where in the last line we used the change of variable u/ 22 z. Using the relation (B.78, AM 2005)
between the complementary error function and the error function, as well as (B.76, AM 2005), i.e. the
fact that the error function is odd, we obtain the desired result.
P (x) =
(x K)
2
1 erf
xK
22
2
1
+ e 22 (xK) .
2
(2.70)
=
(y K)e 22 (yx) dy
2
Z Kx
2
1
1
=
(u + x K)e 22 u du
2
Z Kx
Z Kx
2
2
1
1
1
1
=
ue 22 u du
(x K)
e 22 u du
2
2
"
#
Kx
Z Kx
Z
h
i
2
2
(x K) 2
1
d 2 12 u
22
z
du
=
e 2
e dz ,
2
2 du
(2.71)
where in the last line we used the change of variable u/ 22 z. Using the relation (B.78, AM 2005)
between the complementary error function and the error function, as well as (B.76, AM 2005), i.e. the
fact that the error function is odd, we obtain the desired result.
(2.72)
Solution of E 52
The vector m represents the center of ellipsoid. The eigenvectors are the directions of the principal axes
of the ellipsoid. The square root of the eigenvalues are the length of the principal axes of the ellipsoid.
There is no statistical interpretation, as long as m and S are not the expected value and the covariance
matrix respectively of a multivariate distribution.
ln(X) St(, , ) ,
(2.73)
where 40, 0.5 and diag() 0.01 (you can choose the off-diagonal element). Consider the
generic vector in the plane:
e
cos
sin
.
(2.74)
Consider the random variable Z e0 X, namely the projection of X on the direction e . In the same
R
MATLAB
script:
Compute and plot the sample standard deviation of Z as a function of [0, ] (select a grid
of 100 points);
Show in a figure that the minimum and the maximum of are provided by versors (normalized
vector) parallel to the principal axes of the ellipsoid defined by the sample mean m and the sample
covariance S as plotted by the function TwoDimEllipsoid;
Compute the radius r , i.e. the distance between the surface of the ellipsoid and the center of the
ellipsoid along the direction of the vector as a function of [0, ] (select a grid of 100 points);
In a separate figure superimpose the plot of and the plot of r , showing that the minimum and the
maximum of (i.e. the minimum and the maximum volatility), correspond to the the minimum
and the maximum of r respectively (i.e. the length of the smallest and largest principal axis).
Notice that the radius equals the standard deviation only on the principal axes.
R
Hint. You will have to shift and rescale the output of the MATLAB
function mvtrnd. Also, to compute
r notice that it satisfies:
29
(r e )0 S1 (r e ) = 1 .
Solution of E 53
R
See the MATLAB
script S_MaxMinVariance.
(2.75)
(2.76)
(2.77)
(2.78)
To find the tangency condition of the ellipsoid with the rectangle we compute the gradient of the implicit
representation of EE,Cov :
g
= 2 Cov1 (x E) .
x
(2.79)
Since the generic n-th side of the rectangle is perpendicular to the n-th axis, when the gradient is parallel
to the n-th axis, the rectangle is tangent to the ellipsoid. Therefore, to find the tangency condition we
must impose the following condition:
Cov1 (x E) = (n) ,
(2.80)
where is some scalar that we have to compute and (n) is the n-th element of the canonical basis of
RN , see (A.15, AM 2005). To compute we substitute (2.80) in (2.77):
1 = (x E)0 Cov1 (x E)
= ( Cov (n) )0 Cov1 ( Cov (n) )
(2.81)
= Var {Xn } ,
so that:
1
.
Sd {Xn }
(2.82)
=
.
Sd {Xn }
Sd {Xn }
(2.83)
q
RN /Ev,U
q
RN /Ev,U
q 2 fX (x)dx
(X v)0 U1 (x v)fX (x)dx
(2.84)
a(v, U) .
Notice that we can re-write a(v, U) as follows:
a(v, U) = tr(E {(X v)(X v)0 } U1 ) .
(2.85)
(2.86)
Now we prove that the minimum of (2.85) is (2.86). In other words, among all possible vectors v and
symmetric, positive matrices U such that:
|U| = |Cov {X}| ,
(2.87)
the minimum value of (2.85) is achieved by the choice v E {X} and U Cov {X}. Consider an
arbitrary vector u and a perturbation:
v 7 v + u .
If v minimizes (2.85), in the limit 0 we must have:
(2.88)
31
0 = E (X (v + u))0 U1 (X (v + u))
E (X v)0 U1 (X v)
2 E u0 U1 (X v) = 2u0 U1 (E {X} v) ,
(2.89)
(2.90)
U 7 U(I + B) ,
(2.91)
where I is the identity matrix and B is a matrix that preserves the volumes. From (A.77, AM 2005) this
means:
|U(I + B)| = |U| .
(2.92)
In the limit of small perturbations 0, from (A.122, AM 2005) this condition becomes:
tr(B) = 0 .
(2.93)
) tr(Cov {X} U1 )
(2.94)
(2.95)
(2.96)
X () = 1 + i
N
X
n RMX
n +
n=1
N
X
i
(n1 nk ) RMX
+
n1 nk + .
k! n ,...,n =1
k
(2.97)
where RMX
n1 nk is defined as follows:
RMX
n1 nk
k X ()
.
n1 nk =0
(2.98)
By performing the derivatives on the definition (2.13, AM 2005) of the characteristic function we obtain:
Z
0
k
k
{X ()}
ei x fX (x)dx
n1 nk
n1 nk
RN
Z
0
= ik
xn1 xnk ei X fX (x)dx .
(2.99)
RN
Therefore:
Z
k X ()
k
=i
xn1 xnk fX (x)dx = ik E {Xn1 Xnk } .
n1 nk =0
RN
(2.100)
(2.101)
E {Xn } =
On the other hand, the k-th central moment:
RMX
n
1 X ()
=
.
i n =0
(2.102)
33
CMX
n1 nk E {(Xn1 E {Xn1 }) (Xnk E {Xnk })} ,
(2.103)
is a function of the raw moments of order up to k, a generalization of (1.47). Similarly k-th raw moment
is a function of the central moments of order up to k. These statements follow by expanding the products
in (2.103) and inverting the ensuing triangular transformation. In particular for the covariance matrix,
which is the central moment of order two, we obtain:
X
X
X
Cov {Xm , Xn } = CMX
mn = RMmn RMm RMn ,
(2.104)
RMX
mn
2 X ()
.
=
m n =0
(2.105)
(2.106)
1
IE (X) ,
VN 0,I
(2.107)
VN
,
N
( 2 + 1)
(2.108)
where is the gamma function (B.80, AM 2005). With the transformation X 7 Y + BX, where
BB0 , we obtain a variable Y that is uniformly distributed on the ellipsoid E, , Y U(E, ), and
the pdf of Y is obtained by applying (2.8) to (2.107).
Solution of E 59
Using Fang et al. (1990, result 2.9) and (2.119) the characteristic function of a variable X uniformly
distributed on the unit sphere in RN is given by:
o
n 0
n 0 o
() E ei X = E ei XN
Z +
0
=
ei xN f (xN )dxN
=
Z +1
( N2+2 )
N 1
cos( 0 x)(1 x2 ) 2 dx
1
N +1
( 2 ) 2 1
Z +
( N2+2 )
N 1
+ i N +1 1
sin( 0 x)(1 x2 ) 2 dx .
( 2 ) 2
(2.109)
The last term vanishes due to the symmetry of (1 x2 ) around the origin. From (B.89, AM 2005) and
(B.82, AM 2005) we have:
B
1 N +1
,
2
2
( 21 )( N2+1 )
=
=
( N2+2 )
( N2+1 )
.
( N2+2 )
(2.110)
+1
N 1
cos( 0 x)(1 x2 ) 2 dx .
(2.111)
With the transformation X 7 Y + BX, where BB0 , we obtain a variable Y that is uniformly
distributed on the ellipsoid E, , Y U(E, ), and the characteristic function of Y is obtained by
applying (2.36) to (2.111).
(2.112)
where from (2.259, AM 2005), R kXk and U X/ kXk are independent and U is uniformly
distributed on the surface of the unit ball E0N ,IN . From (2.228) in E 80 we obtain:
E {X} = E {R} E {U} = 0 .
(2.113)
Cov {X} = E R2 UU0 = E R2 Cov {U} .
(2.114)
Similarly:
35
(2.115)
r Nr
N 1
Z
dr = N
rN +k1 dr =
N
.
N +k
(2.116)
Cov {X} =
N IN
IN
=
.
N +2 N
N +2
(2.117)
= E {RUn1 RUnk }
= E Rk E {Un1 Unk }
N
E {Un1 Unk } ,
=
N +k
(2.118)
and then using (2.227). With the transformation X 7 Y + BX, where BB0 , we obtain a
variable Y that is uniformly distributed on the ellipsoid E, , Y U(E, ), and the expected value of
Y is obtained by applying (2.56, AM 2005) to (2.113) and the covariance is obtained by applying (2.71,
AM 2005) to (2.117).
f (xK+1 , . . . , xN ) =
( N2+2 )
( K+2
2 )
N K
2
N
X
! K2
x2n
(2.119)
n=K+1
where:
N
X
x2n 1 .
(2.120)
n=K+1
Y () = ei 2 .
(2.121)
Solution of E 62
Consider first a univariate standard normal variable X N(0, 1). Its characteristic function reads:
() E{eiX }
Z +
x2
1
eix e 2 dx
=
2
Z +
2
1
1
=
e 2 (x 2ix) dx
2
Z +
2
2
1
1
=
e 2 [(xi) + ] dx
2
Z +
2
1
12 2 1
=e
e 2 (xi) d(x i)
2
1
= e 2
(2.122)
Consider now a set of N independent standard normal variables X (X1 , . . . , Xn )0 . By definition, their
juxtaposition is a standard N -dimensional normal random vector:
X N(0, I) .
(2.123)
Therefore:
N
N
n 0 o Y
Y
1 2
1 0
e 2 n = e 2 .
() E ei X =
E ein Xn =
(2.124)
n=1
n=1
RN
2
| 0 s| m (s)ds .
(2.125)
37
Solution of E 63
First note that we have:
Z
1
ss m (s)ds
4
0
RN
ss0
RN
N
X
(vn ) + (vn ) (s)ds
n=1
N
X
1
1
1
1
1
vn vn0 = VV0 = E 2 2 E0
2 n=1
2
2
1
.
2
(2.126)
Therefore:
Z
| 0 s| m (s)ds =
RN
( 0 s)(s0 )m (s)ds
Z
0
0
=
ss m (s)ds
RN
(2.127)
RN
1 0
.
2
0
1 0
2
| 0 s| m (s)ds = ei 2 N
, () .
(2.128)
(2.129)
R
where and are arbitrary. Write a MATLAB
script in which you generate a large number of scenarios {Xj }j=1,...,J from the distribution (2.129) in such a way that the sample mean and covariance:
J
1X
Xj ,
J j=1
J
X
b 1
(Xj )(Xj )0 ,
J j=1
(2.130)
satisfy:
b ,
b .
(2.131)
Hint. At a certain point, you will need to solve a Riccati align, which can be solved as follows. First
define the Hamiltonian matrix
H
b
0
.
(2.132)
(2.133)
where UU0 I and T is upper triangular with the eigenvalues of H on the diagonal sorted in such a
way that the first N have negative real part and the remaining N have positive real part; the terms in this
R
decomposition are similar in nature to principal components and are computed by MATLAB
. Then the
solution of the Riccati align (2.138) reads:
B ULL U1
UL ,
(2.134)
where UU L is the upper left N N block of U and ULL is the lower left N N block of U.
Solution of E 64
First produce an auxiliary set of scenarios:
e j}
{Y
j=1,..., J
(2.135)
from the distribution N(0, ). Then complement these scenarios with their opposite
(
ej
Y
ej
Y
e J
Y
j
if 1 j J/2
if J/2 + 1 j J .
(2.136)
These antithetic variables still represent the distribution N(0, ), but they are more efficient as they satisfy
e j , which again preserves
the zero-mean condition. Next apply a linear transformation to the scenarios Y
normality:
e j,
Yj BY
j = 1, . . . , J .
(2.137)
For any choice of the invertible matrix B, the sample mean is null. To determine B we impose that the
sample covariance matches the desired covariance. Using the affine equivariance of the sample covariance
which follows from (4.42, AM 2005), (4.36, AM 2005), (2.67, AM 2005) and (2.64, AM 2005), we obtain
the matrix Riccati align:
b
BB,
B B0 .
(2.138)
With the solution (2.134) we can perform the affine transformation (2.137) and finally generate the desired
scenarios:
Xj + Yj ,
j = 1, . . . , J ,
(2.139)
R
which satisfy (2.131). See the MATLAB
function MvnRnd and the script S_ExactMeanAndCovariance for an
implementation of this methodology.
39
f N (u1 , u2 ) = p
exp(g (u1 , u2 )) ,
1 2
(2.140)
where:
0
erf 1 (2u1 1)
g (u1 , u2 )
erf 1 (2u2 1)
erf 1 (2u1 1)
.
erf 1 (2u2 1)
1
1
1
1
0
0
1
!
(2.141)
Solution of E 65
From (2.30, AM 2005), the pdf of the normal copula reads:
f N (u1 , u2 ) =
N
f,
(QN
(u1 ), QN
(u2 ))
1 , 2
2 , 2
1
fN1 ,2 (QN
(u1 ))fN2 ,2 (QN
(u2 ))
1 , 2
2 , 2
1
(2.142)
where Q is the quantile (1.70, AM 2005) of the marginal one-dimensional normal distribution:
QN
, 2 (u) = +
2 2 erf 1 (2u 1) .
(2.143)
N
From the expression (2.170, AM 2005) of the two dimensional joint normal pdf f,
we obtain:
N
N
f,
(QN
1 , 2 (u1 ), Q2 , 2 (u2 )) =
1
1 z2 +z2
(12 22 (1 2 )) 2 21 z1 2z
(12 )
e
,
2
2
(2.144)
where:
zi
2 erf 1 (2ui 1)
(i = 1, 2) .
(2.145)
On the other hand, from the expression (1.67, AM 2005) of the marginal pdf we obtain:
1
2 2
fNi ,2 (QN
e
i , 2 (ui )) = (2i )
i
zi2
2
(2.146)
Therefore:
f N (u1 , u2 ) = p
where:
1
1 2
exp(g (u1 , u2 )) ,
(2.147)
0
erf 1 (2u1 1)
g (u1 , u2 )
erf 1 (2u2 1)
erf 1 (2u1 1)
.
erf 1 (2u2 1)
1
1
1
1
0
0
1
!
(2.148)
(2.149)
R
Write a MATLAB
function that computes m E {X}, S Cov {X} and C Corr {X} as functions
of the generic inputs , .
Solution of E 66
R
function LognormalParam2Statistics.
See the MATLAB
K
2
NK
2
|N |
N
2
e 2 tr{SK
1
|SK |
(XM)0 1
N (XM)}
(2.150)
Solution of E 67
From the definition (2.180, AM 2005) and the definition of the normal pdf (2.156, AM 2005) we have:
f (X) f (vec(X))
(2)
e
NK
2
|SK N |
12
(2.151)
From the property (A.102, AM 2005) of the Kronecker product we can write:
|SK N |
12
N
2
= |SK |
K
2
|N |
(2.152)
Furthermore, from the property (A.101, AM 2005) of the Kronecker product, we can write:
1
(SK N )1 = S1
K N .
(2.153)
NK
2
|SK |
N
2
K
2
|N |
0
e 2 {(vec(X)vec(M)) (SK
1
1
N )(vec(X)vec(M))}
(2.154)
.
41
N 1
N ,
K S1
K ,
(2.155)
and recalling the definition (A.96, AM 2005) of the Kronecker product, and the definition (A.104, AM
2005) of the "vec" operator, the term in curly brackets in (2.154) can be written as follows:
{ } vec(Y)0 (K N ) vec(Y)
11
..
(1) 0
(K) 0
..
(Y
Y
)
.
.
K1
X
0
=
Y(k) (kj )Y(j)
1K
Y(1)
..
..
.
.
(K)
KK
Y
(2.156)
k,j
Ynk kj nm Ymj
n,m,k,j
mn Ynk kj Ymj
n,m,k,j
= tr {YY0 } = tr {Y0 Y} .
= Cov X(j1)N +m , X(k1)N +n
= (SK N )(j1)N +m,(k1)N +n
S11 S1K
..
..
..
=
.
.
.
SK1
SKK
(2.157)
(j1)N +m,(k1)N +n
= Sj,k m,n .
This proves that if X N(M, , S) then:
n
o
Cov X(j) , X(k) = Sj,k .
On the other hand, from the following identities:
(2.158)
f (X) (2)
NK
2
= (2)
NK
2
K
2
|N |
N
2
|SK |
|N |
N
2
e 2 tr{SK
K
2
e 2 tr{N
|SK |
(XM)0 1
N (XM)}
0
(XM)S1
K (XM) }
(2.159)
,
we see that if X N(M, , S), then X0 N(M0 , S, ). Using (2.158) and the fact that the columns of
X0 are the rows of X we thus obtain:
Cov X(m) , X(n) = mn S .
(2.160)
(2.161)
where the term on the left hand side is the matrix-variate Student t distribution (2.198, AM 2005) and the
term on the right hand side is the matrix-variate normal distribution (2.181, AM 2005).
Note. The above result immediately proves the specific vector-variate case. Indeed, from (2.183, AM
2005) and (2.201, AM 2005) we obtain:
St(, m, ) = St(, m, , 1) = N(m, , 1) = N(m, ) .
(2.162)
In turn, since the vector-variate pdf (2.188, AM 2005) generalizes the one-dimensional pdf (1.86, AM
2005) we also obtain St(, m, 2 ) = N(m, 2 ).
Note. The generalization of the Student t distribution to matrix-variate random variables was studied by
Dickey (1967). Our definition of the pdf corresponds in the notation of Dickey (1967) to the following
special case:
pN,
q K,
m+N,
Q S ,
P 1 .
(2.163)
If X St(, M, , S) then:
E {X} = M
n
o
(2.164)
Solution of E 69
To prove (2.161) we start using (A.122, AM 2005) in the definition (2.199, AM 2005) of the pdf of a
matrix-valued Student distribution St(, M, , S). In the limit we obtain:
K
2
f (X) ||
K
2
||
N
2
|S|
N
2
|S|
43
+N
1
2
IK + S1 (X M)0 (X M)
+N
2
1
1
0 1
,
1 + tr(S (X M) (X M))
(2.165)
1
K+1
( 2 )
( 2 )
( 2 )
NK
2
() ()
(2.166)
ex = lim
1+
x n
,
n
(2.167)
St
f,,,S
(X) || 2 |S| 2
21
1
1
0 1
1 + tr(S (X M) (X M))
K
2
||
|S|
N
2
e 2 tr(S
(XM)0 1 (XM))
(2.168)
Turning now to the normalization constant (2.166), the following approximation holds in the limit n
, see e.g. www.mathworld.com:
1
n+
n(n) .
2
(2.169)
Applying this result recursively we obtain in the limit n the following approximation:
n+N
2
n N2
2
n
2
(2.170)
Applying this to the normalization constant (2.166) we obtain in the limit the following approximation:
( ) ()
N2K
N2
2
()
NK
2
= (2)
NK
2
N2K
K +1
2
N2
(2.171)
2
.
Thus in the limit the pdf of the matrix-variate Student t distribution St(, M, , S) reads:
St
f,,,S
(X) (2)
NK
2
K
2
||
N
2
|S|
e 2 tr(S
(XM)0 1 (XM))
(2.172)
(2.173)
Solution of E 70
The logarithm of the Cauchy pdf (2.209, AM 2005) reads:
N +1
ln(1 + (x )0 1 (x )) ,
(2.174)
2
where is a constant which does not depend on x. The first order derivative of the log pdf function reads:
Ca
ln f,
(x) =
Ca
(x)
ln f,
1 (x )
= (N + 1)
.
x
1 + (x )0 1 (x )
(2.175)
1
.
N +1
(2.176)
Solution of E 71
The logarithm of the Cauchy pdf (2.209, AM 2005) reads:
N +1
ln(1 + (x )0 1 (x )) ,
2
where is a constant which does not depend on x. The Hessian of the log-Cauchy pdf reads:
Ca
ln f,
(x) =
Ca
2 ln f,
(x)
(x )0 1
=
(N
+
1)
xx0
x 1 + (x )0 1 (x )
1
(x )0 1
= (N + 1)
1
0
x
1 + (x ) (x )
1
(N + 1)
(x )0 1
x 1 + (x )0 1 (x )
= (N + 1)
1
1 + (x )0 1 (x )
(N + 1)
21 (x )(x )0 1
.
(1 + (x )0 1 (x ))2
(2.177)
(2.178)
45
= (N + 1)1 .
(2.179)
x=Mod{X}
Ca
2 f,
(x)
MDis {X}
xx0
x=Mod{X}
1
.
N +1
(2.180)
(2.181)
where the exponential is defined component-wise. Show that the pdf of the log distribution reads:
fX (ln(Y))
,
fY (Y) = QN
n=1 yn
(2.182)
and find the expression for the special case of a lognormal pdf, i.e. when X N(, ).
Solution of E 72
This is a transformation g of the form (2.7), which reads component-wise as follows:
gn (x1 , . . . , xN ) exn .
(2.183)
(2.184)
Jg = diag(ex1 , . . . , exN ) ,
(2.185)
|J | =
N
Y
exn .
n=1
(2.186)
N
g 1
Y
J (g (y)) =
yn ,
(2.187)
n=1
and the expression (2.182) follows. In particular, for a lognormal distribution, from (2.156, AM 2005)
and (2.182) we have:
12
LogN
f,
(y) =
(2) 2 ||
QN
n=1 yn
e 2 (ln(y))
(ln(y))
(2.188)
(2.189)
where the vector is defined in terms of the canonical basis (A.15, AM 2005) as follows:
n1 nk
1 (n1 )
+ + (nk ) .
i
(2.190)
Comparing with (2.13, AM 2005), we realize that the last term in (2.189) is the characteristic function of
X. Therefore we obtain:
RMY
n1 nk = X ( n1 nk ) .
(2.191)
From (2.157, AM 2005) and (2.191) we obtain the expression of the raw moments of the lognormal
distribution:
0
(
RMY
n1 nk = e
(n1 )
e 2 (
++ (nk ) )
(n1 )
(2.192)
.
In particular the expected value, which is the first raw moment, reads:
n +
E {Yn } = RMY
n =e
nn
2
(2.193)
mm
2
+ nn
2 +mn
(2.194)
47
mm
2
+ nn
2
(emn 1) .
(2.195)
(2.196)
Solution of E 74
If W is Wishart distributed then from (2.222, AM 2005) for any conformable matrix A we have:
AWA0 = AX1 X01 A0 + + AX X0 A0
= Y10 Y10 + + Y Y0
(2.197)
W(, AA ) ,
since:
Yt AXt N(0, AA0 ) .
(2.198)
In particular, we can reconcile the multivariate Wishart with the one-dimensional gamma distribution by
choosing A a0 , a row vector. In that case each term in the sum is normally distributed as follows:
Yt a0 Xt N(0, a0 a) .
(2.199)
a0 Wa Ga(, a0 a) .
(2.200)
12
1 2
1 2
22
.
(2.201)
R
Fix 1 1 and 2 1 and write a MATLAB
script in which you:
Set the inputs and ;
Generate a sample of size J 10,000 from W(, ) using the equivalent stochastic representation
(2.222, AM 2005);
Plot the histograms of the realizations of D det(W) and T tr(W) to show that indeed these
random variables are positive, see (2.236, AM 2005) and (2.237, AM 2005). Comment on whether
this is also true for 1;
Plot the 3D scatter-plot of the realizations of W11 vs. W12 vs. W22 to show the Wishart cloud.
Note that symmetry implies that a matrix is fully determined by the three non-redundant entries
(W11 , W22 , W12 ) and notice that as the degrees of freedom increases the clouds becomes less
and less "wedgy". Eventually, it becomes a normal ellipsoid, in accordance with the central limit
theorem;
Plot the separate histograms of the realizations of W11 , W12 and W22 ;
Superimpose the rescaled pdf (1.110, AM 2005) of the marginals of W11 and W22 to the respective
histograms to show that histogram and gamma pdf coincide. Indeed, from (2.230, AM 2005) the
marginal distributions of the diagonal elements of a Wishart matrix are gamma-distributed:
Wnn Ga(, nn ) ;
(2.202)
Compute and show on the command window the sample means, sample covariances, sample standard deviations and sample correlations;
Compute and show on the command window the respective analytical results (2.227, AM 2005)
and (2.228, AM 2005), making sure that they coincide.
Solution of E 75
R
script S_Wishart.
See the MATLAB
IW
f,
(Z) =
+1
1
1
1
+N
2
|| 2 |Z|
e 2 tr(Z ) .
(2.203)
Solution of E 76
If Z has an inverse-Wishart distribution:
Z IW(, ) ,
(2.204)
(2.205)
Using the following result in Magnus and Neudecker (1999) that applies to any invertible N N matrix
Q:
Q1
N (N +1)
(N +1)
2
|Q|
,
Q = (1)
we derive:
(2.206)
49
(N +1)
W
1
f,
)
1 (Z
N 1
1
(N +1)
1 2 Z1 2 e 12 tr(Z1 )
= |Z|
+1
1
1
1
+N
2
2
= || |Z|
e 2 tr(Z ) .
IW
f,
(Z) = |Z|
(2.207)
(x) dx
T t=1
RN
T Z
0
1X
ei x (xt ) (x)dx .
=
T t=1 RN
(2.208)
iT () =
T
1 X i0 xt
e
.
T t=1
(2.209)
E 78 Order statistics
R
Replicate the exercise of the MATLAB
script S_OrderStatisticsPdfStudentT assuming that the i.i.d.
variables are lognormal instead of Student t distributed.
Solution of E 78
R
See the MATLAB
script S_OrderStatisticsPdfLognormal.
(2.210)
y
0
N 2
2
g(y)dy < .
(2.211)
Solution of E 79
The family of ellipsoids centered in with shape are described by the following implicit aligns:
Ma(x, , ) = u ,
(2.212)
where Ma is the Mahalanobis distance of the point x from through the metric , as defined in (2.61,
AM 2005) and u (0, ), see (A.73, AM 2005). If the pdf fX is constant on those ellipsoids then it
must be of the form:
f, (x) = h Ma2 (x, , ) ,
(2.213)
where h is a positive function, such that the normalization condition is satisfied, i.e.:
Z
h Ma2 (x, , ) dx = 1 .
(2.214)
RN
e
Suppose we have determined such a function h. From (2.32), changing into a generic parameter
does not affect the normalization condition, and therefore the ensuing pdf is still the pdf of an elliptical
e in
e . On the other hand, if we change into a generic dispersion parameter ,
distribution centered in
order to preserve the normalization condition we have to rescale (2.213) accordingly:
v
u
u
e
i
t h 2
e .
e , )
h Ma (x,
f,
(x)
=
e
e
||
(2.215)
(2.216)
in such a way that the same functional form g is viable for any location and dispersion parameters (, ).
E{R2 }
,
N
(2.217)
(2.218)
51
We can write X = kXk U and from (2.259, AM 2005) we have that kXk and U X/ kXk are
independent and U is uniformly distributed on the surface of the unit ball. Then:
(N
Y
)
Xi2si
=E
(N
Y
i=1
)
2si
(kXk Ui )
i=1
(
=E
N
Y
!
2si
kXk
i=1
N
Y
!)
Ui2si
(2.219)
i=1
(N
)
n
o
Y
2s
2si
= E kXk
E
Ui
,
i=1
where s
PN
i=1 si .
Thus:
(N
Y
i=1
)
Ui2si
QN
=
E Xi2si
n
o .
2s
E kXk
i=1
(2.220)
(2.221)
see e.g. www.mathworld.com and references therein. For a standard multivariate normal variable X we
have:
n
o
2s
2 s
E kXk
= E (X12 + + XN
) E {Y s } ,
(2.222)
where Y 2N . Therefore from (1.109, AM 2005) we see that (2.222) is the s-th raw moment of a chisquare distribution with N degrees of freedom and thus, see e.g. www.mathworld.com and references
therein, we have:
n
o ( N + s)2s
2s
2
E kXk
=
.
( N2 )
(2.223)
N
N + 2s
(N + 2s 2)(N + 2(s 1)) n0
+s =
=
,
N +2s1
2
2
2 2
(2.224)
(2.225)
n
o
2s
E kXk
= (N + 2(s 1)) (N + 2)N
N
N
N
= 2s
+ (s 1)
+1
2
2
2
[s]
N
= 2s
.
2
(2.226)
(N
Y
)
Ui2si
i=1
N
1 Y (2si )!
,
( N2 )[s] i=1 4si si !
(2.227)
(2.228)
and
Cov {U} =
IN
,
N
(2.229)
where IN is the N N identity matrix. Consider now a generic elliptical random variable X with location
parameter and scatter parameter . To compute its central moments we write:
X + RAU ,
(2.230)
where:
AA0
A1 (X )
kA1 (X )k
1
R
A (X )
.
(2.231)
(2.232)
53
+RAU
ARU
CMX
m1 mk = CMm1 mk = CMm1 mk
N
X
n1 ,...,nk =1
N
X
n1 ,...,nk =1
N
X
(2.233)
n1 ,...,nk =1
= E Rk
N
X
n1 ,...,nk =1
E 81 Radial-uniform representation
R
script in which you:
Write a MATLAB
Generate a non-trivial 30 30 symmetric and positive matrix and a 30-dim vector ;
Generate J 10,000 simulations from a 30-dimensional elliptical random variable:
X + RAU .
(2.234)
In this expression , R, A, U are the terms of the radial-uniform decomposition, see (2.259, AM
2005). In particular, set:
R LogN(, 2 ) ,
where 0.1 and 2 0.04.
Solution of E 81
R
See the MATLAB
script S_EllipticalNDim.
(2.235)
a = a + a Z ,
Z El(0, 1, g1 ) ,
(2.236)
(2.237)
a = a + a Z ,
(2.238)
(2.239)
mn
= [I]mn .
Thus, in particular:
h
i0
h
i
v(n) 1 v(n) = 1 .
(2.240)
Due to (2.240), m satisfies (2.284, AM 2005) and thus it is defined on the surface of the ellipsoid. Also,
it trivially satisfies (2.283, AM 2005) and thus it is symmetrical.
X1
X2
N
1
2
,S
12
1 2
1 2
22
.
(2.241)
R
Fix 1 0, 2 0, 1 1, 2 1 and write a MATLAB
script in which you:
Plot the correlation between X1 and X2 as a function of (1, 1);
Use eig to plot as a function of (1, 1) the condition ratio of S, i.e. the ratio of the smallest
eigenvalue of S over its largest eigenvalue:
CR(S) 2 /1 .
(2.242)
X1
X2
LogN
1
2
,
12
1 2
1 2
22
.
R
Fix 1 0, 2 0, 1 1, 2 1 and write a MATLAB
script in which you:
(2.243)
55
R
Rely on the MATLAB
function LogNormalParam2Statistics to plot the correlation between X1
and X2 as a function of (1, 1) (notice that the correlation will not approach 1, why?);
Use eig to plot as a function of (1, 1), the condition ratio of S, i.e. the ratio of the smallest
eigenvalue of S over its largest eigenvalue:
CR(S) 2 /1 .
(2.244)
(2.245)
T
X
Xt .
(2.246)
t=1
t = 1, . . . , T ,
(2.247)
Ye
T
X
et ,
X
(2.248)
t=1
and comment on the difference between the distribution of Y versus the distribution of Ye .
Solution of E 86
From (2.194, AM 2005) the marginals read:
Xt St(, 0, 1),
t = 1, . . . , T .
(2.249)
t 6= s .
(2.250)
T
X
Xt = 10 X St(, 10 0T , 10 IT 1) .
(2.251)
t=1
Therefore:
Y St(, 0, T ) .
(2.252)
t 6= s .
(2.253)
From the central limit theorem and (1.90, AM 2005) (fix the typo with the online "Errata" at www.
symmys.com) we obtain:
e
Y N 0,
T
2
.
(2.254)
Both Y and Ye are the sum of uncorrelated identically distributed t variables. If the variables are independent, the CLT kicks in and the sum becomes normal.
Note. This only holds for > 2, otherwise the variance is not defined and the CLT does not hold. Indeed,
if = 1 we obtain the Cauchy distribution, which is stable: the sum of i.i.d. Cauchy variables is Cauchy.
If the variables are jointly Student t, they cannot be independent, even if they are uncorrelated, recall the
plot of the pdf of the Student t copula.
(2.255)
(2.256)
R
Write a MATLAB
script in which you:
Simulate and scatter-plot a large number of joint samples of X (X1 , X2 )0 ;
Superimpose to the above scatter-plot the plot of the location-dispersion ellipsoid of these variables.
R
In order to do so, feed the MATLAB
function TwoDimEllipsoid with the real inputs E {X} and
Cov {X} as they follow from the analytical results (2.227, AM 2005) and (2.228, AM 2005), do
not use the sample estimates from the simulations. Make sure that the ellipsoid suitably fits the
simulation cloud;
Fix 15 and 1 2 1 in (2.201) and plot the correlation Corr {X1 , X2 } as a function of
(1, +1). (Compare with the result of the previous point, which is a geometrical representation
of the correlation.)
57
2
=p
.
1 + 2
R
For the MATLAB
scripts, see S_WishartLocationDispersion and S_WishartCorrelation.
(2.257)
Hint. Use (2.30, AM 2005) and (2.188, AM 2005) and the built-in functions tpdf and tinv. Notice that
you will have to re-scale the built-in pdf and the built-in quantile of the standard t distribution.
R
Then, write a MATLAB
script where you call the above function to evaluate the copula pdf at a select
grid of bivariate values:
(2.258)
In a separate figure, plot the ensuing surface. Comment on the (dis)similarities with the normal copula
when 200 and comment on the (dis)similarities with the normal copula when 1 and 12 0.
What is the correlation in this case?
Solution of E 88
From (2.191, AM 2005), when the off-diagonal entries are null, the marginals are uncorrelated, if the
correlation is defined, which is true only for > 2 (see fix in the Errata). Therefore, for > 2 null
correlation does not imply independence, because the pdf is clearly not flat as 2. For 2 the
correlation simply does not exist. However the co-scatter parameter 12 can be set to zero, but this does
not imply independence because, again, the pdf is far from flat as 2. For the implementation, see the
R
MATLAB
function StudentTCopulaPdf and the script S_DisplayStudentTCopulaPdf.
E 89 Full co-dependence
R
Write a MATLAB
script in which you generate J 10,000 joint simulations for an 10-variate random
variable X in such a way that each marginal is gamma-distributed, Xn Ga(n, 1), n = 1, . . . , 10, and
such that each two entries are fully codependent, i.e. the cdf of their copula is (2.106, AM 2005).
Chapter 3
(K ,E)
2 Ct t
E t Ut
(Kt ,E)
Ct
Ut
(3.1)
= x, y), x = 0, 3);
1/2
3/2 3
x+
x + .
2
24
we can perform a
(3.2)
is of the order of a few percentage points, you can stop at the first order.
Solution of E 90
Substituting the definition (3.48, AM 2005) of the ATMF strike in the Black-Scholes pricing formula
(3.41, AM 2005) we obtain:
(Kt ,E)
Ct
(E)
(K,E)
= C BS (E t, Kt , Ut , Zt , t
)
1
E t (K,E)
= Ut 1 + erf
t
2
2
1
E t (K,E)
Ut 1 + erf
t
2
2
E t (K,E)
= Ut erf
t
.
8
(3.3)
Therefore:
(K ,E)
t t
r
=
8
erf 1
Et
58
(Kt ,E)
Ct
Ut
!
,
(3.4)
59
and using the fact that the term in the argument of the inverse error function in (3.4) is of the order of a
few percentage points, we can use the Taylor expansion at the first order, and the approximation follows.
(3.5)
Note. Repeating the argument in (3.7) we obtain that the sum of any number of independent and identically distributed random variables reads:
fX1 ++XT = fX fX ,
(3.6)
(3.7)
(3.8)
If the variables X and Y are independent, the joint pdf is the product of the marginal pdf:
fXA ,XB (xA , xB ) = fXA (xA )fXB (xB ) .
(3.9)
(3.10)
We see that this is the convolution (B.43, AM 2005) of the marginal pdf:
fXA +XB = fXA fXB .
(3.11)
(3.12)
Note. This result is not surprising. Indeed, we recall from (2.14, AM 2005) that the characteristic function
of a distribution is the Fourier transform (B.45, AM 2005) of the pdf of that distribution. Therefore:
X1 ++XT = F [fX1 ++XT ] ,
(3.13)
and using the the expression of the pdf of the sum (3.6) and the relation between convolution and the
Fourier transform (B.45, AM 2005) we obtain:
X1 ++XT = F [fX fX ] = (F [fX ])T = (X ())T ,
(3.14)
(3.15)
In case the distribution of X is known through its pdf we only need to apply once the inverse Fourier
transform F 1 and once the Fourier transform F:
fX1 ++XT = F 1 (F [fX ])T .
(3.16)
Solution of E 92
Using the factorization (2.48, AM 2005) of the characteristic function of independent variables we obtain:
n 0
o
X1 ++XT () E ei (X1 ++XT )
n 0
o
0
= E ei X1 ei XT
n 0 o
n 0 o
= E ei X1 E ei XT .
(3.17)
X1 ++XT () = (X ())T ,
(3.18)
Therefore:
61
E {XT,e } .
e
(3.19)
XT , = (XT ,e ) e .
(3.20)
XT ,e ()
= (XT ,e ()) e 1
.
e
(3.21)
From (2.98), evaluating this derivatives at the origin and using the fact that:
n 0 o
X (0) E eiX 0 = 1 ,
(3.22)
E {XT, } = i
XT ,e (0)
XT , (0)
i
.
=
(3.23)
E {XT,e } .
e
(3.24)
(Cov {XT,e }) .
e
(3.25)
XT , = (XT ,e ) e .
(3.26)
1 XT ,e ()
e
=
(XT ,e ())
0
e
0
XT ,e () XT ,e ()
=
1 (XT ,e ()) e 2
e e
()
XT ,e
.
+ (XT ,e ()) e 1
e
0
(3.27)
From (2.98), evaluating these derivatives at the origin and using the fact that:
n 0 o
X (0) E eiX 0 = 1 ,
(3.28)
2 XT , (0)
E XT, X0T, =
0
XT ,e (0)
XT ,e (0) XT ,e (0)
=
1
.
0
e e
e 0
(3.29)
0
XT ,e (0) 2 XT ,e (0) XT ,e (0)
+
0
0
e
e
XT ,e (0) XT ,e (0) XT ,e (0)
=
0
0
XT ,e (0)
XT ,e (0)
XT ,e (0)
=
i
i
+
.
e
0
0
Using again (2.98) in (3.30) we obtain:
(3.30)
63
0
E {XT,e } E {XT,e } + E XT,e X0T,e
e
= (Cov {XT,e }) .
e
Cov {XT, } =
(3.31)
Solution of E 95
R
See the MATLAB
script S_MultiVarSqrRootRule.
(3.32)
Assume that the invariants satisfy the accordion property (3.60, AM 2005). Prove that the distribution
of the market invariants at any generic investment horizon is Cauchy. Draw your conclusions on the
propagation law of risk in terms of the modal dispersion (2.212, AM 2005).
Hint. Like the normal distribution, the Cauchy distribution is stable. Use the characteristic function
(2.210, AM 2005) to represent this distribution at any horizon. Notice that the covariance is not defined.
Solution of E 96
From (3.64, AM 2005) and (2.210, AM 2005) we obtain:
() = (e ()) e
0
= (ei
=e
i0 e
) e
(3.33)
0 ( e )2
Therefore:
X Ca
2
, 2
e e
.
(3.34)
MDis {X} =
2
MDise {X} .
e2
(3.35)
Therefore, the propagation law for risk is linear in the horizon, instead of being proportional to the square
root of the horizon.
(3.36)
p
E{(X X )2 } ,
(3.37)
(3.38)
4
kuX E{(X X )4 }/X
,
(3.39)
n
X E{(X X )n }/X
,
n 3.
(3.40)
Consider the projected invariant, defined as the sum of k intermediate single-period invariants:
Y = X1 + + Xk .
(3.41)
Such rule applies e.g. to the compounded return (3.11, AM 2005), but not to the linear return (3.10, AM
2005). Project the single-period statistics (3.36)-(3.40) to the arbitrary horizon k, i.e. compute the first n
standardized summary statistics for the projected invariant Y :
(5)
(n)
Y , Y , skY , kuY , Y , . . . , Y ,
(3.42)
(n)
X , X , skX , kuX , X , . . . , X .
Hint. Use the central moments, see (1.48, AM 2005):
(3.43)
CMX
1 X ,
65
n
CMX
n E{(X E{X}) } ,
n = 2, 3, . . . ,
(3.44)
n = 1, 2, . . . ,
(3.45)
dn ln(E{ezX })
,
dz n
z=0
n = 1, 2, . . . .
(3.46)
(n)
X = RMX
n
n1
X
n1
k1
(k)
X RMX
nk ,
(3.47)
k=1
(n)
X , X , skX , kuX , X , . . . , X .
(3.48)
X
Step 1. We compute from (3.48) the central moments CMX
1 , . . . , CMn of Xt . To do so, notice
X
2
and that from (3.40) we obtain:
from the definition of central moments (3.44) that CM2 X
(n)
n
CMX
n = X X ,
n 3.
(3.49)
X
Step 2. We compute from the central moments the raw moments RMX
1 , . . . , RMn of Xt .
(1)
(n)
Step 3. We compute from the raw moments the cumulants X , . . . , X of Xt . To do so, we
(1)
zX
start from X = RMX
} E{1 + zX} =
1 : this follows from the Taylor approximations E{e
X
1 + z RM1 for any small z and ln(1 + x) x for any small x, and from the definition of the first
cumulant in (3.46). Then we apply recursively the identity (3.47).
(1)
(n)
(1)
(n)
Step 4. We compute from the cumulants X , . . . , X of Xt the cumulants Y , . . . , Y of the
projection Y X1 + + X . To do so, we notice that for any independent variables X1 , . . . , X
we have E{ez(X1 ++X ) } = E{ezX1 } E{ezX }. Substituting this in the definition of the
cumulants (3.46) we obtain:
(n)
(n)
(n)
X1 ++X = X1 + + X .
(3.50)
In particular, since Xt is an invariant, all the Xt s are identically distributed. Therefore the projected cumulants read:
(n)
(n)
Y = X .
(3.51)
(1)
(n)
Step 5. We compute from the cumulants Y , . . . , Y the raw moments RMY1 , . . . , RMYn of Y .
To do so, we use recursively the identity:
(n)
RMYn = Y +
n1
X
n1
k1
(k)
Y RMYnk ,
(3.52)
k=1
(n)
Y , Y , skY , kuY , Y , . . . , Y ,
(3.53)
of the projected multi-period invariant Y , by applying to Y the definitions (3.36)-(3.40). See the
R
MATLAB
script S_ProjectSummaryStatistics for the implementation
Br E {XF0 } E {FF0 }
(3.54)
Solution of E 98
From the definition (3.116, AM 2005) of the generalized r-square and (3.120, AM 2005), the regression
factor loadings minimizes the following quantity:
M E {(X BF)0 (X BF)}
X
=
E {(Xn Bnk Fk )(Xn Bnj Fj )}
n,k,j
X
E Xn2
E {Bnk Fk Xn }
n,k
E {Xn Bnj Fj } +
n,j
E {Bnk Fk Bnj Fj }
(3.55)
n,k,j
X
E Xn2 2
Bnk E {Xn Fk }
n,k
n,k,j
M
Bsl
X
= 2 E {Xs Fl } +
Bnj Bnk E {Fk Fj }
Bsl
n,k,j
X
= 2 E {Xs Fl } + 2
Bsk E {Fk Fl } .
k
(3.56)
67
(3.57)
Br = E {XF0 } E {FF0 }
(3.58)
= X E {XF0 } E {FF0 }
(3.59)
F,
where Br is given by (3.121, AM 2005). In general, the residuals do not have zero expected value:
n
o
1
E {U} = E X E {XF0 } E {FF0 } F
= E {X} E {XF0 } E {FF0 }
(3.60)
E {F} .
= E {XF0 } E {FF0 }
(3.61)
E {FF0 }
E {F} E {F0 }
(3.62)
Solution of E 100
Using (3.121, AM 2005), we can also express the covariance of the residuals with the factor as follows:
Cov {U, F} = E {UF0 } E {U} E {F0 }
o
n
1
= E (X E {XF0 } E {FF0 } F)F0
n
o
1
E X E {XF0 } E {FF0 } F E {F0 }
(3.63)
Cov {F}
E 101 Explicit factors (with a constant among the factors): recovered invariants
(www.3.4) *
Show that in the case of a regression with a constant among the factors, the regression coefficients (3.121,
AM 2005) yield the recovered invariants (3.127, AM 2005):
e r E {X} + Cov {X, F} Cov {F}1 (F E {F}) .
X
(3.64)
Solution of E 101
Assume one of the factors is a constant as in (3.126, AM 2005). Then the linear model (3.119, AM 2005)
becomes:
X a + GF + U .
(3.65)
In order to maximize the generalized r-square (3.116, AM 2005) we have to minimize the following
expression:
0
M E [X (a + GF)] [X (a + GF)]
= E {X0 X} + a0 a + E {F0 G0 GF}
0
(3.66)
X
X
X
(aj )2 + 2
aj Gjk E {Fk } 2
aj E {Xj } .
j
(3.67)
j,k
X
k
Gjk E {Fk } ,
(3.68)
69
(3.69)
jkl
aj Gjk E {Fk } 2
jk
jkl
E {Xj Gjk Fk }
jk
aj Gjk E {Fk } 2
jk
(3.70)
E {Xj Gjk Fk } .
jk
Setting to zero the first order derivative with respect to Gjk and using (3.68) we obtain:
0=
!
=
E {Fk Gjl Fl } +
E {Xj }
!
=
E {Fk Gjl Fl }
E {Gjl Fl } E {Fk }
(3.71)
n
o
= Cov [GF]j , Fk Cov {Xj , Fk } .
In matrix notation this expression reads:
G Cov {F} = Cov {X, F} ,
(3.72)
which implies:
1
(3.73)
Substituting (3.68) and (3.72) in (3.65) we find the expression of the recovered invariants:
e r E {X} + Cov {X, F} Cov {F}1 (F E {F}) .
X
(3.74)
E 102 Explicit factors (with a constant among the factors): expected value of
residuals (www.3.4) *
e r in (3.119, AM 2005) and (3.126, AM 2005) and show that:
Define the residuals as Ur X X
E{Ur } = 0 ,
Corr{F, U} = 0KN .
(3.75)
Solution of E 102
The residuals read:
e r = X Gr F ,
Ur X X
(3.76)
where:
X X E {X} ,
F F E {F} .
(3.77)
Therefore the residuals have zero expected value. The covariance of the residuals with the factors reads:
n
0o
X Gr F F
o
n 0o
n
0
= E XF Gr E FF
Cov {Ur , F} = E
(3.78)
(3.79)
E 103 Explicit factors (with a constant among the factors): covariance of residuals (www.3.4) *
Show that the covariance of the residuals (3.129, AM 2005) reads:
1
Cov {F, X} .
(3.80)
Solution of E 103
The covariance of the residual reads:
0 o
X Gr F X Gr F
n
o
n
o
n 0o
0
0
= E XX 2 E XF G0r + Gr E FF G0r
Cov {Ur } = E
n
(3.81)
Cov {F, X} .
(3.82)
71
n
o
e r = 1 tr(Corr {X, E0 F} Corr {E0 F, X}) ,
R2 X, X
N
(3.83)
(3.84)
(3.85)
where:
1
Left-multiplying (3.127, AM 2005) by DX
we obtain the recovered z-score of the original variable X:
e X D1 (X
e r E {X})
Z
X
1
= D1
(F E {F})
X Cov {X, F} Cov {F}
1
1
= Cov DX X, F Cov {F} (F E {F}) .
(3.86)
Consider the spectral decomposition (2.76, AM 2005) of the covariance of the factors:
Cov {F} EE0 ,
(3.87)
(3.88)
and satisfies EE0 = IK , the identity matrix; and is the diagonal matrix of the eigenvalues sorted in
decreasing order:
diag(1 , . . . , K ) .
(3.89)
With the spectral decomposition we can always rotate the factors in such a way that they are uncorrelated.
Indeed the rotated factors E0 F satisfy:
Cov {E0 F} = E0 Cov {F} E = E0 EE0 E = ,
(3.90)
ZF 2 E0 (F E {F}) ,
which are uncorrelated and have unit standard deviation:
(3.91)
(3.92)
(3.93)
On the other hand, the generalized r-square defined in (3.116, AM 2005) reads in this context:
n
o
n
o
eX
e r R2 ZX , Z
R2 X, X
n
o
e X )0 (ZX Z
eX )
E (ZX Z
1
tr {Cov {ZX }}
a
=1
.
N
The term in the numerator can be written as follows:
n
o
e X )0 (ZX Z
eX )
a E (ZX Z
h
i0
1
1
e
e
= E DX (X X) DX (X X)
n
o
e 0 D1 D1 (X X)
e
= E (X X)
X
X
n
o
1 1
e
e 0
= tr DX DX E (X X)(X
X)
,
(3.94)
(3.95)
1
a tr(D1
X DX [Cov {X} Cov {X, F} Cov {F}
=
=
1
1
Cov {X}) tr(D1
Cov {F, X})
X DX Cov {X, F} Cov {F}
1
1 1
tr(Corr {X}) tr(DX DX Cov {X, F} Cov {F} Cov {F, X})
1
tr(D1
X DX
o
n
o
n
1
1
1 0
2
= N tr(Cov D1
E Cov E 2 ZF , D1
X X )
X X, E ZF E
1
= N tr(Cov D1
X X, ZF Cov ZF , DX X )
(3.96)
(3.97)
73
(3.98)
where is diagonal and E is invertible. Prove that is symmetric, see definition (A.51, AM 2005).
Solution of E 105
0 (EE0 )0 = E0 E0 = EE0 = .
(3.99)
(3.100)
where is diagonal and E is invertible. Prove that is positive if and only if all the diagonal elements
of are positive, see definition (A.52, AM 2005).
Solution of E 106
For any v there exists one and only one w E0 v and w 0 v 0. Assume that all the diagonal
elements of are positive and v 6= 0. Then:
v0 v v0 EE0 v = w0 w =
N
X
wn2 n > 0 .
(3.101)
n=1
Similarly, from the above identities, if 0 < v0 v for any v 6= 0, then each n has to be positive.
(3.102)
(3.103)
where:
EK E0K
(3.104)
Consider a generic point x in RN . Since the eigenvectors of the covariance matrix are a basis of RN we
can express x as follows:
x E {X} +
N
X
n=1
n e(n) ,
(3.105)
a + Gx = E {X} +
K
X
n e(n) .
(3.106)
n=1
By substituting (3.103), (3.104) and (3.105) in the left hand side of the above relation we obtain:
E {X} +
N
X
!
n e(n)
n=1
= E {X} +
N
X
n EK E0K e(n)
(3.107)
n=1
Therefore in order prove our statement it suffices to prove that if n K then the following holds:
EK E0K e(n) = e(n) ,
(3.108)
EK E0K e(n) = 0 .
(3.109)
Both statements follow from the definition (3.157, AM 2005) of EK , which implies:
0
e(1) e(n)
..
,
.
(K) 0 (n)
e
e
(3.110)
(3.111)
(3.112)
75
Solution of E 108
The residual of the PCA dimension reduction reads:
e p (IN EK E0 )(X E {X})
Up X X
K
= RK R0K (X E {X}) ,
(3.113)
(3.114)
(3.115)
RK R0K e(n)
(K+1) 0 (n)
e
e
.
(K+1)
(N )
..
(e
e )
.
(N ) 0 (n)
e
e
(3.116)
(3.117)
RK R0K e(n) = 0 .
(3.118)
and if n K then:
Since the set of eigenvectors is a basis in RN , (3.117) and (3.118) prove (3.115). Therefore the term in
the numerator of the generalized r-square (3.116, AM 2005) of the PCA dimension reduction reads:
n
o
e p )0 (X X
e p)
M E (X X
= E {(X E {X})0 RK R0K RK R0K (X E {X})}
= E {(X E {X})0 RK R0K (X E {X})}
(3.119)
(3.120)
(3.121)
= diag(K+1 , . . . , N ) .
Substituting (3.121) in (3.119) we obtain:
N
X
M=
n .
(3.122)
n=K+1
The term in the denominator of the generalized r-square (3.116, AM 2005) is the sum of all the eigenvalues. This follows from (3.149, AM 2005) and (A.67, AM 2005). Therefore, the generalize r-square
reads:
PN
PK
n
o
e p = 1 Pn=K+1 n = P n=1 n .
R2 X, X
N
N
n=1 n .
n=1 n .
(3.123)
The residual (3.113) clearly has zero expected value. Similarly, the factors:
Fp E0K (X E {X})
(3.124)
(3.125)
(3.126)
Similarly:
(3.127)
77
X Y + U,
(3.128)
where U is the residual that the model fails to approximate. To evaluate the goodness of a model, we
introduce the generalized r-square as in Meucci (2010e):
2
Rw
{Y, X} 1
(3.129)
(3.130)
where the factors are extracted by linear combinations from the market:
F GX .
(3.131)
Then each choice of B and G gives rise to a different model Y. Determine analytically the expressions for
the optimal B and G that maximize the r-square (3.129) and verify that they are the principal components
of the matrix Cov {WX}. What is the r-square provided by the optimal optimal B and G? Then compute
the residuals U. Are the residuals correlated with the factors F? Are the residuals idiosyncratic?
Hint. See Meucci (2010e).
Solution of E 109
First, we perform the spectral decomposition of the covariance matrix:
Cov {WX} EE0 .
(3.132)
In this expression is the diagonal matrix of the decreasing, positive eigenvalues of the covariance:
diag(21 , . . . , 2N ) ,
(3.133)
and E is the juxtaposition of the respective eigenvectors, which are orthogonal and of length 1 and thus
EE0 = IN :
E (e(1) , . . . , e(N ) ) .
(3.134)
(3.135)
G E0K w .
(3.136)
The r-square (3.185) provided by the principal component solution (3.136) reads:
PK
2
2
k=1 k
.
Rw
= PN
2
n=1 n
(3.137)
The residuals are not correlated with the factors F but they are correlated with each other and therefore
they are not idiosyncratic. See all the solutions in Meucci (2010e).
Solution of E 110
R
See the MATLAB
script S_SwapPca2Dim.
(3.138)
R
Write a MATLAB
script in which you:
Choose an arbitrary dimension N and generate arbitrarily;
Generate as follows:
BB0 + 2 ,
(3.139)
79
R
b and
b as
Run the built-in MATLAB
function factoran, which outputs the estimated values of B
well as the hidden factors {fj }j=1,...,J ;
Verify that the factor analysis routine works well, i.e.:
b B
bB
b0 +
b2;
(3.140)
(3.141)
R
Write a MATLAB
script in which you:
Upload the database DB_BondAttribution with time series over the year 2009 of the above variables;
Model the joint distribution of the yet-to-be realized factors and residuals by means of the empirical
distribution:
fFT +! ,UT +!
T
1 X (ft ,ut )
,
T t=1
(3.142)
E 113 Time series factors: unconstrained time series correlations and r-square
Consider the approximation Y provided to the market X by a given model:
X Y + U,
(3.143)
where U is the residual that the model fails to approximate. To evaluate the goodness of a model, we
introduce the generalized r-square as in Meucci (2010e):
2
Rw
{Y, X} 1
(3.144)
(3.145)
where the factors F are imposed exogenously. Then each choice of B gives rise to a different model
Determine analytically the expressions for the optimal B that maximize the r-square (3.144). Then compute the residuals U. Are the residuals correlated with the factors F? Are the residuals idiosyncratic?
What is the r-square provided by the optimal optimal B?
Hint. See Meucci (2010e).
Solution of E 113
The solution reads:
1
(3.146)
The residuals and the factors are uncorrelated but the residuals are not idiosyncratic because their correlations with each other are not null. The r-square provided by the model with loadings (3.146) is:
1
2
Rw
=
(3.147)
(3.148)
where X (X1 , . . . , XN )0 are the yet to be realized returns of the stocks over next week; a
(a1 , . . . , aN )0 are N constants; F (F1 , . . . , FK )0 are the factors, i.e. the yet to be realized returns
of the industry indices over next week; B is a N K matrix of coefficients that transfers the randomness of the factors into the randomness of the risk drivers; and U (U1 , . . . , UN )0 are defined as the N
R
residuals that make (3.148) an identity. Write a MATLAB
script in which you:
Upload the database of the weekly stock returns {Xt }t=1,...,T in DB_Securities_TS, and the database
of the simultaneous weekly indices returns {ft }t=1,...,T in DB_Sectors_TS;
Model the joint distribution of X and F by means of the empirical distribution:
fX,F
T
1 X (Xt ,ft )
,
T t=1
(3.149)
where (Y) denotes the Dirac-delta, which concentrates a unit probability mass on the generic point
Y;
81
Compute the optimal loadings B in (3.148) that give the factor model the highest generalized
multivariate distributional r-square as in Meucci (2010e) (you will notice that the weights are arbitrary):
2
B argmax Rw
{BF, X} ;
(3.150)
Compute the correlations of the residuals with the factors and verify that it is null;
Compute the correlations of the residuals with each other and verify that it is not null, i.e. the
residuals are not idiosyncratic.
Hint. The optimal loadings turn out to be the standard multivariate OLS.
Note. See Meucci (2010e).
Solution of E 114
R
See the MATLAB
script S_TimeSeriesIndustries.
E 115 Time series factors: generalized time-series industry factors (see E 114)
R
Consider the same setup than in E 114. Write a MATLAB
script in which you:
Compute the optimal loadings B in (3.148) that give the factor model the highest constrained
generalized multivariate distributional r-square defined in Meucci (2010e):
2
B argmax Rw
{BF, X} .
(3.151)
BC
In this expression, assume that the constraints C are the following: all the loadings are bound from
below by B 0.8 and from above by B 1.2 and the market-capitalization weighted sum of the
loadings be one:
0.8 Bn,k B ,
N
X
n = 1, . . . , N , k = 1, . . . , K
(3.152)
Mn Bn,k 1 .
n=1
X
F
X
X
N
,
F
0XF
XF
F
,
(3.153)
where:
PT +,n
Xn ln
PT,n
ST +,k
Fk ln
.
ST,k
(3.154)
(3.155)
Z = eF 1 ,
(3.156)
R BZ + U .
(3.157)
Compute the expression of B that minimizes the generalized r-square, the expression of the covariance
Z of the explanatory factors and the expression of the covariance U of the residuals.
Solution of E 116
From (3.121, AM 2005) the optimal loadings read:
B E {RZ0 } (E {ZZ0 })1 .
(3.158)
(3.159)
then:
1
E {Y} = e+ 2 diag()
0
E {YY } = (e
+ 12 diag()
(3.160)
(e
+ 12 diag() 0
))e ,
(3.161)
R
Z
X
X
LogN
,
F
0XF
XF
F
,
(3.162)
(3.163)
83
we can easily compute all the terms E {R}, E {RR0 }, E {Z}, E {ZZ0 } and E {RZ0 }. Therefore we
obtain the loadings (3.158). The covariance of the explanatory factors then follows from:
Z = E {ZZ0 } E {Z} E {Z0 } ,
(3.164)
(3.165)
b R 6= B
b
b ZB
b0 +
bU .
Solution of E 117
R
See the MATLAB
script S_ResidualAnalysisTheory.
(3.166)
(3.167)
Solution of E 118
In this case the most explanatory interpretation reads:
R a + BZ + U ,
(3.168)
B RZ 1
ZZ
(3.169)
a E{R} B E{Z} .
(3.170)
and:
(3.171)
(3.172)
where U is the residual that the model fails to approximate. To evaluate the goodness of a model, we
introduce the generalized r-square as in Meucci (2010e):
2
Rw
{Y, X} 1
(3.173)
(3.174)
where the loadings B are exogenously chosen, but the factors F are left unspecified. Then each choice of
F gives rise to a different model. Assume that the factors are a linear function of the market:
F GX .
(3.175)
Determine analytically the expressions for the optimal G that maximize the r-square (3.173).
Solution of E 120
The solution reads:
G = (B0 B)1 B0 ,
(3.176)
tr(Cov {WBF})
,
tr(Cov {WX})
(3.177)
85
(3.178)
Y PX ,
(3.179)
P B(B0 B)1 B0
(3.180)
where:
is a projection operator. Indeed, it is easy to check that P2 = P. The linear assumption (3.175) gives rise
to the residuals:
U P X ,
(3.181)
P I B(B0 B)1 B0 ,
(3.182)
where:
is the projection in the space orthogonal to the span of the model-recovered market. Therefore, the
recovered market and the residuals live in orthogonal spaces. However, the residuals and the factors are
not uncorrelated and the residuals are not idiosyncratic, see also the discussion in Meucci (2010e).
(3.183)
where X (X1 , . . . , XN )0 are the yet to be realized returns of the stocks over next week; a
(a1 , . . . , aN )0 are N constants, F (F1 , . . . , FK )0 are the factors, i.e. the yet to be realized random
variables, B is a N K matrix of coefficients that transfers the randomness of the factors into the randomness of the risk drivers and that is imposed exogenously, and U (U1 , . . . , UN )0 are defined as the
R
N residuals that make (3.148) an identity. Write a MATLAB
script in which you:
Upload the database DB_Securities_TS of the matrix B of dummy exposures of each stock to its
industry;
Upload the database DB_Securities_IndustryClassification of weekly stock returns {Xt }t=1,...,T ;
fX
T
1 X (Xt )
,
T t=1
(3.184)
where (Y) denotes the Dirac-delta, which concentrates a unit probability mass on the generic point
Y;
Define the cross-sectional factors as linear transformation of the market F GX;
Compute the optimal coefficients G that give the factor model the highest generalized multivariate
distributional r-square defined in Meucci (2010e):
2
G argmax Rw
{BGX, X} ,
(3.185)
In this expression assume that the r-square weights matrix w to be diagonal and equal to the inverse
of the standard deviation of each stock return;
Compute the correlations of the residuals among each other and with the factors and verify that
neither is null. In other words, the model is not of systematic-plus-idiosyncratic type.
Hint. The optimal loadings turn out to be the standard multivariate weighted-OLS.
Note. See Meucci (2010e).
Solution of E 122
R
See the MATLAB
script S_CrossSectionIndustries.
2
G argmax Rw
{BGX, X} .
(3.186)
BC
In this expression, assume that the constraints C are that the factors F GX be uncorrelated with
the overall market:
C : G Cov {X} m 0 ,
(3.187)
where you can assume the market weights m to be equal weights for this exercise;
Compute the correlations of the residuals among each other and with the factors and verify that
neither is null. In other words, the model is not of systematic-plus-idiosyncratic type.
Note. See Meucci (2010e).
Solution of E 123
R
See the MATLAB
script S_CrossSectionConstrainedIndustries.
87
(3.188)
(3.189)
where B is a given vector of loadings, F is a yet-to-be defined explanatory factor, and U are residuals
R
script in which you:
that make (3.189) hold. Write a MATLAB
Choose N and generate arbitrarily the parameters in (3.188) and the vector of loadings in (3.189);
Generate a large number of simulations from (3.188);
Define the factor F through cross-sectional regression and compute the residuals U;
Show that factor and residual are correlated:
Corr {F, U} =
6 0.
(3.190)
X
F
X
X
N
,
F
0XF
XF
F
,
(3.191)
where:
PT +,n
Xn ln
PT,n
ST +,k
Fk ln
.
ST,k
(3.192)
(3.193)
In particular, assume that the compounded returns are generated by the linear model:
X X + DF + ,
(3.194)
F
N
0
0
F
,
0
0
,
(3.195)
and is diagonal. Notice that (3.194)-(3.195) is a specific case of, and fully consistent with, the more
general formulation (3.191). The specification (3.194) is the "estimation" side of the model, i.e. the
model that would be fitted to the empirical observations. We want to represent the linear returns on the
securities:
R = eX 1 ,
(3.196)
Z = eF 1 ,
(3.197)
R a + BZ + U .
(3.198)
The specification (3.198) is the interpretation side of the model, i.e. the model that would be used for
R
portfolio management applications, such as hedging or style analysis. Write a MATLAB
script in which
you:
Upload X , D, F and from DB_LinearModel;
Study the relationship between the constant X in (3.194) and the intercept a in (3.198);
Study the relationship between the loadings D in (3.194) and the loadings B in (3.198);
Determine if U idiosyncratic?
Note. See Meucci (2010c) and Meucci (2010b).
Solution of E 126
In the simple bi-variate and rescaled case such that Pt = St = 1 the returns are shifted multivariate
lognormal:
(t)
RP
(t)
RS
2
X
X
LogN t
,t
F
X,F X F
X,F X F
F2
1.
(3.199)
(3.200)
89
E {Y} = e+ 2 diag() .
1
(3.201)
Also:
Cov {Y} = E {YY0 } E {Y} E {Y0 } .
(3.202)
(t)
Var RS
(3.203)
Therefore:
2
(3.204)
R
Finally, U is not idiosyncratic, and the longer the horizon, the more pronounced this effect. See MATLAB
script S_HorizonEffect for the implementation.
ln St+ ln St
ln t+ ln t
N( , ) ;
(3.205)
Assume that the investment horizon is eight weeks. We want to represent the linear returns on the
options RC in terms of the linear returns R of the underlying S&P 500 by means of a linear model:
RC a + BR + U .
(3.206)
Notice that the specification (3.206) is the interpretation side of a "factors on demand" model.
Generate joint simulations for RC and R and scatter-plot the results;
Compute a and B by OLS;
Compute the cash and underlying amounts necessary to hedge RC based on the delta of the BlackScholes formula. Compare with a and B;
Repeat the above exercise when the investment horizon shifts further or closer in the future.
Hint. See Meucci (2010c) and Meucci (2010b).
Solution of E 127
To compute the hedge, consider the risk-neutral pricing align for a generic option (not necessarily a call
option):
O S Crt ,
(3.207)
where O is the option price; S is the underlying value; r is the risk-free rate; is the "delta":
O
,
S
(3.208)
(3.209)
C
S
O
rt + S
,
O
O
O S
(3.210)
RO a + bR ,
(3.211)
C
rt,
O
(3.212)
Then:
or:
where:
S.
O
R
See the MATLAB
script S_HedgeOptions for the implementation.
dk Zk + ,
(3.213)
kCK
91
The recursive rejection routine in Meucci (2005, section 3.4.5) to solve heuristically the above
problem by eliminating the factors one at a time starting from the full set;
The recursive acceptance routine, which is the same as the above recursive rejection, but it starts
from the empty set, instead of from the full set.
Note. See Meucci (2010c).
Solution of E 128
R
See the MATLAB
script S_SelectionHeuristics.
(3.214)
Suppose that the operator admits a one-dimensional eigenvalue/eigenfunction pair, i.e. there exist a number and a function:
h
i
S e() = e() ,
(3.215)
where the function is unique up to a constant. Using Table (B.4, AM 2005), the spectral equation (3.215)
reads explicitly as follows:
Z
(3.216)
First of all we determine the generic form of such an eigenfunction, if it exists. Expanding in Taylor
series the spectral basis and using the spectral equation we obtain
e() (x + dx) =
Z
=
(3.217)
Z
=
(3.218)
Therefore, integrating by parts and using the assumption that the matrix S vanishes at infinity, we obtain
the following identity:
de()
=
dx
Z
h
i
()
y S(x, y)e (y) dy + S(x, y)y e() (y)dy
=
Z R
=
S(x, y)y e() (y)dy ,
(3.219)
(3.220)
(3.221)
Similarly to (A.52, AM 2005) the operator is positive if for any function v in its domain the following is
true:
Z
hv, S [v]i
S(x, y)v(y)dydx 0 .
v(x)
R
(3.222)
In this case we can restate the spectral theorem in the continuum making use of the formal substitutions
in Tables B.4, B.11 and B.20 of Meucci (2005): if the kernel representation S of a linear operator satisfies
(3.221) and (3.222), then the operator
admits an orthogonal basis of eigenfunctions. In other words, then
there exists a set of functions e() () R and a set of positive values { }R such that (3.215) holds,
which is the equivalent of (A.53, AM 2005) in the continuous setting of functional analysis.
Furthermore, the set of eigenfunctions satisfies the equivalent of (A.54, AM 2005) and (A.56, AM 2005),
i.e.:
D
E Z
e() , e()
e() (x)e() (x)dx = 2 () () ,
(3.223)
where we chose a slightly more convenient normalization constant. Consider the operator E represented
by the following kernel:
E(y, ) e() (y) .
(3.224)
This is the equivalent of (A.62, AM 2005), i.e. it is a (rescaled) unitary operator, the same way as (A.62,
AM 2005) is a rotation. Indeed:
93
Z Z
Z
()
kEgk = ( e (x)g()d)( e() (x)g()d)dx
R
ZR Z R Z
()
=
( e (x)e() (x)dx)g()g()dd
R R R
Z Z
= 2
() ()g()g()dd
R R
Z
2
= 2 g()g()d = 2 kgk .
2
(3.225)
By means of the spectral theorem we can explicitly compute the eigenfunctions and the eigenvalues of a
positive and symmetric Toeplitz operator. First of all from (3.215), (3.220) and the fact that in the spectral
theorem to each eigenvalue corresponds only one eigenvector, we obtain that the following relation must
hold:
de() (x)
= g e() (x) ,
dx
(3.226)
for some constant g that might depend on . The general solution to this equation is:
e() (x) = A eg x .
(3.227)
To determine this constant, we compare the normalization condition (3.223) with (B.41, AM 2005) obtaining:
e() (x) = eix .
(3.228)
To compute the eigenvalues of S we substitute (3.228) in (3.216) and we re-write the spectral equation:
ix
Z
=
S(x, x + z)e
i(x+z)
dz = e
ix
S(x, x + z)eiz dz .
(3.229)
Now recall that S is Toeplitz and thus it is fully determined by its cross-diagonal section:
S(x, x + z) = S(0, z) h(z) ,
(3.230)
where h is symmetric around the origin. Therefore we only need to evaluate (3.229) at x = 0, which
yields:
Z
=
h(z)eiz dz .
(3.231)
In other words, the eigenvalues as a function of the frequency are the Fourier transform of the crossdiagonal section of the kernel representation (3.230) of the operator:
= F[h]() .
(3.232)
In particular, if:
h(z) 2 e|z| ,
(3.233)
then:
Z
e
|z|
cos(z)dz + i
= 2
e|z| sin(z)dz
R
+
ez cos(z)dz + 0
(3.234)
2 2
.
2 + 2
Sj,j+k r|k| ,
(3.235)
where 0 < r < 1. Show in a figure that the eigenvectors have a Fourier basis structure as in (3.217, AM
2005).
Solution of E 130
R
See the MATLAB
script S_Toeplitz.
fX (x)
N
X
fn 1n (x) ,
(3.236)
n=1
where bins 1 , . . . , N are defined as follows. First of all, we define the bins width:
2a
,
N
(3.237)
95
where a is a large enough real number and N is an even larger integer number. Now, consider a
grid of equally spaced points:
1 a + h
..
.
n a + nh
..
.
(3.238)
N 1 a h .
Then for n = 1, . . . , N 1 we define n as the interval of length h that surrounds symmetrically
the point n :
n
h
h
, n +
.
2
2
(3.239)
h
h
a ,a .
2
2
(3.240)
a, a +
This wraps the real line around a circle where the point a coincides with the point a.
As far as the coefficients fn in (3.236) are concerned, for all n = 1, . . . , N they are defined as
follows:
fn
1
h
Z
f (x)dx .
(3.241)
eix fX (x)dx .
(3.242)
Z
g(x)1n (x)dx g(a + nh) ,
(3.243)
X ()
N
X
n=1
N
X
n=1
eix 1n (x)dx
fn
R
i(a+nh)
fn he
N
X
n=1
(3.244)
2i
N
fn he
a N
( 2
n)
,
a
r (r 1)
(3.245)
obtaining:
X (r )
N
X
fn he
N
2i
N (r1)(n 2
fn he
2i
N (r1)n
n=1
N
X
ei(r1)
n=1
i(r1)
=e
2i
N (r1)
he
N
X
(3.246)
fn e
2i
N (r1)n
2i
N (r1)
n=1
2
= ei(r1)(1 N ) h
N
X
fn e
2i
N (r1)(n1)
n=1
X (r ) ei(r1) h
N
X
fn e
2i
N (r1)(n1)
(3.247)
n=1
pr (f )
N
X
fn e
2i
N (r1)(n1)
(3.248)
n=1
Its inverse, the inverse discrete Fourier transform (IDFT), is the matrix operation p 7 f which is
defined component-wise as follows:
fn (p)
N
2i
1 X
pr e N (r1)(n1) .
N r=1
(3.249)
Comparing (3.247) with (3.248) we see that the approximate cf is a simple multiplicative function
of the DFT of the discretized pdf f :
X (r ) ei(r1) hpr (fX ) .
(3.250)
(3.251)
97
where X1 , . . . , XT are i.i.d. copies of X. The cf of Y satisfies the identity Y TX , see (3.64,
AM 2005). Therefore:
Y (r ) ei(r1)T hT (pr (fX ))T .
(3.252)
On the other hand, from (3.250), the relation between the cf Y and the discrete pdf fY is:
Y (r ) ei(r1) h pr (fY ) ,
(3.253)
(3.254)
Therefore:
The values pr (fY ) can now be fed into the IDFT (3.249) to yield the discretized pdf fY of Y as
defined in (3.251).
XT + = m(1 e ) + e XT + T, ,
(3.255)
2
T, N 0, (1 e2 ) .
2
(3.256)
XT + XT + m + T, ,
(3.257)
where:
If 0 we can write:
where:
T, N(0, 2 ) ,
R
which is the standard arithmetic Brownian motion. See the MATLAB
script
for the implementation.
(3.258)
S_AutocorrelatedProcess
Solution of E 134
The invariants are the shocks in the volatility, which also directly drive the randomness of the process. NoR
tice that these invariants are not directly measurable. See the MATLAB
script S_VolatilityClustering.
Pt
Pt1
Yt Pt Pt1
2
Pt
Zt
Pt1
Xt
(3.259)
(3.260)
(3.261)
(3.262)
Determine which among Xt , Yt , Zt , Wt , can potentially be an invariant and which certainly cannot
be an invariant, by computing the histogram from two sub-samples and by plotting the locationdispersion ellipsoid of a variable with its lagged value.
Solution of E 135
R
The Wt s are clearly not invariants. See the MATLAB
script S_EquitiesInvariants.
(k)
Xt
ln(Pt+k Pt ) ,
(3.263)
where Pt are the prices at time t of the securities, see (3.11, AM 2005);
Assume a diagonal-vech GARCH(1,1) process for the one-period compounded returns:
(1)
Xt
=+
p
U
Ht t .
(3.264)
In this expression S denotes the upper triangular Cholesky decomposition of the generic symmetric and positive matrix S, t are normal invariants:
99
t N(0, I) ,
(3.265)
(3.266)
(1)
t (Xt )(Xt )0 ;
(3.267)
(T )
X0
T
X
(1)
Xt
= T +
t=1
T p
X
U
Ht t .
(3.268)
t=1
X1 = +
p
U
H1 1 .
(3.269)
where the matrix H1 is an outcome of the above estimation step. Then for each scenario we update
the next-step scatter matrix H2 according to (3.266). Next, we generate J independent scenarios for 2
from the multivariate distribution (3.265) and we generate return scenarios for the second-period returns
(1)
X2 according to (3.264). We proceed iteratively until the scenarios for all the entries in (3.268) have
been generated. Then the linear return distribution follows from the pricing align R = eX 1. See the
R
MATLAB
script S_ProjectNPriceMvGarch for the implementation.
Ct,e N(0, 2 e) ,
(3.270)
where 2 0.4. Assume that the stock currently trades at the price PT 1. Fix a generic horizon .
R
Write a MATLAB
script in which you compute and plot the analytical pdf of the price PT + .
Solution of E 137
As in (3.74, AM 2005):
Ct, N(0, 2 ) .
(3.271)
(3.272)
R
The pdf follows from (1.95, AM 2005). See the MATLAB
script S_LinVsLogReturn for the implementation.
Solution of E 138
The first order Taylor approximation reads:
PT + = PT eCT +, PT (1 + CT +, ) = PT + PT CT +, .
(3.273)
(3.274)
R
See the MATLAB
script S_EquityProjectionPricing for the implementation.
Solution of E 139
Changes in the yield curve and changes in the logarithm of the yield curve are approximately invariants,
R
script
whereas the changes in the yield to maturity of a specific bond are not. See the MATLAB
S_FixedIncomeInvariants.
()
Yt
N 0,
20 + 1.25
10, 000
2 !
,
(3.275)
where denotes the generic time to maturity (measuring time in years) and is one week. Restrict your
attention to bonds with times to maturity 1, 5, 10, 52 and 520 weeks, and assume that the current yield
R
curve, as defined in (3.30, AM 2005) is flat at 4%. Write a MATLAB
script in which you:
101
Produce joint simulations of the five bond prices at the investment horizon of one week;
Determine what are the analytical marginal distributions of the five bond prices at the investment
horizon of one week?
Produce joint simulations of the five bond linear returns from today to the investment horizon of
one week;
Determine what are the analytical marginal distributions of the five bond linear returns at the investment horizon of one week?
Comment on why the return on the price of a bond cannot be an invariant.
Hint.
Since the market is fully codependent you will only need one uniformly generated sample;
You will need the quantile function to generate simulations. Compute the quantile function using
interp1, the linear interpolation/extrapolation of the cdf.
For a generic bond with time to maturity from the decision date T the expiry date is E T + .
As in (3.81, AM 2005) the price at the investment horizon of that bond reads:
(T +)
ZT +
(T + )
= ZT
exp(( ) Y ( ) ) .
(3.276)
In other words, the price is determined by the market invariant, a random variable, and the known
price of a different bond with shorter time to maturity.
The distribution of the 4-week-to-maturity bond at the 4-week-horizon is degenerate, i.e. its pdf is
the Dirac delta, because the outcome is deterministic. Make sure that your outcome is consistent
with this statement.
Solution of E 140
From (3.276) the bond price reads:
(T +)
ZT +
= eX ,
(3.277)
where:
(T + )
X ln(ZT
) ( ) Y ( ) .
(3.278)
From (3.275):
Y () N(0, 2 ) ,
(3.279)
where:
20 + 1.25
10, 000
2
.
(3.280)
Therefore:
(T + )
X N(ln(ZT
which with (3.277) implies:
2
), ( )2
),
(3.281)
(T +)
ZT +
(T + )
LogN(ln(ZT
2
), ( )2
).
(3.282)
From (3.10, AM 2005) the linear return from the current time T to the investment horizon T + of a
bond that matures at E T + is defined as:
(T +)
(T +)
LT +,
ZT +
(T +)
1.
(3.283)
ZT
Proceeding as above:
LT +, = eY 1 ,
(3.284)
where:
(T + )
Y ln(ZT
(T +)
) ln(ZT
) ( ) Y ( ) ,
(3.285)
and thus:
(T +)
(T + )
1 + LT +, LogN(ln(ZT
(T +)
) ln(ZT
2
), ( )2
),
(3.286)
(T +)
or LT +, is a shifted lognormal random variable. Notice that for the above distribution is
degenerate, i.e. deterministic, whereas any estimation would have yielded a non-degenerate distribution.
R
script S_BondProjectionPricingNormal for the implementation.
See the MATLAB
(3.287)
8,
0,
2
5
20 + 104 .
4
(3.288)
Consider bonds with current times to maturity 4, 5, 10, 52 and 520 weeks, and assume that the current
R
yield curve, as defined in (3.30, AM 2005) in is flat at 4% (measuring time in years). Write a MATLAB
script in which you:
R
Use the MATLAB
function ProjectionStudentT that takes as inputs the estimation parameters
of the t-distributed invariants and the horizon-to-estimation ratio /e
to compute the cdf of the
invariants at the investment horizon . You do not need to know how this function works. Make
sure you properly compute the necessary inputs (see hints below).
103
Use the cdf obtained above to generate a joint simulation of the bond prices at the investment
horizon of four weeks.
Plot the histogram of the linear returns LT +, of each bond over the investment horizon, where
the linear return is defined consistently with (3.10, AM 2005) as follows:
(E)
Lt,
Zt
(E)
1.
(3.289)
Zt
Notice that the long-maturity (long duration) bonds are much more volatile than the short maturity
(short duration) bonds.
Hint. Suppose today is November 1st, 2006, and we hold a zero-coupon bond that matures in 10 weeks,
i.e., it matures on Jan.15. 2007. We are interested in the value of the bond after 4 weeks, i.e., Dec.1 2006.
Recall from (3.30, AM 2005) that the value of a zero-coupon bond is fully determined by its yield to
maturity. At the 4-week investment horizon (Dec.1 2006) our originally 10-week bond will be a 6-week
bond. Therefore its price will be fully determined by the value of the 6-week yield to maturity on Dec.1
2006. For instance, if the 6-week yield to maturity on Dec.1 2006 is 4.1%, then in (3.30, AM 2005) we
(6/52)
have 6/52 and Y12/1/2006 0.041. Therefore the bond price on Dec.1 2006 will be:
1/15/2007
6 (6/52)
Y
)
52 12/1/2006
6
= e 52 0.041 0.99528 .
Z12/1/2006 = exp(
(3.290)
To summarize, in order to price the 10-week bond at the 4-week investment horizon we need the distribution of the 6-week yield to maturity on Dec.1 2006. In order to proceed, we recall that in the zero-coupon
bond world, the invariants are the non-overlapping changes in yield to maturity, for any yield to maturity,
see the textbook from pp.109 to pp.113. In particular, from (3.31, AM 2005), the following four random
variables are i.i.d.:
(6/52)
(6/52)
X1 Y11/08/2006 Y11/1/2006
X2
(6/52)
Y11/15/2006
X3
(6/52)
Y11/22/2006
X4
(6/52)
Y12/1/2006
(3.291)
(6/52)
Y11/08/2006
(3.292)
(6/52)
Y11/15/2006
(3.293)
(6/52)
Y11/22/2006
(3.294)
(6/52)
Notice that we can express the random variable Y12/1/2006 in (3.290) as follows:
(6/52)
(6/52)
Y12/1/2006 = Y11/1/2006 + X1 + X2 + X3 + X4 .
(3.295)
(6/52)
(3.296)
12/15/2006
where in the last row we used (a, AM 2005)gain. The term Z11/1/2006 is the current value of a 6-week
zero-coupon bond, which is known. Indeed, using the information that the curve is currently flat at 4%
we obtain:
12/15/2006
(6/52)
(3.297)
We are left with the problem of projecting the invariant, i.e. computing the distribution of X1 + X2 +
X3 + X4 , and pricing it, i.e. computing the distribution of e6/52(X1 +X2 +X3 +X4 ) in (3.296). To project
the invariant we need to compute the distribution of the sum four independent t variables:
d
2
X1 = X2 = X3 = X4 St(, , 6/52
),
(3.298)
where the parameters follow from (3.287). This is the FFT algorithm provided in ProjectionStudentT. The
pricing is then performed by Monte Carlo simulations. As for the linear returns, these are the return on
our bond over the investment horizon. Therefore (3.289) reads:
(1/15/2007)
4
L12/1/2006, 52
Z12/1/2006
(1/15/2007)
1.
(3.299)
Z11/1/2006
Solution of E 141
R
See the MATLAB
script S_BondProjectionPricingStudentT.
b t +b
b + BZ
Zt+1 a
t+1 ,
(3.300)
105
Upload the time series of the underlying and the implied volatility surface provided in DB_ImplVol;
Fit a joint normal distribution to the weekly invariants, namely the log-changes in the underlying and the residuals from a vector autoregression of order one in the log-changes in the implied
volatilities surface t :
ln St+ ln St
ln t+ ln t
N( , ) ;
(3.301)
Generate simulations for the invariants and jointly project underlying and implied volatility surface
to the investment horizon;
Price the above simulations through the full Black-Scholes formula at the investment horizon, assuming a constant risk-free rate at 4%;
Compute the joint distribution of the linear returns of the call options, as represented by the simulations: the current prices of the options can be obtained similarly to the prices at the horizon by
assuming that the current values of underlying and implied volatilities are the last observations in
the database;
For each call option, plot the histogram of its distribution at the horizon and the scatter-plot of its
distribution against the underlying;
Verify what happens as the investment horizon shifts further in the future.
Hint. You need to interpolate the surface at the proper strike and time to maturity, which at the horizon
has shortened.
Solution of E 143
R
See the MATLAB
script S_CallsProjectionPricing.
Chapter 4
(4.1)
(,)C
where the set of constraints C imposes that is symmetric and positive definite and that the average
Mahalanobis distance is one, see (4.49, AM 2005).
Solution of E 145
p
From (A.77, AM 2005) the volume of the ellipsoid E, is proportional to ||. Therefore, defining
1 , the optimization problem (4.48, AM 2005) (in log) becomes:
b argmin ln || ,
(b
, )
(4.2)
(,)C
C1 :
T
1X
(xt )0 (xt ) = 1
T t=1
C2 :
(4.3)
symmetric, positive .
We solve neglecting C2 and we check later that C2 is satisfied. The Lagrangian reads:
"
#
T
1X
0
L ln ||
(xt ) (xt ) 1 .
T t=1
The first order condition with respect to is:
106
(4.4)
0N 1 =
T
L
2X
=
(xt ) ,
T t=1
107
(4.5)
T
1X
b.
xt E
T t=1
(4.6)
As for the first order condition with respect to , we see from (A.125, AM 2005) that if A is symmetric,
then the following identity holds:
ln |A|
= A1 .
A
(4.7)
Therefore:
0N N =
T
X
L
= 1
(xt )(xt )0 ,
T t=1
(4.8)
b satisfies:
from which we see that the optimal
b =
T
X
(xt )(xt )0
T t=1
!1
1 d 1
Cov .
(4.9)
from which = N .
(4.10)
E 146 Ordinary least squares estimator of the regression factor loadings (www.4.1)
b in (4.52, AM 2005) provides the best fit to
Show that the ordinary least squares (OLS) estimator B
the observations, in the sense that it minimizes the sum of the square distances between the original
b t:
observations ft and the recovered values Bf
X
2
b = argmin
B
kxt Bft k ,
(4.11)
B
T
X
b t )f 0 .
(xt Bf
t
(4.12)
t=1
The solution to this set of equations are the OLS factor loadings (4.52, AM 2005).
T
s=1 ws
T
X
b = 1
b ) (xt
b )0 ,
wt (xt
T t=1
(4.13)
(4.14)
and:
0
(4.15)
with 1 .
Hint.
Mt2
= 2 (xt )
Mt2
0
= (xt ) (xt ) .
(4.16)
(4.17)
Solution of E 147
b we have to maximize the likelihood function (4.66, AM 2005)
b and
To compute the ML estimators
over the following parameters set:
RN {symmetric, positive N N matrices} .
(4.18)
First of all it is equivalent, though easier, to maximize the logarithm of the likelihood function. Secondly
we neglect the constraint that and lie in and verify ex-post that the unconstrained solution belongs
to . Third, it is easier to compute the ML estimators of and . The ML estimator of is simply the
inverse of the estimator of by the invariance property (4.70, AM 2005) of the ML estimators.
From (4.74, AM 2005) the log-likelihood reads:
ln (f (iT )) =
T
X
ln f (xt ) =
t=1
T
X
T
ln || +
ln g Mt2 .
2
t=1
(4.19)
0N 1
" T
#
T
T
X
X
g 0 Mt2 Mt2
X
2
=
ln g Mt
=
=
wt (xt ) ,
t=1
g (Mt2 )
t=1
t=1
109
(4.20)
T
s=1 ws
(4.21)
0N N
T
T ln || X g 0 Mt2 Mt2
+
=
2
g (Mt2 )
t=1
T
(4.22)
T
1X
0
= 1
wt (xt ) (xt ) ,
2
2 t=1
where in the last row we used (4.17) and the fact that from (A.125, AM 2005) for a symmetric matrix
we have:
ln ||
= 1 .
(4.23)
T
X
b
b 1 = 1
b ) (xt
b )0 .
wt (xt
T t=1
(4.24)
This matrix is symmetric and positive definite, and thus the unconstrained optimization is correct.
E 148 Maximum likelihood estimation for univariate elliptical distributions (see E 147)
Consider the same setup than in E 147 but assume now that X El(, 2 , g), where we know the functional form of g. Compute the maximum likelihood (ML) estimators
b and
b2 of and 2 respectively.
Hint. Define:
Mt2 2 (xt )2 ,
(4.25)
and use:
Mt2
= 2 2 (xt )
Mt2
= (xt )2 .
2
(4.26)
(4.27)
Solution of E 148
To compute the ML estimators
b and
b2 we have to maximize the likelihood function as in (4.66, AM
2005), which after (4.74, AM 2005) reads:
(b
,
b ) argmax
T
X
ln
, 2 t=1
1
g
2
(xt )2
2
,
(4.28)
where the parameter set is R R+ . We neglect the constraint that 2 be positive and verify ex-post
that the unconstrained solution satisfies this condition. It is easier to compute the ML estimators of
and 2 1/ 2 . The ML estimator of 2 is simply the inverse of the estimator of 2 by the invariance
property (1.70, AM 2005) of the ML estimators. The log-likelihood reads:
ln(f (iT )) =
T
T 2 X
ln g(Mt2 ) .
ln +
2
t=1
(4.29)
X
X
2
0=
[ln(f (iT ))] =
ln f (xt ) =
ln g(Mt )
t=1
t=1
=
T
X
g 0 (M 2 ) M 2
t
t=1
g(Mt2 )
T
X
(4.30)
wt 2 (xt ) ,
t=1
(4.31)
PT
wt xt
b = Pt=1
.
T
s=1 ws
(4.32)
wt 2
The solution to this equations is:
T 1
1X
wt (xt )2 ,
2 2
2 t=1
where in the last row we used (4.27). Thus the solution to (4.33) reads:
(4.33)
b2
T
1
1X
=
wt (xt
b)2 .
c2 )
T t=1
(
111
(4.34)
E 149 Maximum likelihood estimator of explicit factors under conditional elliptical distribution (www.4.2)
Show that the maximum likelihood (ML) estimator of B and of the explicit factor model (4.86, AM
2005) under the assumption that the conditional distribution of the perturbations is elliptical:
Ut |ft El(0, , g) .
(4.35)
b =
B
" T
X
wt xt ft0
#" T
X
t=1
#1
wt ft ft0
(4.36)
t=1
T
X
b )(xt Bf
b )0 .
b = 1
wt (xt Bf
T t=1
(4.37)
Solution of E 149
From the property (2.270, AM 2005) of elliptical distribution this implies that the conditional distribution
of the invariants is elliptical with the same density generator:
Xt |ft El (Bft , , g) .
(4.38)
b and ,
b we define 1 and we maximize
To compute the maximum likelihood (ML) estimators B
the log-likelihood function:
ln (f (iT ))
T
X
T
ln g Mt2 ,
ln || +
2
t=1
(4.39)
where:
0
(4.40)
t=1
T
X
t=1
g (Mt2 ) B
wt (xt Bft ) ft0 ,
(4.41)
where:
g 0 Mt2
wt 2
.
g (Mt2 )
(4.42)
B=
" T
X
wt xt ft0
#" T
X
t=1
#1
wt ft ft0
(4.43)
t=1
0N N
T
T ln || X g 0 Mt2 Mt2
=
+
2
g (Mt2 )
t=1
(4.44)
1X
T
0
wt (xt ) (xt ) ,
= 1
2
2 t=1
where in the last row we used (4.17) and the fact that from (A.125, AM 2005) for a symmetric matrix
we have:
ln ||
= 1 .
(4.45)
T
X
b
b 1 = 1
b ) (xt
b )0 .
wt (xt
T t=1
(4.46)
This matrix is symmetric and positive definite, and thus the unconstrained optimization is correct.
Xt
Ft
N
X
F
,
2
X
X F
X F
F2
.
(4.47)
(4.48)
Ut |ft N(0, 2 ) .
(4.49)
where:
113
What is the conditional model (4.48)-(4.49) ensuing from (4.47)? Consider the conditional model (4.48)b given the observations
(4.49) for the invariants. Compute the ML estimators of the factor loadings (b
, )
iT {x1 , f1 , . . . , xT , fT }.
Hint. Consider ft0 (1, ft ) and:
T
X
b XF 1
xt ft0 ,
T t=1
T
X
bF 1
ft f 0 .
T t=1 t
(4.50)
Solution of E 150
See (2.173, AM 2005) and/or (3.130, AM 2005)-(3.131, AM 2005) to derive:
X
X
F
F
(4.51)
X
F
2
2
X (1 2 ) .
(4.52)
(4.53)
(4.54)
E 151 Explicit factors: maximum likelihood estimator of the factor loadings (see E 150)
Consider the same setup than in E 150. Compute the joint distribution of the ML estimators of the factor
b under the conditional model (4.48)-(4.49).
loadings (b
, )
Solution of E 151
Follow the proof of (4.129, AM 2005) to derive in terms of a (degenerate) matrix-valued normal distribution:
2
b N (, ), ,
b 1 .
(b
, )
F
T
(4.55)
b
b
N
,
2 b 1
T F
.
(4.56)
E 152 Explicit factors: maximum likelihood estimator of the dispersion parameter (see E 150)
Consider the same setup than in E 150. Compute the distribution of the ML estimator
b2 of the dispersion
parameter that appears in (4.49).
Solution of E 152
Follow the proof of (4.130, AM 2005) and use (2.230, AM 2005) to derive:
T
b2 Ga(T 2, 2 ) .
(4.57)
(b
)
T 2p
,
b2
b2
(4.58)
b
and the distribution of the t-statistic for :
b
t
(b )
T 2q
,
b2
b2
(4.59)
b 1 and
b2 is its south-east entry.
where
b2 is the north-west entry of
F
Solution of E 153
From (4.57), (1.106, AM 2005) and (1.109, AM 2005) it follows:
(T 2)
T
b2
T 2 2
2T 2 .
(4.60)
T
(b
) N(0, 1) ,
b2 2
(4.61)
b and
and similarly for . Furthermore, from E 155 we derive that (b
, )
b2 are independent. Using
(4.100), we obtain:
b
t
q
(b
)
T 2p
=
b2
b2
2 2 (b
T
b2
T 2 2
(4.62)
(4.63)
115
(4.64)
for an arbitrary value 0 in (4.48), typically 0 0. How can you asses if the hypothesis (4.64) is
acceptable?
Solution of E 154
First, compute the distribution of (4.58) under (4.64). Then compute the realization e
t0 of (4.58). In the
notation (1.87, AM 2005) we obtain:
St
t0 e
t0 = F,0,1
(e
t0 )
P b
St
P b
t0 e
t0 = 1 F,0,1 (e
t0 ) .
(4.65)
(4.66)
Therefore, if t is so small or so large that either probabilities are too small, then (4.64) is very unlikely.
E 155 Independence of the sample mean and the sample covariance (www.4.3) *
Assume that Xt N(, ). Prove that the sample mean (4.100, AM 2005) and sample covariance
(4.101, AM 2005) are independent of each other.
Solution of E 155
Consider the following variables:
T
1X
b
Xt
T t=1
b
U1 X 1
..
.
b.
UT X T
(4.67)
b } reads:
The joint characteristic function of {U1 , . . . , UT ,
U1 ,...,UT ,b ( 1 , . . . , T , )
n PT
o
0
0
= E ei( t=1 t Ut + b )
n PT
o
PT
0
0 1 PT
1
= E ei[ t=1 t (Xt T s=1 Xs )+ ( T t=1 Xt )]
n PT
o
PT
0
1
= E ei( t=1 (t + T T s=1 s ) Xt ) .
(4.68)
From the independence of the invariants we can factor the characteristic function as follows:
T
Y
t=1
n
1
E ei(t + T T
PT
s=1 s ) Xt
T
Y
t=1
Xt
T
1X
t
s +
T s=1
T
!
.
(4.69)
Xt () = ei 2 .
(4.70)
Therefore:
= ei
PT
t=1
0 ( t T1
PT
t=1
PT
s=1
12 ( t T1
s + T
)
PT
s=1
s + T
) (t T1
PT
s=1
s + T
)
(4.71)
.
1X
= 0
s +
t
T
T
s=1
t=1
!0
T
T
X
1X
t
s = 0 .
T
T
t=1
s=1
T
X
(4.72)
(4.73)
Therefore the joint characteristic function factors into the following product:
U1 ,...,UT ,b ( 1 , . . . , T , ) = ( 1 , . . . , T ) ( ) ,
(4.74)
where
( 1 , . . . , T ) e
21 ( t T1
( ) ei
1
2T
PT
s=1
s ) ( t T1
PT
s=1
s )
(4.75)
(4.76)
Ut U0t ,
T t=1
b.
is independent of
(4.77)
b N ,
.
T
(4.78)
117
Solution of E 156
Recall that the characteristic function of the multivariate random normal variable Xt is given by:
0
Xt () E{eiXt } = ei 2 .
b
Now, using the definition of the sample mean
1 PT
E{ei T
t=1
Xt
} = E{ei
Xt
t=1 T
PT
PT
t=1
(4.79)
Xt , we have:
} = eiT
1
0
0
T 2 T T2
= ei 2
(4.80)
where the second equality follows from independence. This expression is the characteristic function of a
random normal variable with mean and covariance /T .
(4.81)
T
1X
b
Xt .
T t=1
(4.82)
X1
..
. N
XT
.. ,
. 0
..
0
..
.
(4.83)
(4.84)
b N ,
.
T
(4.85)
Regarding P {b
>
e} we have:
(
P {b
>
e} = 1 P {b
e} = 1 P
e
p
p
2 /T
2 /T
)
.
(4.86)
b
p
N(0, 1) ,
2 /T
(4.87)
therefore:
P {b
>
e} = 1
e
p
2 /T
!
,
(4.88)
(4.89)
2 02 .
(4.90)
The p-value of
b for
e under the hypothesis (4.90) is the probability of observing a value as extreme as
the observed value:
p P {b
e} .
(4.91)
Compute the expression of the p-value in terms of the cdf of the estimator.
Solution of E 158
From (4.102, AM 2005):
2
b N ,
.
T
(4.92)
e} = FN0 ,2 /T (e
)
(4.93)
or
p P {b
e} = 1 P {b
e} = 1 FN0 ,2 /T (e
) .
0
(4.94)
119
(4.95)
b 0
b
.
t0 p
b2 /(T 1)
(4.96)
(4.97)
for an arbitrary value 0 in (4.95). How can you asses if the hypothesis (4.97) is acceptable?
Hint. Recall that if Y2 and Z are independent and such that
Y2 N(0, 2 )
(4.98)
Z2
(4.99)
then:
Y 2
X,2 p St(, 0, 2 ) .
Z2
(4.100)
Solution of E 159
From (4.103, AM 2005), (1.106, AM 2005) and (1.109, AM 2005) it follows:
(T 1)
T
b2
T 1 2
2T 1 .
(4.101)
T
(b
) N(0, 1) .
2
(4.102)
b
b
t p
=
2
b /(T 1)
Therefore:
1
T
(b
) q
2
T
b2
(T 1) 2
=q
Y1
ZT2 1
(4.103)
b
t St(T 1, 0, 1) .
(4.104)
Finally, to test = 0 , first, compute the distribution of (4.96) under (4.97). Then compute the realization
e
t0 of (4.96). In the notation (1.87, AM 2005) we obtain:
P b
t0 e
t0 = FTSt1,0,1 (e
t0 )
St
P b
t0 e
t0 = 1 FT 1,0,1 (e
t0 ) .
(4.105)
(4.106)
Therefore, if t is so small or so large that either probabilities are too small, then (4.97) is very unlikely.
(4.107)
Solution of E 160
First of all we notice that the estimator of the covariance of Xt is the same as the estimator of the
covariance of Yt Xt + b for any b. Indeed defining:
T
1X
Yt ,
T t=1
(4.108)
b
where
1
T
PT
t=1
(4.109)
Xt and thus:
T
X
bY 1
b )(Yt
b )0
(Yt
T t=1
T
1X
(Xt + b (b
+ b))(Xt + b (b
+ b))0
T t=1
1
T
T
X
(4.110)
b )(Xt
b )0
(Xt
t=1
bX .
b + T
b
b0 =
W T
T
X
t=1
Xt X0t ,
(4.111)
121
where the last equality follows from substitution of the definitions (4.67) and (4.77). From the indepenb and
b (see E 155) the characteristic function of W must be the product of the characteristic
dence of
b and the characteristic function of T
b
b 0 . Therefore:
function of T
T
b () =
W ()
.
T b b 0 ()
(4.112)
On the one hand from the normal hypothesis N(0, ) and (2.223, AM 2005) we obtain that W is Wishart
distributed:
W W(T, ) ,
(4.113)
W () =
1
T /2
|I 2i|
(4.114)
b
b 0 reads:
On the other hand, the characteristic function of T
n
o
0
T b b 0 () E ei tr([T b b ])
n
o
0
= E ei tr(b b [T ])
= b b 0 (T ) =
(4.115)
1
|I 2i|
1/2
T
b () =
1
(T 1)/2
|I 2i|
(4.116)
(4.117)
b N 0,
,
T
(4.118)
(b
)(b
)0 W 1,
.
T
(4.119)
(4.120)
1
=
T
1
tr( ) + 1
T
2
[tr()]
.
(4.121)
Solution of E 162
From (4.103, AM 2005) we have:
n o
n
o
b = 1 E T
b = T 1 = 1 .
E
T
T
T
(4.122)
(4.123)
=E
m,n
mn
nm
i2
X h
b
=
E
mn
m,n
n
o
b mn mn )2
E (
Xh
m,n
b mn , mn )
Err2, (
m,n
m,n
i
b mn , mn ) + Inef2 (
b mn ) ,
Bias2, (
,
(4.124)
123
X 1
T 1
T 1 2
2
mm nn
mn
T 2 mn
T2
T2
m,n
1X 2
T 1X
mn +
mm nn
T m,n
T 2 m,n
1
1
2
=
tr(2 ) + 1
[tr()]
.
T
T
=
(4.125)
(4.126)
Show that estimator of the factor loading (4.126, AM 2005) and covariance matrix (4.127, AM 2005)
have the following distributions:
b 1
b N B, ,
B
F
T
b W(T K, ) ,
T
(4.127)
(4.128)
b and
b are independent to each other.
and that B
Note. In the spirit of explicit factors models, the dependent variables Xt are random variables, whereas
the factors ft are considered observed numbers. In other words, we derive all the distributions conditioned
on knowledge of the factors.
Solution of E 163
First of all, a comment on the notation to follow: we will denote here 1 , 2 , 3 , 4 simple normalization
constants. We derive here the joint distribution of the sample factor loadings:
b
b XF
b 1 ,
B
F
(4.129)
where:
T
1X
b
XF
xt ft0 ,
T t=1
T
1X 0
b
F
ft f ,
T t=1 t
(4.130)
(xt Bf
T t=1
(4.131)
Notice that the invariants Xt are random variables, whereas the factors ft are not. From the normal
hypothesis (4.126) the joint pdf of the time series IT {x1 , . . . , xT |f1 , . . . , fT } in terms of the factor
loadings B and the dispersion parameter 1 reads:
T
f (iT ) = 1 || 2 e 2
0
t=1 (xt Bft ) (xt Bft )
PT
= 1 || 2 e 2 tr{
PT
}.
(4.132)
(4.133)
where:
T
X
(xt Bft )(xt Bft )0
t=1
T h
ih
i0
X
b t ) + (Bf
b t Bft ) (xt Bf
b t ) + (Bf
b t Bft )
(xt Bf
t=1
T
T
X
X
0
b
b
b
(xt Bft )(xt Bft ) + (B B)
ft ft
=
t=1
!
b B)0
(B
(4.134)
t=1
T
T
X
X
b t )(Bf
b t Bft )0 +
b t Bft )(xt Bf
b t )0
+
(xt Bf
(Bf
t=1
t=1
b + (B
b B)T
b F (B
b B)0 + 0 + 0 .
= T
In this expression the last terms vanish since:
T
T
T
X
X
X
b t )(Bf
b t Bft )0 =
b0 +
b t f 0 B0
Bf
(xt Bf
xt ft0 B
t
t=1
t=1
t=1
T
X
xt ft0 B0
t=1
T
X
b tf 0B
b0
Bf
t
t=1
b XF B
b 0 + BT
b
b F B0 T
b XF B0 T B
b
bFB
b0
= T
(4.135)
b XF
b 1
b0 + T
b XF B0
= T
F
XF
b XF B0 T
b XF
b 1
b0
T
F
XF
= 0.
Substituting (4.134) in the curly brackets (4.133) in (4.132) we obtain:
T
(4.136)
125
b T )f
b (B,
b T )
b ,
f (iT ) = f (iT |B,
(4.137)
1
T KN
2
b T )
b 2
b
f (iT |B,
,
(4.138)
b T )
b = f (B)f
b (T )
b ,
f (B,
(4.139)
N2
K
0
b
b F (BB)
b
}
b 3 |T | 2
b F e 12 tr{(T )(BB)
f (B)
(4.140)
where:
b T )
b factors as follows:
and f (B,
where:
and:
b 4 ||
f (T )
T K
2
1
T KN
1
2
b
b
e 2 tr(T ) .
T
(4.141)
Expression (4.140) is of the form (2.182, AM 2005). Therefore the OLS factor loadings have a matrixvalued normal distribution:
b N B, (T )1 ,
b 1 .
B
F
(4.142)
b 1
b
B N B, , F
.
T
(4.143)
Also, expression (4.141) is the pdf (2.224, AM 2005) of a Wishart distribution, and thus:
b W(T K, ) .
T
b and thus ,
b is independent of B.
b
Finally, from the factorization (4.139) we see that T ,
(4.144)
(4.145)
where IN is the N -dimensional identity matrix. Consider a smooth function g of N variables. Prove
Steins lemma in this context, that is:
E {g(X)(Xn n )} = E
g(X)
xn
.
(4.146)
Hint. Use:
Z
G(x)
g(x1 , . . . , x, . . . , xN )
(4.147)
RN 1
(2)
N 1
2
e 2
2
k6=n (xk k )
Solution of E 164
From the definition of expected value for a normal distribution we have:
Z
E {g(X)(Xn n )} =
g(x1 , . . . , xn , . . . , xN )(xn n )
RN
N
(2) 2 e 2 k (xk k ) dx
(xn n )2
Z +
2
e
dxn ,
=
(xn n )G(xn )
2
(4.148)
g(x1 , . . . , x, . . . , xN )
(4.149)
RN 1
(2)
N 1
2
e 2
2
k6=n (xk k )
Notice that:
Z
dG(x)
RN 1
P
2
N 1
1
g(x)
(2) 2 e 2 k6=n (xk k ) dx1 dxn1 dxn+1 dxN .
xn
(4.150)
v e
(4.151)
,
(4.152)
we get:
1
E {g(X)(Xn n )} =
2
1
udv =
2
uv|
vdu .
(4.153)
The first term vanishes. Replacing (4.151) and (4.152) in the second term and using (4.150) we obtain:
127
Z +
(xn n )2
1
2
E {g(X)(Xn n )} =
e
2
Z
P
2
N 1
1
g(x)
1
a
(b
b)0 (b
b)
b+
a
b,
(b
b)0 (b
b)
(4.155)
,
Xt N ,
T t
T
(4.156)
where b is any constant vector and where a is any scalar such that:
0<a<
2
(tr() 21 ) ,
T
(4.157)
where 1 is the largest eigenvalue of the matrix . From the definition (4.134, AM 2005) of error, we
have:
2
0
[Err(, )] = E [ ] [ ]
(
0
)
a(b
b)
a(b
b)
b
b
=E
(b
b)0 (b
b)
(b
b)0 (b
b)
(b
b)0 (b
)
1
2
2a
E
.
= [Err(b
, )] + a2 E
(b
b)0 (b
b)
(b
b)0 (b
b)
(4.158)
We proceed now to simplify the expression of the last expectation in (4.158). Consider the principal
component decomposition (A.70, AM 2005) of the matrix in (4.156):
EE0 ,
(4.159)
b N(, I) ,
T 2 E0
(4.160)
together with:
T 2 E0 ,
T 2 E0 b .
(4.161)
Then the term in curly brackets in the last expectation in (4.158) reads:
h
i0 h
i
12 0
12 0
T
E
(b
b)
T
E
(b
(b
b) (b
)
= h
i0 h
i
1
1
(b
b)0 (b
b)
T 2 E0 (b
b) T 2 E0 (b
b)
0
=
=
(Y c)0 (Y )
(Y c)0 (Y c)
N
X
(4.162)
gj (Y)(Yj j ) ,
j=1
where:
gj (y)
(yj cj )j
.
(y c)0 (y c)
(4.163)
(y
j
c)0 (y
c)
22j (yj cj )2
(4.164)
Therefore, using Steins lemma (see E 164), we obtain for the last expectation in (4.158):
E
(b
b)0 (b
)
(b
b)0 (b
b)
=
N
X
E {gj (Y)(Yj j )}
j=1
N
X
gj (Y)
yj
j=1
N
X
j=1
(
=E
(4.165)
tr()
(Y c)0 (Y c)
2
2
0
(Y c) (Y c)
[(Y c)0 (Y c)]
)
.
129
T 2 E0 (b
b) ,
(4.166)
(b
b)0 (b
)
(b
b)0 (b
b)
b)
tr()
1
1 (b
b)E 2 2 E0 (b
=E
2
0
T (b
b) (b
b) T
[(b
b)0 (b
b)]
2
N
2 (b
b)(b
b)
=E
,
(b
b)0 (b
b)
T
T (b
b)0 (b
b)
)
(4.167)
where:
tr()
N
(4.168)
is the average of the eigenvalues. From the relation (A.68, AM 2005) on the largest eigenvalue 1 of
we obtain:
(b
b)(b
b)
1 .
0
(b
b) (b
b)
(4.169)
Therefore, substituting (4.167) in (4.158), using (4.169) and recalling (4.157) we obtain the following
relation for the error:
2
N
4 (b
b)(b
b)
a2 +
T
T (b
b)0 (b
b)
)
(
a(a T2 (N 21 ))
2
[Err(b
, )] + E
(b
b)0 (b
b)
2
[Err(, )] = [Err(b
, )] + E
a
(b
b)0 (b
b)
(4.170)
[Err(b
, )] .
In particular, the lowest upper bound is reached at:
a
2
(N 21 ) .
T
(4.171)
(4.172)
R
Write a MATLAB
script in which you generate a time series of T 30 observations from (4.172) and
compute the shrinkage estimator of location (4.138, AM 2005).
Solution of E 166
R
See the MATLAB
script S_ShrinkageEstimators.
(4.173)
T T N
1
IT 1T 10T
T
XT N ,
(4.174)
where XT N is the matrix of past observations, I is the identity matrix and 1 is a vector of ones. From
this expression and the property (A.22, AM 2005) of the rank operator we obtain:
1
1
0
0
b
rank() min rank IT 1T 1T , rank(XT N ) rank IT 1T 1T = T 1 < T .
T
T
(4.175)
b < N , and therefore N T eigenvalues are null.
If T N , we have that rank()
Note. You do not need to superimpose the true spectrum as in the figure.
Hint. Determine a grid of values for the number of observations T in the time series. For each value of T :
a) Generate an i.i.d. time series iT {x1 , . . . , xT } from X N(, );
b
b) Compute the sample covariance ;
b and store the sample eigenvalues (i.e. the sample spectrum);
c) Perform the PC decomposition of
d) Perform a)-c) a large enough number of times ( 100 times);
e) Compute the average sample spectrum.
Solution of E 168
R
See the MATLAB
script S_EigenvalueDispersion.
S
b
N
b
N ()
S >
.
b
b
1 ()
1
131
(4.176)
First we notice that the highest eigenvalue of the shrinkage estimator satisfies:
S
b
b .
1
< 1 ()
(4.177)
To show this, we first prove that for arbitrary positive symmetric matrices A and B and positive number
and we have:
1 (A + B) 1 (A) + 1 (B) .
(4.178)
This is true because from (A.68, AM 2005) the largest eigenvalue of a matrix A satisfies:
1 (A + B) = max
v0 (A + B)v
0
v v=1
max
v0 Av + max
v0 Bv
0
0
v v=1
v v=1
(4.179)
= 1 (A) + 1 (B) .
Therefore, from (4.160, AM 2005):
S
b
b + C
b
1
1 (1 )
b + 1 (C)
b
(1 )1 ()
h
i
b 1 ()
b 1 (C)
b < 1 ()
b ,
= 1 ()
(4.180)
(4.181)
(4.182)
which follows from the above argument and the reverse identities (A.69, AM 2005) and
N
X
b < N (C)
b 1
b .
N ()
n ()
N n=1
(4.183)
Solution of E 170
R
See the MATLAB
script S_ShrinkageEstimators.
e [h])h(x)dx ,
(x,
(4.184)
RN
where h is a generic function. We consider the function h (1 )fX + (y) . Deriving in zero (4.184)
with respect to we obtain:
Z
h
i
d
e [h ]) (1 )fX (x) + (y) (x) dx
(x,
0=
d =0 RN
Z
Z
h
i
e [h ]
(x, )
d
e [fX ]) fX (x) + (y) (x) dx
=
f
(x)dx
+
(x,
X
(4.185)
[f
d
e X]
RN
RN
=0
Z
e [h ]
(x, )
d
e [fX ]) .
=
f
(x)dx
+ (y,
[f
d
e X]
RN
=0
On the other hand, from the definition (4.185, AM 2005) of the influence function we have:
e [h ]
1 e
d
b
e
IF(y, fX , ) lim
[h ] [h0 ] =
0
d
(4.186)
=0
Therefore:
"Z
b =
IF(y, fX , )
RN
#1
(x, )
e [fX ]) .
f (x)dx
(y,
[f
e X]
(4.187)
(4.188)
133
Therefore from its definition (4.185, AM 2005) the influence function reads:
i
1 h
G (1 )fX + (y) G [fX ] ,
0
b lim
IF(y, fX , G)
(4.189)
where y is an arbitrary point. Now consider the function h (1 )fX + (y) . The influence function
can be written:
1
dG [h ]
.
(G [h ] G [h0 ]) =
0
d =0
b lim
IF(y, fX , G)
(4.190)
b , which reads:
Consider the functional associated with the sample mean
Z
e [h]
xh(x)dx .
(4.191)
RN
(4.192)
First we compute:
Z
e [h ]
xh (x)dx
N
ZR
x (1 )fX (x) + (y) (x) dx
RN
Z
= (1 )
xfX (x)dx + y
(4.193)
RN
= E {X} + ( E {X} + y) .
From this and (4.192) we derive:
b ) = E {X} + y .
IF(y, fX ,
(4.194)
(4.195)
i
1 h
G (1 )fX + (y) G [fX ] ,
0
b lim
IF(y, fX , G)
(4.196)
(4.197)
1
dG [h ]
b
.
IF(y, fX , G) lim (G [h ] G [h0 ]) =
0
d =0
(4.198)
e [h]
xh(x)dx .
(4.199)
(4.200)
First we compute:
Z
e [h ]
xh (x)dx
ZR
=
x (1 )f (x) + (y) (x) dx
R
Z
= (1 ) xf (x)dx + y
(4.201)
= E {X} + ( E {X} + y) .
From this and (4.200) we derive:
IF(y, f,
b) = E {X} + y .
(4.202)
(4.203)
135
Therefore from its definition (4.185, AM 2005) the influence function reads:
i
h
b lim 1 G (1 )fX + (y) G [fX ] ,
IF(y, fX , G)
0
(4.204)
where y is an arbitrary point. Now consider the function h (1 )fX + (y) . The influence function
can be written:
1
dG [h ]
b
IF(y, fX , G) lim (G [h ] G [h0 ]) =
.
0
d =0
(4.205)
b which reads:
Consider now the functional associated with the sample covariance ,
Z
e [h]
RN
e [h])(x
e [h])0 h(x)dx .
(x
(4.206)
1
e [h ]
e [h0 ]) =
b lim (
IF(y, fX , )
0
d
(4.207)
=0
First we compute:
Z
e [h ]
RN
e [h ])(x
e [h ])0 h (x)dx
(x
e [h ])(x
e [h ])0 (1 )fX (x) + (y) (x) dx
(x
RN
Z
e [h ])(x
e [h ])0 fX (x)dx + (y
e [h ])(y
e [h ])0 .
= (1 )
(x
(4.208)
RN
(4.209)
Z
b = Cov {X}
IF(y, fX , )
2
RN
de
[h ]
(x E {X})fX (x)dx
d =0
(4.210)
+ (y E {X})(y E {X}) .
Now using (4.194) we obtain:
Z
b = Cov {X} 2
IF(y, fX , )
RN
+ (y E {X})(y E {X})
(4.211)
(x E {X})0 fX (x)dx
Z
E {X}
(x E {X})0 fX (x)dx
RN
RN
(4.212)
RN
=0
Therefore:
b = Cov {X} + (y E {X})(y E {X})0 .
IF(y, fX , )
(4.213)
e [h]
(x
e [h])2 h(x)dx .
(4.214)
(4.215)
First we compute:
Z
e [h ]
(x
e [h ])2 h (x)dx
(x
e [h ])2 ((1 )fX (x) + (y) (x))dx
R
Z
= (1 ) (x
e [h ])2 fX (x)dx + (y
e [h ])2 .
=
(4.216)
137
(4.217)
Using
e [h0 ] = E {X} this means:
IF(y, fX ,
b2 ) = Var {X}
Z
2
R
2
de
[h ]
(x E {X})fX (x)dx
d =0
+ (y E {X}) .
(4.218)
(4.219)
IF(y, fX ,
b ) = Var {X} 2
(4.220)
+ (y E {X})2 .
The term in the middle is null. Therefore:
IF(y, f,
b2 ) = Var {X} + (y E {X})2 .
(4.221)
xf 0 fZ (z)dz
Z
ff 0 fZ (z)dz
1
.
(4.222)
Consider a point w (e
x, e
f ). We have:
h
i
G fZ + ( (w) fZ ) = (A + B)(C + D)1 ,
where:
(4.223)
Z
A
Z
B
Z
C
Z
D
xf 0 fZ dz = E {XF0 }
(4.224)
ee
xf 0 ( (w) fZ )dz = x
f 0 E {XF0 }
(4.225)
ff 0 fZ dz = E {FF0 }
(4.226)
ff 0 ( (w) fZ )dz = e
fe
f 0 E {FF0 } .
(4.227)
Since:
(C + D)1 = (C(I + C1 D))1
= (I + C1 D)1 C1
(I C
D)C
(4.228)
we have:
h
i
G fZ + ( (w) fZ ) (A + B)(C1 C1 DC1 )
AC1 + (BC1 AC1 DC1 ) .
(4.229)
Therefore:
h
i
b = lim 1 G fZ + ( (w) fZ ) G [fZ ]
IF(y, fX , B)
0
= (BC1 AC1 DC1 )
h
i
1
ee
= x
f 0 E {XF0 } E {FF0 }
h
i
1 ee0
1
E {XF0 } E {FF0 }
f f E {FF0 } E {FF0 }
(4.230)
1
= (e
xe
f 0 Be
fe
f 0 ) E {FF0 } .
(4.231)
n
o
(u)
t,n E Xt,n |xt,obs(t) , (u) , (u) .
139
(4.232)
t,obs(t) = xt,obs(t)
(4.233)
(u)
(u)
(u)
(u)
(4.234)
St
n
o
E Xt X0t |xt,obs(t) , (u) , (u)
(4.235)
(4.236)
n
o
(u)
St,mis(t),obs(t) E Xt,mis(t) X0t,obs(t) |xt,obs(t) , (u) , (u)
n
o
= E Xt,mis(t) |xt,obs(t) , (u) , (u) x0t,obs(t)
h
ih
i0
(u)
(u)
(u)
= t,mis(t) x0t,obs(t) = t,mis(t) t,obs(t) ,
(4.237)
and:
n
o
(u)
St,mis(t),mis(t) E Xt,mis(t) X0t,mis(t) |xt,obs(t) , (u) , (u)
n
o n
o0
= E Xt,mis(t) |xt,obs(t) , (u) , (u) E Xt,mis(t) |xt,obs(t) , (u) , (u)
n
o
+ Cov Xt,mis(t) |xt,obs(t) , (u) , (u)
h
ih
i0
(u)
(u)
(u)
(u)
(u)
(u)
= t,mis(t) t,mis(t) + mis(t),mis(t) mis(t),obs(t) (obs(t),obs(t) )1 obs(t),mis(t) .
(4.238)
In other words, defining the matrix C as:
(u)
Ct,obs(t),mis(t) 0 ,
and otherwise:
(u)
Ct,obs(t),obs(t) 0 ,
(4.239)
(u)
(u)
(u)
(u)
(u)
(4.240)
we can write:
(u)
St
h
ih
i0
(u)
(u)
(u)
= t
t
+ Ct .
(4.241)
Now we can update the estimate of the unconditional first moment as the sample mean of the conditional
first moments:
(u+1)
T
1 X (u)
x .
T t=1 t
(4.242)
Similarly we can update the estimate of the unconditional second moment as the sample mean of the
conditional second moments:
S(u+1)
T
1 X (u)
S .
T t=1 t
(4.243)
(4.244)
(u+1)
T
i
1 X h (u)
(u)
(u)
Ct + (xt (u) )(xt (u) )0 .
T t=1
(4.245)
Xt N(, ) ,
(4.246)
T
i
1 X h (u)
(u)
(u)
Ct + (xt (u) )(xt (u) )0 .
T t=1
(4.247)
141
(4.248)
(4.249)
2
LogN(Z , Z
).
(4.250)
V Y + (1 )Z ,
(4.251)
Y N(Y , Y2 )
(4.252)
fZ
Consider the variable:
2
LogN(Z , Z
).
(4.253)
Determine if (4.248) is the pdf of (4.251)? If not, how do you compute the pdf of (4.251)?
Solution of E 179
Formula (4.248) is not the pdf of (4.251). You can see this in simulation. Alternatively, you can prove it
by showing that the moments of X and the moments of V are different. For instance, denote:
s2Y E Y 2 ,
s2Z E Z 2 .
(4.254)
Then:
E X2
u2 fX (u)du
Z
Z
2
= u fY (u)du + (1 ) u2 fZ (u)du
(4.255)
= s2Y + (1 )s2Z .
On the other hand:
E V 2 E (Y + (1 )Z)2
= E 2 Y 2 + 2(1 )Y Z + (1 )2 Z 2
= 2 E Y 2 + 2(1 ) E {Y } E {Z} + (1 )2 E Z 2
2
= 2 s2Y + Y + eZ +Z /2 + (1 )2 s2Z .
(4.256)
Therefore (4.248) is not the pdf of (4.251). However, (4.248) is the pdf of a random variable, defined in
distribution as follows:
d
X BY + (1 B)Z ,
(4.257)
B Ber() ,
(4.258)
1
0
with probability
with probability 1 .
(4.259)
(4.260)
where (s) is the Dirac delta centered in s. When B = 1 in (4.257) the variable X will be normal as in
(4.252), when B = 0 the variable X will be lognormal as in (4.253). Therefore, the pdf of X conditioned
on B reads:
fX|B (x|B = 0) = fZ (x) ,
(4.261)
This two-step method gives rise to the pdf (4.248). To see this, as in (2.22, AM 2005) the pdf of X can
be written as the marginalization of the joint pdf of X and B:
Z
fX (x) =
(4.262)
As in (2.43, AM 2005) the joint pdf of X and B can be written as the product of the conditional and the
marginal:
fX,B (x, b) = fX|B (x|b)fB (b) .
(4.263)
Therefore:
Z
fX (x) =
h
i
fX|B (x|b) (1) (b) + (1 ) (0) (b) db
Z
Z
(1)
= fX|B (x|b) (b)db + (1 ) fX|B (x|b) (0) (b)db
=
(4.264)
143
As for the pdf of (4.251), it can be obtained as follows. First we use (1.13) with (1.67, AM 2005) and
(1.95, AM 2005) to compute the pdf of Y and (1 )Z:
1
(x/ Y )2
p
exp
2Y2
2Y2
1
(ln(x/(1 )) Z )2
f(1)Z (x) = p
exp
.
2
2
2Z
x 2Z
fY (x) =
(4.265)
(4.266)
Then we compute the characteristic functions of Y and (1 )Z as in (1.14, AM 2005) as the Fourier
transform of the respective pdfs:
(1)Z = F f(1)Z .
Y = F [fY ] ,
(4.267)
(4.268)
= Y ()(1)Z ()
= F [fY ] ()F f(1)Z () .
Using (B.45, AM 2005) we can express the characteristic function of V in terms of the convolution (B.43,
AM 2005) of the pdfs and the Fourier transform:
V () = F fY f(1)Z () .
(4.269)
Then we compute the pdf of V as in (1.15, AM 2005) as the inverse Fourier transform of the characteristic
function:
fV = F 1 [V ] .
(4.270)
(4.271)
(4.272)
Solution of E 180
Z
G[fX ] (x2 x)fX (x)dx
R
Z
Z
2
= (x x)fY (x)dx + (1 ) (x2 x)fZ (x)dx
R
(4.273)
(4.274)
(4.275)
T
X
bb 1
G
xt
T t=1
(4.276)
bc 5 .
G
(4.277)
R
script in which you evaluate the performance of the three estimators above with
Write a MATLAB
R
respect to (4.272) as in the MATLAB
script S_Estimator by assuming:
0.8,
Y 0.2 ,
Z 0,
Z 0.15 ,
(4.278)
Y 0.2 ,
Z 0,
Z 0.15 ,
(4.279)
R
b d with respect to (4.272) as in the MATLAB
and evaluate the performance of G
script S_EstimateExpectedValueEvaluation
by stress-testing the parameter Y in the range [0, 0.2].
Solution of E 182
The non-parametric estimator of:
Z
G[fX ]
R
(4.280)
145
x2 fiT (x)dx
xfiT (x)dx .
(4.281)
xfiT (x)dx =
R
T
1X
xt .
T t=1
(4.282)
(4.283)
(4.284)
T
1X
(xt m)
b 2.
T t=1
(4.285)
Therefore:
bd = n
G
cs m
b = sb2 + m
b2 m
b.
R
script S_EstimateMomentsComboEvaluation for the implementation.
See the MATLAB
(4.286)
Solution of E 183
R
See the MATLAB
script S_EstimateQuantileEvaluation.
Hint. Use the built-in cdfs that correspond to (4.249) and (4.250).
Y 0.2 ,
Z 0,
Z 0.15 .
(4.287)
Set Y 0.1 and generate a sample of T 52 i.i.d. observations from the distribution (4.248).
R
Hint. Feed a uniform sample into the MATLAB
function QuantileMixture.
Solution of E 184
R
See the MATLAB
function QuantileMixture and the script S_GenerateMixtureSample.
(4.288)
where I [] is the integration operator and p 0.5. Notice that the above is simply the quantile with
confidence p, see (1.8, AM 2005) and (1.17, AM 2005):
G[fX ] QX (p) .
(4.289)
Compute the non-parametric estimator qbp of (4.288) defined by (4.36, AM 2005) in Meucci (2005). Write
R
script in which you assume:
a MATLAB
0.8,
Y 0.2 ,
Z 0,
Z 0.15 .
and evaluate the performance of qbp with respect to (4.288) as in the script
the parameter Y in the range [0, 0.2].
S_Estimator
(4.290)
by stress-testing
R
function QuantileMixture.
Hint. Use the MATLAB
Solution of E 185
From (4.39, AM 2005) in Meucci (2005), the non-parametric estimator of the median is the sample
median (1.130, AM 2005):
b e x[T /2]:T .
G
R
See the MATLAB
script S_EstimateQuantileEvaluation for the implementation.
(4.291)
(
fX f
Ca
f,
2
LogN
f,(0.01)2
147
(4.292)
Hint. Approximate the continuum [0.04, 0.01] with a fine set of equally spaced points; evaluate the
(log-)likelihood for every value of .
Solution of E 186
R
See the MATLAB
script S_MaximumLikelihood.
(4.293)
Therefore, the ML estimator of the quantile is the functional applied to the ML-estimated distribution:
h
i
qbpM L qp fbM L .
(4.294)
On the other hand, as in (4.36, AM 2005) the non-parametric quantile is the functional applied to the
empirical pdf:
qbpN P qp [fiT ] .
Solution of E 187
R
See the MATLAB
script S_MaximumLikelihood.
(4.295)
Solution of E 188
First we have to compute the generator g that appears in the weighting function (4.79, AM 2005). Under
the Student t assumption the pdf is (2.188, AM 2005). Thus, as in (2.188, AM 2005) the generator reads:
g(z)
( +N
2 )
N
( 2 )() 2
1+
z
+N
2
(4.296)
g 0 (z)
+N
=
.
g(z)
+z
(4.297)
+N
.
b 1 (x
b )0
b)
+ (xt
(4.298)
R
For the implementation, see the MATLAB
function MleRecursionForStudentT and the script S_FitSwapToStudentT.
N
1 X (n )
h
,
N n=1
(4.299)
(4.300)
where is the Dirac delta (B.18, AM 2005). Notice that, since (4.299) is random, so is the function
(4.300). According to random matrix theory, in some topology the following limit for the random function
h holds
lim h = g ,
(4.301)
R
Write a MATLAB
script that shows (4.301) when the distribution fX is standard normal, shifted/rescaled
normal, and shifted/rescaled exponential.
g()
Hint. Choose a large N and simulate (4.299) once. This is a realization of (4.299). Compute the realized
eigenvalues and the respective realization of h defined in (4.300). Approximate h with a histogram. Show
that the histogram looks similar to g defined in (4.302).
149
1 0
X X.
T
(4.303)
N
1 X (n )
,
N n=1
(4.304)
where is the Dirac delta (B.18, AM 2005). Notice that, since (4.303) is random, so is the function
(4.304). According to random matrix theory, in some topology the following limit for the random function
h holds:
lim
N qT
h = gq ,
(4.305)
gq ()
1
2q
q
(q )( q ) ,
(4.306)
q)2 ,
q (1 +
q)2 .
(4.307)
R
Write a MATLAB
script that shows (4.306) when the distribution fX is standard normal, shifted/rescaled
normal, and shifted/rescaled exponential.
Solution of E 190
R
See the MATLAB
script S_PasturMarchenko.
Ft Ga(F , F2 ) ,
(4.308)
and the copula is the copula of the diagonal entries of Wishart distribution:
Wt W(W , W ) .
(4.309)
Consider the coefficients that define the regression line (3.127, AM 2005):
et + Ft .
X
(4.310)
E 192 Maximum likelihood vs. non-parametric estimators of regression parameters (see E 191)
b the maximum-likelihood estimators of the regression
Consider the same setup than in E 191. Are (b
, )
coefficients?
Solution of E 192
No, because the regression model ensuing from (4.308)-(4.309) is not conditionally normal as in (4.48)(4.49).
b
G
(b
)
T 2p
,
b2
b2
(4.311)
b
G
(b )
T 2q
;
b2
b2
(4.312)
Compare the empirical distribution of (4.311) with the analytical distribution of (4.58) as well as
the empirical distribution of (4.312) with the analytical distribution of (4.59) and comment.
Solution of E 193
R
See the MATLAB
script S_TStatApprox. The distribution of (4.311) is very similar to that of (4.58) even
for relatively small values of T . The same holds for the distribution of (4.312) as compared to that of
(4.59).
Chapter 5
Evaluating allocations
E 194 Gamma approximation of the investors objective (www.5.1) **
Determine the characteristic function of the approximate objective (5.25, AM 2005).
Solution of E 194
Consider the generic second-order approximation (3.108, AM 2005) for the N prices of the securities in
terms of the underlying K-dimensional market invariants X, which we report here:
(n)
PT + g (n) (0) + X0 x g (n)
1
2
+ X0 xx
g (n)
X,
2
x=0
x=0
(5.1)
where n = 1, . . . , N . From (5.11, AM 2005) the market is an invertible affine transformation of the
prices, i.e.:
(n)
an +
N
X
Bnm g
(m)
(0) +
m=1
N
X
Bnm X0 x g (m)
m=1
x=0
(5.2)
N
1 X
2
Bnm X0 xx
g (m)
X.
+
2 m=1
x=0
In turn, from (5.10, AM 2005) the objective is a linear combination of the market:
N
X
n M (n)
n=1
N
X
n an +
n=1
N
X
n=1
N
X
n=1
N
X
N
X
m=1
Bnm
X0 x g (m)
m=1
(5.3)
i
x=0
N
N
h
i
X
1X
2
n
Bnm X0 xx
g (m)
X ... .
2 n=1
x=0
m=1
151
In other words:
1
+ 0 X + X0 X ,
2
(5.4)
where:
N
X
n an +
n=1
N
X
N
X
n Bnm x g (m)
n,m=1
(5.5)
n,m=1
N
X
(5.6)
x=0
2
n Bnm xx
g (m)
x=0
n,m=1
(5.7)
Assume now that the K-dimensional invariants X are normally distributed as in (5.29, AM 2005). Then
we can compute explicitly the characteristic function of the approximate objective (5.4). Defining:
Z X N(0, ) ,
(5.8)
1
= + 0 ( + Z) + ( + Z)0 ( + Z)
2
1
1
0
0
= + + Z + 0 + 0 Z + Z0 Z
2
2
1 0
0
= b + w Z + Z Z ,
2
(5.9)
where:
1
b + 0 + 0
2
w + .
(5.10)
(5.11)
(5.12)
(5.13)
153
C BE ,
(5.14)
(5.15)
C0 w .
(5.16)
Finally we define:
In these terms and dropping the dependence on from the notation, (5.9) becomes:
1
= b + w0 CC1 Z + Z0 (C0 )1 C0 CC1 Z
2
1
0
0 0
= b + Y + YE B BEY
2
1
0
= b + Y + YE0 EE0 EY
2
K
X
1
=b+
(k Yk + k Yk2 ) .
2
(5.17)
k=1
() E e
h P
i
k 2
i b+ K
k=1 (k yk + 2 yk )
f (y)dy ,
(5.18)
RN
where f is the standard normal density (2.156, AM 2005), which factors into the product of the marginal
densities:
K
Y
f (y) =
2e 2 yk .
(5.19)
k=1
() = eib
K
Y
G(k , k ) ,
(5.20)
k=1
where:
1
G(, )
2
Since:
y
2
ei[y+ 2 y ] e 2 dy .
(5.21)
1 i 2
1 i
iy
y =
2
2
y2
i
1i
2
"
#2
1 i
i
=
y 1i
2
2 2
!2
1 i
i
+
2
2 1i
2
2
()2
i
1 i
y
,
=
2
1 i
2(1 i)
(5.22)
we obtain:
1
G(, ) =
2
r
1i
2
i
[y 1i
]
dy
()2
1
e 2(1i)
1 i
()
2(1i)
1
q
1i
2
i
[y 1i
] dy
2
1i
(5.23)
2 2
1
e 2(1i) .
1 i
Substituting this back into (5.20) we finally obtain the expression of the characteristic function:
() = qQ
K
eib
k=1 (1
21
2 2
k
k=1 (1ik )
PK
ik )
= |IK i|
12
(5.24)
12 0 (Ii)1 2
eib e
(5.25)
= |IK i| .
On the other hand, substituting (5.16) and (5.14) we obtain:
0 (IK i)1 = w0 C(IK i)1 C0 w
= w0 BE(IK i)1 E0 B0 w
0
0 1
= w B(E(IK i)E )
Substituting again (5.13) and (5.12) this reads:
B w.
(5.26)
155
(5.27)
= w0 (B0 iB0 )1 B0 w
= w0 (IK i)1 (B0 )1 B0 w
= w0 (IK i)1 w .
Therefore, substituting (5.25) and (5.27) in (5.24) we obtain the characteristic function of the approximate
objective:
21
() = |IK i|
eib e 2 w (IK i)
w 2
(5.28)
21
ei(+ + 2 )
e 2 (+) (IK i)
(+) 2
(5.29)
.
() = v 2 eu ,
(5.30)
1
u() ib w0 (I iV)1 w
2
v() |I iV| ,
(5.31)
where:
and:
w +
V .
(5.32)
To show how this works we explicitly compute the first three derivatives. It is easy to implement this
R
approach systematically up to any order with by programming a software package such as Mathematica
.
The first three derivatives of the characteristic function read:
1
1 3
0 () = v 2 v 0 eu + v 2 eu u0
2
3
1
1
3 5
1 3
00 () = v 2 (v 0 )2 eu v 2 v 00 eu v 2 v 0 eu u0 + v 2 eu (u0 )2 + v 2 eu u00
4
2
15 7 0 3 u 3 5 0 00 u 3 5 0 2 u 0
000
v 2 (v ) e + v 2 2v v e + v 2 (v ) e u
() =
8
4
4
3 5 0 00 u 1 3 000 u 1 3 00 u 0
+ v 2v v e v 2v e v 2v e u
4
2
2
3
3
3
3 5 0 2 u 0
+ v 2 (v ) e u v 2 v 00 eu u0 v 2 v 0 eu (u0 )2 v 2 v 0 eu u00
2
1
1
1 3
v 2 v 0 eu (u0 )2 + v 2 eu (u0 )3 + 2v 2 eu u0 u00
2
1
1
1 3
v 2 v 0 eu u00 + v 2 eu u0 u00 + v 2 eu u000 .
2
(5.33)
These expressions depend on the first three derivatives of u and v, which we obtain by applying the
following generic rules that apply for any conformable matrices M, A, B:
dM() 1
dM1 ()
= M1
M
d
d
d(AM()B)
d(M())
=A
B
d
d
d |A + B|
= |A + B| tr (A + B)1 B ,
d
(5.34)
(5.35)
(5.36)
where (5.34) follows from (A.126, AM 2005), (5.35) follows from a term by term expression of the
product AMB and (5.36) follows from (A.124, AM 2005). Using these formulas in (5.31) we obtain:
1
u() ib w0 (I iV)1 w
2
1
u0 () = ib w0 (I iV)1 [iV] (I iV)1 w
2
00
0
u () = w (I iV)1 [iV] (I iV)1 [iV] (I iV)1 w
u000 () = 3w0 (I iV)1 [iV] (I iV)1 [iV] (I iV)1 [iV] (I iV)1 w ,
(5.37)
and:
157
v() |I iV|
v()0 = |I + iV| tr (I + iV)1 (iV)
v 00 () = |I + iV| tr (I + iV)1 (iV) tr (I + iV)1 (iV)
+ |I + iV| tr (I iV)1 (iV)(I iV)1 (iV)
v 000 () = |I + iV| (tr (I + iV)1 (iV) )3
+ 3 |I + iV| tr (I + iV)1 (iV)
tr (I iV)1 (iV)(I iV)1 (iV)
+ |I + iV| tr (I iV)1 [iV] (I iV)1 (iV)(I iV)1 (iV)
+ (I iV)1 (iV)(I iV)1 [iV] (I iV)1 (iV) .
(5.38)
(5.39)
u000 = i3w0 V3 w ,
and:
v1
v 0 = i tr(V)
(5.40)
v 00 = [tr(V)] + tr(V2 )
3
1
1 3
E { } =
=i
v 2 v 0 eu + v 2 eu u0
2
0
1
1
1
0
2 w w
=e
(b + w Vw) tr(V) .
2
2
i1 0 (0)
(5.41)
Finally, the explicit dependence on allocation comes from substituting in the final expression (5.10),
(5.11) and (5.32).
E 196 Estimability and sensibility imply consistence with weak dominance (www.5.2)
Show that estimability and sensibility imply consistence with weak dominance.
Solution of E 196
Assume that weakly dominates . From the definition (5.36, AM 2005), this means that for all
u (0, 1) the following inequality holds:
Q (u) Q (u) .
(5.42)
(5.43)
From the definition of estimability (5.52, AM 2005), the index must be a function of the distribution of
the objective, as represented, say, by the cdf:
S() = G [F ] .
(5.44)
From (2.27, AM 2005) the random variable X defined below has the same distribution as the objective:
d
X Q (U ) = ,
U U([0, 1]) .
(5.45)
R.
(5.46)
(5.47)
(5.48)
On the other hand, from (5.42) in all scenarios X X , i.e. X strongly dominates X . Therefore,
from the sensibility of S we must have:
G [FX ] G FX .
(5.49)
(5.50)
159
Solution of E 197
Assume that an index of satisfaction is translation invariant:
g 1 S( + g) = S() + ,
(5.51)
for all 0 .
(5.52)
Then it displays the constancy feature. Indeed assume that B is a deterministic allocation B B then:
b
= b S
+
+
S( + b) = S b
b
b
b
b
= b S
+ = S b
+ b = S() + b .
b
b
(5.53)
(5.54)
In particular:
(5.55)
E {u()} =
u()f ()d =
u(Q (s))ds .
(5.56)
(5.57)
Then:
E {u( )} E {u( )} .
(5.58)
(5.59)
(5.60)
(5.61)
(5.62)
(5.63)
(5.64)
y f (y)dy =
1 y
y f
dy =
u1 (z) = z ,
(5.66)
161
Solution of E 200
From its definition:
CE() u1 (E {u( )}) ,
(5.68)
(5.69)
(5.70)
(5.71)
(5.72)
Z
ey f+ (y)dy = ey f (y )dy
Z
= e e f ()d = e E {u()} .
E {u( + )} =
(5.73)
On the other hand the inverse of the exponential utility function reads:
u1 (z) =
ln(z)
.
(5.74)
Therefore:
1
ln( [E {u( + )}])
1
= ln(e [ E {u()}])
1
= [ln([ E {u()}])]
= + u1 (E {u()}) .
u1 (E {u( + )}) =
(5.75)
E {f } = 0 E {u(b )} E {u(b+f )} .
(5.76)
(5.77)
On the other hand, Jensens inequality states that for any random variable the following is true if and
only if u is concave:
u(E {}) E {u()} .
(5.78)
(5.79)
Note. A similar solution links convexity of the utility function with risk propensity and linearity of the
utility function with risk neutrality.
E {f } = 0 ,
(5.80)
we obtain:
Var {f } 00
u (b ) .
2
(5.81)
On the other hand from the definition of risk premium (5.85, AM 2005), which we report here:
RP(b, f ) CE(b) CE(b + f ) .
and the constancy of the certainty-equivalent:
(5.82)
163
CE(b) = b ,
(5.83)
CE(b + f ) = b RP(b, f ) ,
(5.84)
we obtain:
(5.85)
RP(b, f )
u00 (b ) Var {f }
.
u0 (b )
2
(5.86)
(5.87)
Taking expectations and pivoting the expansion around the objectives expected value
e E {} ,
(5.88)
we obtain:
u00 (E {})
Var {} ,
2
(5.89)
where the term in the first derivative cancels out. On the other hand, another Taylor expansion yields:
u1 (z + ) u1 (z) +
1
.
u0 (u1 (z))
(5.90)
u00 (E {})
Var {}
2u0 (u1 (u(E {})))
u00
= E {} + 0 (E {}) Var {} ,
2u
(5.91)
CE() E { }
A(E { })
Var { } .
2
(5.92)
(5.93)
RN
= E { ( )u0 ( )} .
From this and the chain rule we obtain:
CE() = u1 (E {u( )})
du1
(E {u( )}) E {u( )}
dz
1
= 0 1
E {u( )}
u (u (E {u( )}))
E {u0 ( ) ( )}
=
.
u0 (CE())
(5.94)
For example consider the case where the objective are the net gains:
0 (PT + pT ) .
(5.95)
(5.96)
(5.97)
165
Solution of E 205
We have:
PT + N(, ) .
(5.98)
pT .
(5.99)
.
(5.100)
u0 () =
1
2 2
2
e
.
(5.101)
o
1
2 n ( 2
(0 M)2 )
E e
M
r
1 Z
0 0
0 1
1
1
2 || 2
me 2 m ( )m e 2 (m) (m) dm
=
N
(2) 2 RN
r
1 Z
1
2 || 2
=
me 2 D(m) dm ,
N
(2) 2 RN
E {u0 ( ) ( )} =
(5.102)
where:
D(m) m
m + (m )0 1 (m )
(5.103)
= (m )0 1 (m ) + 0 1 0 1 ,
with:
0
+ 1
1
1 ,
0
+ 1
1
.
(5.104)
E {u0 ( ) ( )}
u0 (CE())
Z
1
1
|| 2 CE()2
2
=
me 2 D(m) dm
N e
N
2
(2)
R
CE() =
12
||
(2)
N
2
N
2
CE()2 12 [ 0 1 0 1 ] (2)
e
21
||
m
RN
||
12
(2)
N
2
e 2 (m)
(m)
dm
= () ,
(5.105)
where from (5.104) we obtain:
12
()
||
12
e 2 CE() e 2 [
1
1 0 1 ]
||
12
||
CE()
=
e
12 e 2
1
1
0
+
h
i
1
21 0 (1 1 [ 1 0 +1 ] 1 )
(5.106)
.
1
0 + 1
1
1 ( pT ) .
(5.107)
Z
{u ( )0 ( )} =
u ( )0 ( )fM (m)dm
RN
Z
2
00
=
fM (m) u0 ( )
0 ( ) + u ( ) ( )0 ( )] dm
N
R
2
00
= E u0 ( )
.
0 ( ) + u ( ) ( )0 ( )
From this, (5.94) and the chain rule of calculus we obtain:
(5.108)
167
E {u0 ( )0 ( )}
CE() =
u0 (CE())
1
E {u0 ( )0 ( )}
= 0
u (CE())
1
E {u0 ( )0 ( )}
+ 0
u (CE())
#
"
u00 (CE()) E {u0 ( ) ( )}
E {u0 ( )0 ( )}
=
2
u0 (CE())
[u0 (CE())]
2
00
E u0 ( )
0 ( ) + u ( ) ( )0 ( )
.
+
u0 (CE())
(5.109)
(5.110)
we obtain:
0 CE() =
(5.111)
where:
wE
u0 (0 M)
M
.
u0 (CE())
(5.112)
The denominator in (5.111) is always positive. On the other hand, the numerator in (5.111) can take
on any sign, depending on the local curvature of the utility function. Therefore the convexity of the
certainty-equivalent is not determined.
2 , 22
P2 N
(5.113)
.
Consider the case where the objective is final wealth. Consider an exponential utility function:
(5.114)
u () a be ,
(5.115)
where b > 0. Compute analytically the certainty equivalent as a function of a generic allocation vector
(1 , 2 ). What is the effect of a and b?
Solution of E 208
Consider the utility function (5.115). As in (5.92, AM 2005) expected utility reads:
n o
i
= a b
,
E {u ( )} a b E e
(5.116)
where denotes the characteristic function (1.12, AM 2005) of the objective. The inverse of (5.115) is:
u1 (e
u) = ln
au
e
b
.
(5.117)
(5.118)
as in (5.94, AM 2005). The certainty equivalent is not affected by a and b. In other words, the certainty
equivalent is not affected by positive affine transformations of the utility function.
To compute the certainty equivalent as a function of the allocation vector we recall that lognormal and
normal copulas are the same, and we notice that normal marginals with a normal copula give rise to a
normal joint distribution:
P N (, ) ,
(5.119)
where:
1
2
,
12
1 2
1 2
22
.
(5.120)
1 0
.
2
(5.121)
169
(b )2
1
e 22 .
2
(5.122)
Fb ; () =
1
2
1
=
2
1
=
2
(xb )2
22
dx
2
!
e
y 2
(5.123)
dy
1 + erf
2
,
where we used the change of variable y (x b )/ 2. The regularized quantile of the objective is
the inverse of the regularized cdf:
Qb ; (s) F1
(s) = b +
b ;
2 erf 1 (2s 1) .
(5.124)
(5.125)
and the approximation becomes exact in the limit 0. Thus the quantile (5.159, AM 2005) satisfies:
b = b Qc (b) Qb (1 c) = b ,
which is the constancy property (5.62, AM 2005) in this context.
(5.126)
(5.127)
(5.128)
On the other hand, the s-quantile Qh(X) (s) of the variable h(X) is defined implicitly by:
P h(X) Qh(X) = s .
(5.129)
Since (5.128) and (5.129) hold for any s we obtain the general result for any increasing function h:
Qh(X) (s) = h(QX (s)) .
(5.130)
(5.131)
Expression (5.131) and the positive homogeneity of the objective (5.16, AM 2005) prove the positive
homogeneity of the quantile-based index of satisfaction:
Qc () Q (1 c) = Q (1 c) = Q (1 c) Qc () .
(5.132)
(5.133)
Expression (5.133) and the additivity of the objective (5.17, AM 2005) prove the translation-invariance
of the quantile-based index of satisfaction:
Qc ( + b) Q+b (1 c) = Q + (1 c) = Q (1 c) + Qc () + .
(5.134)
2
3
(5.135)
171
(5.136)
Expression (5.136) and the additivity of the objective (5.17, AM 2005) prove the additive co-monotonicity
of the quantile-based index of satisfaction:
Qc ( + ) Q+ (1 c) = Q + (1 c)
= Q (1 c) + Q (1 c)
(5.137)
Qc () + Qc () .
(5.138)
where CM3 is the third central moment. Using (1.48) to express the central moments in terms of the raw
moments, we obtain the approximate expression of the quantile of the objective:
Qc () Q (1 c) A() + B()z(1 c) + C()z 2 (1 c) ,
where:
(5.139)
A E { }
B
C
3
E 3 3 E 2 E { } + 2 E { }
2
6(E {2 } E { } )
E {2 } E { }
3
3
E 3 E 2 E { } + 2 E { }
2
6(E {2 } E { } )
(5.140)
(5.141)
(5.142)
Note. To obtain the explicit analytical expression of these coefficients as functions of the allocation we
use the derivatives of the characteristic function of the objective as discussed in E 194.
(5.143)
(5.144)
(5.145)
Defining:
Xn
j Mj ,
j6=n
we see that Q() is defined implicitly as follows in terms of the joint pdf f of (Xn , Mn ):
Z "Z
Q n mn
1 c = P {Xn + an Mn Q} =
(5.146)
Since in general:
g(a)
1
a0 a
dg
g(a)+ da
a
f (x)dx = lim
f (x)dx = f (g(a))
g(a)
dg(a)
,
da
(5.147)
Z
0=
f (Q n Mn , mn )
Q
mn dmn ,
n
(5.148)
173
or:
Q
=
n
m f (Q n Mn , mn )dmn
R n
= E {Mn |Xn = Q() n Mn } .
f (Q n Mn , mn )dmn
(5.149)
(5.150)
(5.151)
2
ij
Q() lim
0
1
[i Q() i Q()] .
(5.152)
(5.153)
Y Mj E {Mj |Z = 0} ,
X Mi ,
(5.154)
we can write:
i Q() E {X|Z + Y = 0} .
(5.155)
i Q() = E {X|Z = 0} .
(5.156)
Consider the joint pdf f of (X, Y, Z). The conditional expectation in (5.155) can be computed in terms
of the conditional pdf, which reads:
f (x, y, y)
.
f (x, y, y)dxdy
(5.157)
f (x, y, 0)dxdy
1 R R
f (x, y, 0)dxdy
Z Z
Z Z
xf (x, y, 0)dxdy
xyz [ln f (x, y, 0)] f (x, y, 0)dxdy
Z Z
Z Z
(5.158)
1
f (x, y, 0)dxdy
RR
yz [ln f (x, y, 0)] f (x, y, 0)dxdy
RR
1+
f (x, y, 0)dxdy
RR
RR
xf (x, y, 0)dxdy
xyz [ln f (x, y, 0)] f (x, y, 0)dxdy
RR
RR
f (x, y, 0)dxdy
f (x, y, 0)dxdy
RR
RR
xf (x, y, 0)dxdy
yz [ln f (x, y, 0)] f (x, y, 0)dxdy
RR
RR
+
.
f (x, y, 0)dxdy
f (x, y, 0)dxdy
Thus:
i Q() E {X|Z = 0} [E {XY z [ln f (X, Y, 0)] |Z = 0}
E {X|Z = 0} E {Y z [ln f (X, Y, 0)] |Z = 0}]
= E {X|Z = 0} [Cov {X, Y z [ln f (X, Y, 0)] |Z = 0}]
= E {X|Z = 0} [Cov {X, Y z [ln f (X, Y |0) + ln fZ (0)] |Z = 0}]
= E {X|Z = 0} [Cov {X, Y z [ln f (X, Y |0)] |Z = 0}
+ z [ln fZ (0)] Cov {X, Y |Z = 0}] .
On the other hand:
(5.159)
175
xf (x, y|z)dxdy
yf (x, y|z)dxdy
Z Z
=
xyz f (x, y|z)dxdy
E {Y |Z = z} z [E {X|Z = z}]
Z Z
E {X|Z = z}
yz f (x, y|z)dxdy
Z Z
=
xyz [ln f (x, y|z)] f (x, y|z)dxdy
(5.160)
E {Y |Z = z} z [E {X|Z = z}]
Z Z
E {X|Z = z}
yz [ln f (x, y|z)] f (x, y|z)dxdy
= E {XY z [ln f (X, Y |z)] |Z = z}
E {X|Z = z} E {Y z [ln f (X, Y |z)] |Z = z}
E {Y |Z = z} z [E {X|Z = z}]
= Cov {X, Y z [ln f (X, Y |z)] |Z = z}
E {Y |Z = z} z [E {X|Z = z}] ,
which shows that:
Cov {X, Y z [ln f (X, Y |z)] |Z = z} = z [Cov {X, Y |Z = z}]
+ E {Y |Z = z} z [E {X|Z = z}] .
(5.161)
(5.162)
(5.163)
Therefore:
i Q() E {X|Z = 0} [z [Cov {X, Y |z = 0}]
+ z [ln fZ (0)] Cov {X, Y |z = 0}] ,
and finally, from (5.156) we obtain:
(5.164)
2
ij
Q() z [Cov {X, Y |z = 0}]
(5.165)
(5.166)
z=Q()
ln f0 M (Q())
(5.167)
2
h, i h, i |h, i|
where:
h, i 0 .
The last row in (5.167) is true because of the Cauchy-Schwartz inequality (A.8, AM 2005).
(5.168)
(5.169)
Write the quantile index Qc () of the objective (5.10, AM 2005) as defined in (5.159, AM 2005) as a
function of the allocation.
177
Solution of E 220
We can represent (5.169) in the notation (2.268, AM 2005) as follows:
U
M El , , gN
,
(5.170)
U
where gN
is provided in (2.263, AM 2005). From (2.270, AM 2005) we obtain:
0 M El 0 , 0 , g1U .
(5.171)
Therefore:
d
= 0 +
0 X ,
(5.172)
where:
X El 0, 1, g1U ,
for some generator
g1U
(5.173)
(5.174)
0 c ,
(5.175)
(5.176)
Qc ()
c
= diag () +
diag () .
(5.177)
Qc ()
N
X
Cn ,
(5.178)
n=1
Qc ()
Qc + (n) Qc ()
(5.179)
where Qc (X) is calculated as in the previous point, (n) is the Kronecker delta (A.15, AM 2005),
and is a small number, as compared with the average size of the entries of ;
Display the result using the built-in plotting function bar;
Use the result above to compute Qc () in a different way, i.e. semi-analytically;
Use the previous results to compute the marginal contributions to Qc () from each security;
Display the result using the built-in plotting function bar.
Hint. You will have to compute the quantile of the standardized univariate generator, use the simulations
generated above.
Solution of E 222
To generate J scenarios from(5.181) you can use the following approach. Consider the uniform distribution on the N -dimensional hypercube:
X U ([1, 1] [1, 1]) .
(5.180)
The entries of X are independent and therefore (5.180) can easily be simulated. Now consider the uniform
distribution on the N -dimensional unit hypersphere:
Y U (E0,I ) .
(5.181)
To generate a sample of size J from (5.181) generate a sample of size Je from (5.180). Then use
d
Y = X/ kXk 1 .
(5.182)
To set the number of simulations Je use (A.78, AM 2005). To generate a sample of size J from (5.169)
apply (2.270, AM 2005) to the sample from (5.181). To generate a sample of size J from (5.181) more efficiently you can proceed as follows (courtesy Xiaoyu Wang, CIMS-NYU). In this function, we represent
(5.181) as in (2.259, AM 2005)-(2.260, AM 2005):
d
Y = RU .
(5.183)
In this expression U is uniform on the surface of the unit sphere and R is a suitable radial distribution
independent of U.
179
U = Z/ kZk ,
(5.184)
Z N (0N 1 , IN N ) ,
(5.185)
where:
this follows from the last expression in (2.260, AM 2005) and the fact that the normal distribution is
elliptical. To generate J scenarios of R, notice that, for a given radius r, the radial density must be
proportional to rN 1 . Indeed, the infinitesimal volume surrounding the surface of the sphere of radius r
is proportional to rN 1 . Therefore, pinning down the normalization constant, we obtain:
fR (r) =
rN 1
.
N 1
(5.186)
(5.187)
(5.188)
R = W 1/N ,
(5.189)
(5.190)
R
where 0.05 and 0.05. Write a MATLAB
script in which you plot the true quantile-based
index of satisfaction Qc () against the Cornish-Fisher approximation (5.179, AM 2005) as a function of
the confidence level c (0, 1).
Solution of E 223
R
See the MATLAB
script S_CornishFisher.
Solution of E 224
We consider weighted averages of the expected shortfall for different confidence levels. From the definition (5.207, AM 2005) of expected shortfall this means:
1
Z
Spc()
1
1c
ESc ()w(c)dc =
0
1c
Z
Q (s)ds w(c)dc ,
(5.191)
where:
1
Z
w(c) 0 ,
w(c)dc = 1 .
(5.192)
Equivalently:
Z
0
w(c)
Q (s)ds
ESc ()w(c)dc =
dc
c
0
0
Z
Z 1
c
w(c)
ds dc
=
Q (s)
c
0
0
Z 1
Z 1
w(c)
=
Q (s)
dc ds
c
0
s
Z 1
=
Q (s)(s)ds ,
Z
Z
(5.193)
where:
1
Z
(s)
s
w(x)
dx .
x
(5.194)
w(s)
,
s
(5.195)
[s(s)] ds =
0
(s)ds +
0
s0 ds =
Z
(s)ds
w(s)ds .
(5.196)
[s(s)] ds = s(s)|0 = 0 .
(5.197)
Therefore:
Z
(s)ds = 1 .
(5.198)
181
(5.199)
(5.200)
Solution of E 225
Thus from the definition of expected shortfall (5.208, AM 2005) for any confidence level ESc () 0.
Since the expected shortfall generates the spectral indices of satisfaction, the satisfaction derived from
any fair game is negative whenever satisfaction is measured with a spectral index.
(5.201)
where CM3 is the third central moment. Using (1.48) to express the central moments in terms of the raw
moments we obtain the approximate expression of the quantile of the objective:
Q (s) Q (s) A() + B()z(s) + C()z 2 (s) ,
(5.202)
where (A, B, C) are defined in (5.181, AM 2005). To obtain the spectral index of satisfaction we apply
(5.202) to its definition (5.223, AM 2005), obtaining:
Z
Spc ()
(s)Q (s)ds
0
Z
A() + B()
Z
(s)z(s)ds + C()
(5.203)
1
2
(s)z (s)ds .
0
Solution of E 227
Define the variable:
Z Qc () .
(5.204)
(5.205)
= P {Z z|Z 0} .
This is the cdf of Z conditioned on Z 0. If the confidence level c is high, from (5.184, AM 2005) this
cdf is approximated by G,v . Thus:
Z
E {Z|Z 0}
z
0
v
dG,v (z)
dz =
,
dz
1
(5.206)
where the last result can be found in Embrechts et al. (1997). On the other hand, from the definition
(5.208, AM 2005) of expected shortfall we derive:
ESc () = E { | Qc ()}
= Qc () + E { Qc ()| Qc ()}
(5.207)
= Qc () E {Z|Z 0} .
Solution of E 228
R
To estimate the parameters and v using the MATLAB
function gpfit proceed as follows. Define the
excess as the following random variable:
Z e | e .
(5.208)
(5.209)
183
where in the last row we used (5.182, AM 2005). From (5.183, AM 2005) we obtain:
FZ (z) G,v (z) .
The function xi_v
variable (5.208).
= gpfit(Excess)
(5.210)
attempts to fit (5.210), where Excess are the realizations of the random
(5.211)
R
2
where 7, 1,
4. Write a MATLAB
script in which you:
Plot the true quantile-based index of satisfaction Qc () for c [0.950, 0.999];
Generate Monte Carlo simulations from (5.211) and superimpose the plot of the sample counterpart
of Qc () for c [0.950, 0.999];
Consider the threshold:
e Q0.95 () ;
(5.212)
Superimpose the plot of the EVT fit (5.186, AM 2005) for c [0.950, 0.999].
Hint. Estimate the parameters and v using the built-in function xi_v
the realizations of the random variable:
= gpfit(Excess),
Z e | e .
(5.213)
(5.214)
1 Le (z) ,
where in the last row we used (5.182, AM 2005). From (5.184, AM 2005) we obtain:
FZ (z) G,v (z) ,
(5.215)
R
which is the expression that the MATLAB
function gpfit attempts to fit.
Solution of E 229
R
See the MATLAB
script S_ExtremeValueTheory.
j M j .
(5.216)
j6=n
(x + n m)IxQc n m (x, m)
=
n
n 1 c
fXn ,Mn (x, m)dxdm]
Z +
Z Qc n m
1
=
(x + n m)fXn ,Mn (x, m)dxdm .
1 c n
(5.217)
Qc ()
m Qc ()fXn ,Mn (Qc () n m, m)dm
n
Z + Z Qc n m
1
+
mfXn ,Mn (x, m)dxdm .
1 c
ESc ()
1
=
n
1c
(5.218)
0=
Z + Z Qc n m
(1 c)
=
fXn ,Mn (x, m)dxdm
n
n
Z +
Qc ()
(
m)fXn ,Mn (Qc () n m, m)dm .
n
(5.219)
Therefore:
Z + Z Qc n m
ESc ()
1
=
mfXn ,Mn (x, m)dxdm
n
1 c
Z Z
1
=
mfXn ,Mn (x, m)dxdm .
1c
Qc
(5.220)
Note. The result for a generic spectral measure follows from (5.191) and the definition (5.195) of the
weights in terms of the spectrum.
185
j M j .
(5.221)
j6=n
2 ESc ()
=
E {M0 |Xn + n Mn Qc ()}
n 0
n
Z
m0 fM (m|Xn + n Mn Qc ())dm .
=
n
(5.222)
R Qc n mn
fXn ,M (x, m)dx
xn +n mn Qc
R Qc n mn
=
(5.223)
m0
fXn ,M (x, m)dxdm
n
Z
1
Qc ()
=
m0
mn fXn ,m (Qc () n mn , m)dm .
1c
n
2 ESc ()
1
=
n 0
1c
"Z
(5.224)
(5.225)
1
2 ESc ()
=
0
n
1c
Z
Qc ()
mn
n
f (Qc ()) Qc ()
E {M0 | = Qc ()}
1c
n
f (Q ())
c
E {Mn M0 | = Qc ()} .
1c
(5.226)
1c
f (Q ())
c
E {MM0 |0 M = Qc ()}
1c
f (Q ())
= c
Cov {M|0 M = Qc ()} .
1c
(5.227)
(5.228)
Write the expected shortfall ESc () defined in (5.207, AM 2005) as a function of the allocation.
Solution of E 232
From (5.228) and (2.195, AM 2005) in we obtain:
0 M St (, 0 , 0 ) ,
(5.229)
or:
d
= 0 +
0 X ,
(5.230)
where:
X St (, 0, 1) .
(5.231)
Z
0
1c
1
Q (s) ds =
1c
Z
0
1c
i
h
0 + 0 QX (s) ds .
(5.232)
187
ESc () = 0 +
0 c ,
(5.233)
QX (s) ds .
(5.234)
where:
1
c
1c
1c
This scalar can be evaluated as the numerical integral of the quantile function of the standard univariate t
distribution.
(5.235)
C diag ()
ESc ()
c
= diag () +
diag () .
(5.236)
ESc ()
N
X
Cn ,
(5.237)
n=1
R
Hint. Use the built-in MATLAB
numerical integration function
tinv.
quad
Solution of E 234
R
See the MATLAB
script S_ESContributionsStudentT.
(5.238)
where B is a N K matrix with entries of the order of the unit, F is a K-dimensional vector, U is a
N -dimensional vector and:
ln F
ln (U + a)
St (, , ) ,
(5.239)
f
0
0
2 u
,
(5.240)
(5.241)
R
with 1 and arbitrary. Assume N 30, K 10 and 10. Write a MATLAB
script in which
you:
Generate randomly the parameters in and the allocation ;
Generate J 10,000 Monte Carlo scenarios from the market distribution (5.238);
Set c 0.95 and compute ESc () as the sample counterpart of (5.208, AM 2005);
Compute the K marginal contributions to ESc () from each factor and the one aggregate contribution from all the residuals, as the sample counterpart of (5.238, AM 2005) adapted to the factors;
Display the result in a subplot using the built-in plotting function bar.
Solution of E 235
R
See the MATLAB
script S_ESContributionsFactors.
(5.242)
189
P1 Ga(1 , 12 )
P2
(5.243)
LogN(2 , 22 ) .
(5.244)
Assume that the copula is lognormal, i.e. the grades (U1 , U2 ) of (P1 , P2 ) have the following joint distribution (not a typo, why?):
1 (U1 )
1 (U2 )
N
1
,
,
1
0
0
(5.245)
where denotes the cdf of the standard normal distribution. Assume that the current prices are p1
E {P1 } and p2 E {P2 }.
R
Write a MATLAB
script in which you:
Fix arbitrary values for the parameters (1 , 12 , 2 , 22 ) and compute the current prices;
Consider the following allocation 1 1, 2 2 and simulate the distribution of the objective of
an investor who is interested in final wealth;
Consider the previous allocation. Simulate the distribution of the objective of an investor who is
interested in the P&L;
Consider the previous allocation and the following benchmark 1 2, 2 1 and simulate the
distribution of the objective of an investor who is interested in beating the benchmark.
Solution of E 236
R
See the MATLAB
script S_InvestorsObjective.
,
(5.246)
where b > 0. Plot the utility function for different values of and 0 . Compute the Arrow-Pratt risk
aversion (5.121, AM 2005) implied by the utility (5.246).
Solution of E 237
Deriving (B.75, AM 2005), we obtain:
2
d
2
erf (x) = ex .
dx
(5.247)
Hence:
u0 () b
2
e
2
2
(5.248)
and:
u00 ()
2b 1
e
.
(5.249)
Therefore:
A ()
u00 ()
0
=
.
0
u ()
For the interpretation of this result see (5.124, AM 2005) and comments thereafter.
(5.250)
Chapter 6
Optimizing allocations
E 238 Feasible set of the mean-variance efficient frontier (www.6.1) *
Show that in the plane (6.29, AM 2005), the budget constraint (6.24, AM 2005) is satisfied by all the
points in the region to the right of the hyperbola (6.30, AM 2005)-(6.31, AM 2005)
Solution of E 238
See E 242.
1 0
,
2
(6.1)
1 0
(0 pT wT ) .
2
(6.2)
We neglect in the Lagrangian the second constraint (6.26, AM 2005), which from (6.22, AM 2005) and
(6.24, AM 2005) reads:
(6.3)
We verify ex-post that the constraint is automatically satisfied. From the first-order conditions on the
Lagrangian we obtain:
= 1 + 1 pT ,
(6.4)
where is a suitable scalar. To compute we notice that the maximization of (6.2) is the same as (6.70,
AM 2005), where the objective is given by M PT + and the constraint is (6.94, AM 2005), with
191
d pT and c wT . Thus the solution must be of the form (6.97, AM 2005). Recalling the definitions
(6.99, AM 2005) of M V and (6.100, AM 2005) of SR respectively, and defining the scalar:
e E {M V }
,
E {SR } E {M V }
(6.5)
wT 1
wT 1 pT
+
(1
)
.
p0T 1
p0T 1 pT
(6.6)
0 1
p ,
wT T
(6.7)
and thus:
= (1 )
wT p0T 1
wT
=
.
1
p0T pT
p0T 1 pT
(6.8)
Substituting this expression back into (6.4) we obtain the optimal allocation:
= 1 +
wT p0T 1 1
pT .
p0T 1 pT
(6.9)
Note. Notice that the optimal allocation (6.9) lies on the efficient frontier. When the risk propensity ζ is zero we obtain the minimum variance portfolio α_MV. As the risk propensity ζ tends to infinity, the solution departs from the "belly" of the hyperbola along the upper branch, passing through the maximum Sharpe ratio portfolio α_SR. The VaR constraint (6.3) is satisfied automatically if the required confidence c is not too high and the margin is not too small. Indeed, consider the following equation:

(6.10)

This is a straight line through the origin in Figure 6.1 of Meucci (2005). If erf⁻¹(c) is not larger than the maximum Sharpe ratio, i.e. the slope of the line through the origin and the portfolio α_SR, and if the margin is large enough, then all the portfolios on the frontier above the straight line satisfy the VaR constraint. These portfolios correspond to the choice (6.7, AM 2005) for suitable choices of the extremes.
Solution of E 240
We replace (6.9) in (6.1). Writing the optimal allocation as α⋆ = ζΣ⁻¹μ + λΣ⁻¹p_T, with λ as in (6.8), we obtain:

CE(α⋆) = α⋆'μ − (1/(2ζ)) α⋆'Σα⋆
= ζ μ'Σ⁻¹μ + λ p_T'Σ⁻¹μ − (ζ/2) μ'Σ⁻¹μ − λ μ'Σ⁻¹p_T − (λ²/(2ζ)) p_T'Σ⁻¹p_T .  (6.11)

The terms linear in λ cancel. Therefore:

CE(α⋆) = (ζ/2) μ'Σ⁻¹μ − (1/(2ζ)) (w_T − ζ p_T'Σ⁻¹μ)² / (p_T'Σ⁻¹p_T) .  (6.12)
(6.13)
2
1/2 0
1/2 0
1
z argmin
(0) E(0) z + (0) E(0) u(0)
+ v(0) u(0) S(0) u(0)
z
(
Az = a
2
s.t.
1/2 0
1/2
(j) E(j) z + (j) E0(j) u(j)
u(j) S1
(j) u(j) v(j) ,
(6.14)
(6.15)
s.t.
Az = a
1/2 0
1/2
(0) E(0) z + (0) E0(0) u(0)
t
q
1/2 0
1/2
(1) E(1) z + (1) E0(1) u(1)
u(1) S1
(1) u(1) v(1)
..
.
q
1/2 0
1/2
(J) E(J) z + (J) E0(J) u(J)
u(J) S1
(J) u(J) v(J) .
(6.16)
E 242 Feasible set of the mean-variance problem in the space of moments (www.6.3) *
Prove that the feasible set of the mean-variance problem (6.96, AM 2005) in the coordinates (6.101, AM
2005) is the region to the right of the parabola (6.102, AM 2005)-(6.103, AM 2005).
Solution of E 242
Consider the general case where E{M} and d are not collinear. First we prove that any level of expected value e ∈ R is attainable. This is true if for any value e there exists an α such that:

e = E{Ψ_α} = α'E{M} ,  c = α'd .  (6.17)

In turn, this is true if we can solve the following system for an arbitrary value of e:

( E{M_j}  E{M_k} ; d_j  d_k ) ( α_j ; α_k ) = ( e − Σ_{n≠j,k} α_n E{M_n} ; c − Σ_{n≠j,k} α_n d_n ) .  (6.18)
Since E{M} and d are not collinear we can always find two indices (j, k) such that the matrix on the left-hand side of (6.18) is invertible. Therefore, we can fix arbitrarily e and all the entries of α that appear on the right-hand side of (6.18), and solve for the remaining two entries on the left-hand side of (6.18).
Now we prove that if a point (v, e) is feasible, so is any point (v + δ, e), where δ is any positive number. Indeed, if we let any of the entries on the right-hand side of (6.18) grow to infinity and solve for the remaining two entries on the left-hand side of (6.18), the variance of the ensuing allocations still satisfies the constraints and tends to infinity. By continuity, all the points between (v, e) and (+∞, e) are covered. Therefore the feasible set can only be bounded on the left of the (v, e) plane. To find out whether that boundary exists, we fix a generic expected value e and compute the minimum variance achievable subject to the affine constraint. Therefore, we minimize the following unconstrained Lagrangian:
L(α, λ, γ) ≡ Var{Ψ_α} − λ (α'd − c) − γ (E{Ψ_α} − e)
= α'Cov{M}α − λ (α'd − c) − γ (α'E{M} − e) .  (6.19)

The first-order conditions read:

0 = ∂L/∂α = 2 Cov{M}α − λd − γE{M} ,  (6.20)

0 = ∂L/∂λ = α'd − c ,  0 = ∂L/∂γ = α'E{M} − e ,  (6.21)

and thus:

α = (λ/2) Cov{M}⁻¹d + (γ/2) Cov{M}⁻¹E{M} .  (6.22)

The Lagrange multipliers can be obtained as follows. First, we define four scalar constants:

A ≡ d'Cov{M}⁻¹d ,  (6.23)

B ≡ d'Cov{M}⁻¹E{M} ,  (6.24)

C ≡ E{M}'Cov{M}⁻¹E{M} ,  (6.25)

D ≡ AC − B² .  (6.26)
Left-multiplying the solution (6.22) by d' and using the first constraint in (6.21) we obtain:

c = d'α = (λ/2) d'Cov{M}⁻¹d + (γ/2) d'Cov{M}⁻¹E{M} = (λ/2) A + (γ/2) B .  (6.27)

Similarly, left-multiplying the solution (6.22) by E{M}' and using the second constraint in (6.21) we obtain:

e = E{M}'α = (λ/2) E{M}'Cov{M}⁻¹d + (γ/2) E{M}'Cov{M}⁻¹E{M} = (λ/2) B + (γ/2) C .  (6.28)

Solving (6.27)-(6.28) for the multipliers yields:

λ = (2cC − 2eB)/D ,  (6.29)

γ = (2eA − 2cB)/D .  (6.30)
This shows that the boundary v(e) ≡ Var{Ψ_α} exists. Collecting the terms in e we obtain its equation:

v = (A/D) e² − (2cB/D) e + (c²C/D) ,  (6.31)

which shows that the feasible set is bounded on the left by a parabola. In the space of the coordinates (d, e) ≡ (Sd{Ψ_α}, E{Ψ_α}) the parabola (6.31) becomes a hyperbola:

d² = (A/D) e² − (2cB/D) e + (c²C/D) .  (6.32)
The allocations that give rise to the boundary parabola (6.31) are obtained from (6.22) by substituting the Lagrange multipliers (6.29)-(6.30):

α = ((cC − eB)/D) Cov{M}⁻¹d + ((eA − cB)/D) Cov{M}⁻¹E{M}  (6.33)
= ((cC − eB)A/D) Cov{M}⁻¹d/(d'Cov{M}⁻¹d) + ((eA − cB)B/D) Cov{M}⁻¹E{M}/(d'Cov{M}⁻¹E{M}) .  (6.34)

Define the scalar:

η(α) ≡ (A E{Ψ_α} − cB) B / (cD) ,  (6.35)

and the two portfolios:

α_MV ≡ c Cov{M}⁻¹d / (d'Cov{M}⁻¹d) ,  (6.36)

α_SR ≡ c Cov{M}⁻¹E{M} / (d'Cov{M}⁻¹E{M}) .  (6.37)

Since the two coefficients in (6.34) sum to c(CA − B²)/(cD) = 1, we can write α = α_MV + η(α)(α_SR − α_MV). Moreover, E{Ψ_MV} = cB/A and E{Ψ_SR} = cC/B, so that:

E{Ψ_SR} − E{Ψ_MV} = cC/B − cB/A = cD/(AB) ,  (6.38)

and therefore:

η(α) = (E{Ψ_α} − cB/A) / (cD/(AB)) = (E{Ψ_α} − E{Ψ_MV}) / (E{Ψ_SR} − E{Ψ_MV}) ,  (6.39)

which shows that the upper (lower) branch of the boundary parabola is spanned by the positive (negative) values of η.
Note. To consider the case c = 0 we take the limit c → 0 in the above results. The boundary (6.31) of the feasible set in the coordinates (v, e) ≡ (Var{Ψ_α}, E{Ψ_α}) is still a parabola:

v = (A/D) e² ,  (6.40)

whereas in the space of coordinates (d, e) ≡ (Sd{Ψ_α}, E{Ψ_α}) the boundary degenerates from the hyperbola (6.32) into two straight lines:

d(e) = ± √(A/D) e .  (6.41)

As for the allocations that generate this boundary, taking the limit c → 0 in (6.34) and recalling the definitions (6.35), (6.36) and (6.37), we obtain:

α = lim_{c→0} [ α_MV + η(α)(α_SR − α_MV) ]
= lim_{c→0} [ η(α)(α_SR − α_MV) ]
= (E{Ψ_α}/D) Cov{M}⁻¹ (A E{M} − B d)
≡ η̃(α) Cov{M}⁻¹ (A E{M} − B d) ,  (6.42)

where:

η̃(α) ≡ E{Ψ_α}/D .  (6.43)

The upper (lower) branch of the boundary parabola is spanned by the positive (negative) values of η̃.
> 0.
(6.44)
v_MV ≡ Var{Ψ_MV} = c²/A ,  e_MV ≡ E{Ψ_MV} = cB/A ,  (6.45)

v_SR ≡ Var{Ψ_SR} = c²C/B² ,  e_SR ≡ E{Ψ_SR} = cC/B .  (6.46)
On the other hand, the highest Sharpe ratio is the steepness of the straight line tangent to the hyperbola (6.32), which we obtain by maximizing its analytical expression as a function of the expected value:

SR(e) ≡ e/d(e) = e / √( (A/D)e² − (2cB/D)e + (c²C/D) ) .  (6.47)

The first-order conditions with respect to e show that the maximum of the Sharpe ratio is reached at (6.46).
Cov {M}
(E {M} d)
1
d0 Cov {M}
sign(B) > 0 .
E {M}
(6.48)
E 248 Effect of correlation on the mean-variance efficient frontier: total correlation case (www.6.4)
Show that, in the case of a bivariate market with total correlation, the mean-variance efficient frontier degenerates into a straight line that joins the coordinates of the two assets in the plane (6.114, AM 2005), i.e. show (6.116, AM 2005).
Hint. In the case of N = 2 assets, the (N − 1)-dimensional affine constraint (6.94, AM 2005) determines a line:

α₁ = c/b₁ − (b₂/b₁) α₂ .  (6.49)

Defining:

ẽ_c ≡ c/b₁ ,  (6.50)

ẽ_b ≡ b₂/b₁ ,  (6.51)

the expected value and the standard deviation of a generic allocation on this line read:

e = (ẽ_c − ẽ_b α₂) E{M₁} + α₂ E{M₂} ,  (6.52)

d² = (ẽ_c − ẽ_b α₂)² Sd²{M₁} + α₂² Sd²{M₂} + 2ρ (ẽ_c − ẽ_b α₂) α₂ Sd{M₁} Sd{M₂} ,  (6.53)

where ρ ≡ Corr{M₁, M₂}. From (6.52) and (6.53) we derive the coordinates of a full allocation in the first asset, which corresponds to α₂ = 0:

e⁽¹⁾ = ẽ_c E{M₁} ,  d⁽¹⁾ = ẽ_c Sd{M₁} ,  (6.54)

and of a full allocation in the second asset, which corresponds to α₂ = ẽ_c/ẽ_b:

e⁽²⁾ = (ẽ_c/ẽ_b) E{M₂} ,  (6.55)

d⁽²⁾ = (ẽ_c/ẽ_b) Sd{M₂} .  (6.56)

In this notation we can more conveniently re-express the expected value (6.52) of a generic allocation as follows:

e = e⁽¹⁾ + (ẽ_b α₂/ẽ_c) (e⁽²⁾ − e⁽¹⁾) ,  (6.57)

and the variance (6.53) as:

d² = (1 − ẽ_b α₂/ẽ_c)² [d⁽¹⁾]² + (ẽ_b α₂/ẽ_c)² [d⁽²⁾]² + 2ρ (1 − ẽ_b α₂/ẽ_c)(ẽ_b α₂/ẽ_c) d⁽¹⁾d⁽²⁾ .  (6.58)
Solution of E 248
If ρ = 1, (6.58) simplifies to:

d² = [ (1 − ẽ_b α₂/ẽ_c) d⁽¹⁾ + (ẽ_b α₂/ẽ_c) d⁽²⁾ ]² ,  (6.59)

which vanishes for the allocation:

α₂ ≡ ẽ_c d⁽¹⁾ / ( ẽ_b (d⁽¹⁾ − d⁽²⁾) ) .  (6.60)

Otherwise the standard deviation reads:

d = (1 − ẽ_b α₂/ẽ_c) d⁽¹⁾ + (ẽ_b α₂/ẽ_c) d⁽²⁾ .  (6.61)

This expression coupled with (6.57) yields the allocation curve in the case ρ = 1:

e = e⁽¹⁾ + (d − d⁽¹⁾) (e⁽²⁾ − e⁽¹⁾)/(d⁽²⁾ − d⁽¹⁾) ,  (6.62)

which is a straight line through the coordinates of the two assets. When the allocation is such that (6.60) holds, we obtain a zero-variance portfolio whose expected value from (6.57) reads:

e = e⁽¹⁾ − d⁽¹⁾ (e⁽²⁾ − e⁽¹⁾)/(d⁽²⁾ − d⁽¹⁾) < e⁽¹⁾ .  (6.63)

From (6.60) we see that this situation corresponds to a negative position in the second asset.
E 249 Effect of correlation on the mean-variance efficient frontier: total anticorrelation case (www.6.4) (see E 248)
Consider the same setup as in E 248. Show that, in the case of a bivariate market with total anticorrelation, the mean-variance efficient frontier degenerates into a straight line that joins the coordinates of the two assets in the plane (6.114, AM 2005), i.e. show (6.118, AM 2005).
Solution of E 249
If ρ = −1, (6.58) simplifies to:

d² = [ (1 − ẽ_b α₂/ẽ_c) d⁽¹⁾ − (ẽ_b α₂/ẽ_c) d⁽²⁾ ]² ,  (6.64)

which vanishes for the allocation:

α₂ ≡ ẽ_c d⁽¹⁾ / ( ẽ_b (d⁽¹⁾ + d⁽²⁾) ) .  (6.65)

For larger allocations in the second asset the standard deviation reads:

d = −d⁽¹⁾ + (ẽ_b α₂/ẽ_c) (d⁽¹⁾ + d⁽²⁾) .  (6.66)

This expression coupled with (6.57) yields the allocation curve in the case ρ = −1:

e = e⁽¹⁾ + (d + d⁽¹⁾) (e⁽²⁾ − e⁽¹⁾)/(d⁽²⁾ + d⁽¹⁾) .  (6.67)

When the allocation is such that (6.65) holds, we obtain a zero-variance portfolio whose expected value from (6.57) reads:

e = e⁽¹⁾ + d⁽¹⁾ (e⁽²⁾ − e⁽¹⁾)/(d⁽¹⁾ + d⁽²⁾) > e⁽¹⁾ .  (6.68)

Note. From (6.65) the allocation in the second asset is positive, and from (6.49) so is the allocation in the first asset:

α₁ = ẽ_c ( 1 − d⁽¹⁾/(d⁽¹⁾ + d⁽²⁾) ) .  (6.69)
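A quick way to visualize the two degenerate cases of E 248-E 249 is to trace the (d, e) curve of the hint for a few correlations; the moments below are arbitrary placeholders and the budget constraint is b ≡ (1, 1)', c ≡ 1:

% Minimal sketch: risk/reward curve of a two-asset market for several
% correlations (placeholder moments).
e1 = 0.05; e2 = 0.10; s1 = 0.10; s2 = 0.20;
a2 = linspace(-0.5, 1.5, 200);      % allocation in the second asset
figure; hold on
for rho = [-1 0 1]
    a1 = 1 - a2;                    % budget constraint
    e = a1*e1 + a2*e2;              % expected value, as in (6.52)
    d = sqrt(a1.^2*s1^2 + a2.^2*s2^2 + 2*rho*a1.*a2*s1*s2);  % as in (6.53)
    plot(d, e)
end
xlabel('Sd\{\Psi\}'); ylabel('E\{\Psi\}')
legend('\rho = -1', '\rho = 0', '\rho = 1')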
E 250 Total return efficient allocations in the plane of relative coordinates (www.6.5)
Show that the benchmark-relative efficient allocations (6.190, AM 2005) give rise to the portions of the parabola (6.193, AM 2005) above the coordinates of the benchmark. Show that the hyperbolas are the same if the benchmark is mean-variance efficient from a total-return point of view.
Solution of E 250
From (6.193, AM 2005) the generic portfolio (6.175, AM 2005) on the efficient frontier satisfies:

Var{Ψ_α̃} = (A/D) E²{Ψ_α̃} − (2w_T B/D) E{Ψ_α̃} + (w_T² C/D) .  (6.70)

Writing the total objective as the sum of the relative objective and of the benchmark objective, Ψ_α̃ = Ψ_e + Ψ_β, this becomes:

Var{Ψ_e + Ψ_β} = (A/D)(E{Ψ_e} + E{Ψ_β})² − (2w_T B/D)(E{Ψ_e} + E{Ψ_β}) + (w_T² C/D) .  (6.71)

Expanding the products and rearranging, we obtain:

Var{Ψ_e} = −2 Cov{Ψ_e, Ψ_β} − Var{Ψ_β} + (A/D) E²{Ψ_e} + (2A/D) E{Ψ_e} E{Ψ_β} − (2w_T B/D) E{Ψ_e}
+ (A/D) E²{Ψ_β} − (2w_T B/D) E{Ψ_β} + (w_T² C/D) .  (6.72)
From (6.94, AM 2005), (6.99, AM 2005) and (6.100, AM 2005) we obtain for a generic allocation α̃ on the frontier:

η(α̃) ≡ (E{Ψ_α̃} − E{Ψ_MV}) / (E{Ψ_SR} − E{Ψ_MV})
= (E{Ψ_α̃} − w_T B/A) / (w_T C/B − w_T B/A)
= ((E{Ψ_e} + E{Ψ_β}) AB − w_T B²) / (w_T D) ,  (6.73)

where now A ≡ p_T'Cov{P_{T+τ}}⁻¹p_T, B ≡ p_T'Cov{P_{T+τ}}⁻¹E{P_{T+τ}} and C ≡ E{P_{T+τ}}'Cov{P_{T+τ}}⁻¹E{P_{T+τ}}. Using this result, from (6.175, AM 2005) and the budget constraint β'p_T = w_T the covariance reads:

Cov{Ψ_β, Ψ_α̃} = β' Cov{P_{T+τ}} [ α_MV + η(α̃)(α_SR − α_MV) ]
= w_T²/A + η(α̃) ( w_T E{Ψ_β}/B − w_T²/A )
= w_T²/A + ((E{Ψ_e} + E{Ψ_β}) A − w_T B)(E{Ψ_β} A − w_T B)/(AD) .  (6.74)

Substituting (6.74) into (6.72), via Cov{Ψ_e, Ψ_β} = Cov{Ψ_β, Ψ_α̃} − Var{Ψ_β}, we obtain:

Var{Ψ_e} = −2 [ w_T²/A + ((E{Ψ_e} + E{Ψ_β}) A − w_T B)(E{Ψ_β} A − w_T B)/(AD) − Var{Ψ_β} ] − Var{Ψ_β}
+ (A/D)(E{Ψ_e} + E{Ψ_β})² − (2w_T B/D)(E{Ψ_e} + E{Ψ_β}) + (w_T² C/D) .  (6.75)

The first-degree terms in E{Ψ_e} cancel, and the other terms simplify to yield the following expression:

Var{Ψ_e} = (A/D) E²{Ψ_e} + Var{Ψ_β} − (A/D) E²{Ψ_β} + (2w_T B/D) E{Ψ_β} − (w_T² C/D) ,  (6.76)-(6.77)
or:

Var{Ψ_e} = (A/D) E²{Ψ_e} + Δ ,  (6.78)

where:

Δ ≡ Var{Ψ_β} − (A/D) E²{Ψ_β} + (2w_T B/D) E{Ψ_β} − (w_T² C/D) .  (6.79)

Since the benchmark is not necessarily mean-variance efficient, from (6.193, AM 2005) we have that Δ ≥ 0, and the equality holds if and only if the benchmark is mean-variance efficient. In that case:

Var{Ψ_e} = (A/D) E²{Ψ_e} .  (6.80)

(6.81)
From (6.194, AM 2005), (6.99, AM 2005) and (6.100, AM 2005) we obtain for a generic allocation α on the benchmark-relative frontier:

(E{Ψ_α} − E{Ψ_β}) / (E{Ψ_SR} − E{Ψ_MV}) = (E{Ψ_α} − E{Ψ_β}) AB / (w_T D) .  (6.82)

Using this result, from (6.190, AM 2005) and the budget constraint β'p_T = w_T the covariance reads:

Cov{Ψ_α, Ψ_β} = β' Cov{P_{T+τ}} [ β + ((E{Ψ_α} − E{Ψ_β}) AB/(w_T D)) (α_SR − α_MV) ]
= Var{Ψ_β} + (E{Ψ_α} − E{Ψ_β})(E{Ψ_β} A − w_T B)/D .  (6.83)

Proceeding as above, the benchmark-relative efficient allocations satisfy, in absolute coordinates:

Var{Ψ_α} = (A/D) E²{Ψ_α} − (2B w_T/D) E{Ψ_α} + (w_T² C/D) + Δ .  (6.84)
1 + L_α = 1 + w'L ,  (6.85)

where we have used the budget constraint α'p_T = w_T and the following identity:

Σ_{n=1}^N w_n = Σ_{n=1}^N α_n p_T⁽ⁿ⁾ / (α'p_T) = 1 .  (6.86)
(6.87)
(6.88)
α(v) ≡ argmax_{α∈C, w(α)'Cov{L}w(α)=v} { w(α)'E{L} } ,  (6.89)

where w as a function of α is obtained by inverting (6.86, AM 2005). On the other hand, it is easier to convert the constraints C that hold for α into constraints that hold for w (for ease of exposition we keep denoting them by C) and then maximize (6.89) with respect to w:

w(v) ≡ argmax_{w∈C, w'Cov{L}w=v} { w'E{L} } .  (6.90)

The original mean-variance curve is simply α(v) ≡ α(w(v)), obtained from (6.86, AM 2005).  (6.91)
Write a MATLAB script in which you estimate the matrix Σ and the vector μ from the time series of weekly prices in the attached database DB_StockSeries. To do this, shrink the sample mean as in (4.138, AM 2005), where the target is the null vector and the shrinkage factor is set as 0.1. Similarly, shrink the sample covariance as in (4.160, AM 2005) to a suitable multiple of the identity, with shrinkage factor 0.1.

Solution of E 254
See the MATLAB script S_MeanVarianceOptimization.
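A minimal sketch of the two shrinkage steps; X below is assumed to be the T × N panel of weekly compounded returns built from the price database:

% Minimal sketch of the shrinkage estimators of E 254.
% X is the T x N panel of weekly compounded returns (an assumed input).
mu_hat = mean(X)';                    % sample mean
S_hat  = cov(X, 1);                   % sample covariance (normalized by T)
g = 0.1;                              % shrinkage factor
mu_shr = (1 - g)*mu_hat;              % shrink the mean toward the null vector
target = mean(eig(S_hat))*eye(size(S_hat, 1));   % multiple of the identity
S_shr  = (1 - g)*S_hat + g*target;    % shrink the covariance toward the target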
Consider a market of stocks and zero-coupon bonds, and assume that the weekly changes in yield to maturity are the invariants for the bonds, with distribution:

ΔY⁽τ⁾ ∼ N( 0, ((20 + 1.25 τ)/10,000)² ) ,  (6.92)

where τ denotes the generic time to maturity (measuring time in years). Assume that the bonds and the stock market are independent. Assume that the current stock prices are the last set of prices in the time series. Restrict your attention to bonds with times to maturity 4, 5, 10, 52 and 520 weeks, and assume that the current yield curve, as defined in (3.30, AM 2005) in Meucci (2005), is flat at 4%. Write a MATLAB script in which you:
- Produce joint simulations of the four stock and five bond prices at the investment horizon of four weeks;
- Assume that the investor considers as his market one single bond, with time to maturity five weeks, and all the stocks;
- Determine numerically the mean-variance inputs, namely expected prices and covariance of prices (not returns);
- Determine analytically the mean-variance inputs, namely expected prices and covariance of prices (not returns), and compare with their numerical counterparts;
- Assume that the investor's objective is final wealth and that his budget is w_T ≡ 100. Assume that the investor cannot short-sell his securities, i.e. the allocation vector cannot include negative entries. Compute the mean-variance efficient frontier as represented by a grid of 40 portfolios whose expected values are equally spaced between the expected value of the minimum variance portfolio and the largest expected value among the portfolios composed of only one security;
- Assume that the investor's satisfaction is the certainty equivalent associated with an exponential utility function:

u(ψ) ≡ −e^{−ψ/ζ} ,  (6.93)

where ζ ≡ 10, and compute the optimal allocation according to the two-step mean-variance framework.
Hint. Do not use portfolio weights and returns. Instead, use numbers of securities and prices. Given the no-short-sale constraint, the minimum variance portfolio cannot be computed analytically as in (6.99, AM 2005): use the MATLAB function quadprog to compute it numerically. Given the no-short-sale constraint, the frontier cannot be computed analytically as in (6.97, AM 2005)-(6.100, AM 2005): use quadprog to compute it numerically.
Solution of E 255
See the MATLAB script S_MeanVarianceOptimization.
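A minimal sketch of the constrained frontier computation with quadprog; mu, S and pT below stand for the expected prices, their covariance and the current prices, obtained as in the previous steps:

% Minimal sketch of the no-short-sale frontier with quadprog.
% mu, S, pT are assumed inputs (expected prices, covariance, current prices).
N = length(mu); wT = 100;
Aeq = pT'; beq = wT;                    % budget constraint alpha'*pT = wT
lb = zeros(N, 1);                       % no short sales
aMV = quadprog(S, zeros(N,1), [], [], Aeq, beq, lb);  % min-variance portfolio
eMin = mu'*aMV;                         % lowest expected value on the grid
eMax = max(mu*wT./pT);                  % best single-security portfolio
targets = linspace(eMin, eMax, 40);
frontier = zeros(N, 40);
for i = 1:40
    % minimize variance subject to budget and target expected value
    frontier(:, i) = quadprog(S, zeros(N,1), -mu', -targets(i), Aeq, beq, lb);
end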
u(ψ) ≡ −e^{−ψ/ζ} ,  (6.94)

where ζ ≡ 10. Compute the optimal allocation according to the two-step mean-variance framework. Repeat the above steps at the investment horizon of four years.

Solution of E 256
See the MATLAB script S_MeanVarianceHorizon.
Determine analytically the mean-variance inputs in terms of weights and returns, namely expected linear returns and covariance of linear returns, and compare with their numerical counterparts.

Solution of E 257
For the benchmark-driven investor, the objective is (6.170, AM 2005), or:

Ψ_α ≡ α'K P_{T+τ} ,  (6.95)

where:

K ≡ I − (p_T β')/(β'p_T) ,  (6.96)

and β is the benchmark. In particular, notice that K is singular: the columns of K span a vector space of dimension N − 1, and any vector orthogonal to all the columns of K is spanned by the benchmark. Therefore Ψ_β ≡ β'K P_{T+τ} ≡ 0, which implies that the benchmark has zero expected outperformance and zero tracking error, see Figure (6.21, AM 2005). For a budget b, the return-based objective is defined as:

Ψ_α/b ≡ α'K P_{T+τ}/(α'p_T) = α'P_{T+τ}/(α'p_T) − (α'p_T)(β'P_{T+τ})/((α'p_T)(β'p_T)) = L_α − L_β ,  (6.97)

where L_α and L_β are the linear returns of the portfolio and the benchmark, respectively. Given that the constraints are linear, we resort to the dual formulation of the return-based mean-variance problem, which reads:

α(e) ≡ argmin_{α'p_T = b, E{L_α − L_β} = e} { Var{L_α − L_β} } .  (6.98)

Using (6.85) we obtain, in terms of the portfolio weights w, the benchmark weights w_b, and the securities returns L:

w(e) ≡ argmin_{w'1 = 1, w'E{L} = e} { (1/2) w'Cov{L}w − w_b'Cov{L}w } .  (6.99)

This is the input for the MATLAB script S_MeanVarianceBenchmark.
Solution of E 258
See the MATLAB script S_MeanVarianceBenchmark.
Consider a market consisting of a risk-free asset whose value D_t grows deterministically at the constant rate r:

dD_t/D_t = r dt ,  (6.100)

and a risky asset whose value P_t follows a geometric Brownian motion with drift μ and volatility σ:

ln P_{t+Δt} = ln P_t + (μ − σ²/2) Δt + σ √Δt Z_t ,  (6.101)

where Z_t ∼ N(0, 1) are independent across non-overlapping time steps. Assume the current time is t ≡ 0 and the investment horizon is t ≡ τ. Assume there is an initial budget:

S₀  given .  (6.102)

Consider a strategy that rebalances between the two assets throughout the investment period [0, τ]:

(α_t, β_t)_{t∈[0,τ]} ,  (6.103)

where α_t denotes the number of units of the risky asset and β_t denotes the number of units of the risk-free asset. The value of the strategy is:

S_t = α_t P_t + β_t D_t ,  (6.104)

and the strategy must be self-financing:  (6.105)

α_t P_{t+Δt} + β_t D_{t+Δt} = α_{t+Δt} P_{t+Δt} + β_{t+Δt} D_{t+Δt} .  (6.106)

Define the portfolio weights:

w_t ≡ α_t P_t / S_t ,  u_t ≡ β_t D_t / S_t .  (6.107)

Prove that the self-financing constraint (6.106) is equivalent to the weight of the risk-free asset being equal to:

u_t ≡ 1 − w_t ,  (6.108)

and that therefore the whole strategy is fully determined by the free evolution of the weight w_t.
Note. See Meucci (2010d).
Solution of E 260
We denote by (w_t, u_t) the pre-trade weights and by (w̃_t, ũ_t) the post-trade weights. Dividing both sides of (6.106) by the value of the strategy we obtain:

w_{t+Δt} + u_{t+Δt} = (α_t P_{t+Δt} + β_t D_{t+Δt})/S_{t+Δt} = (α_t P_{t+Δt} + β_t D_{t+Δt})/(α_t P_{t+Δt} + β_t D_{t+Δt}) = 1 ,  (6.109)

and similarly:

w̃_{t+Δt} + ũ_{t+Δt} = (α_{t+Δt} P_{t+Δt} + β_{t+Δt} D_{t+Δt})/S_{t+Δt} = 1 ,  (6.110)

so that the weight of the risk-free asset is always the complement to one of the weight of the risky asset. In particular, consider the buy & hold strategy, which never trades after the initial allocation:

α_t ≡ α₀ ,  β_t ≡ β₀ ,  for all t .  (6.111)
Write a MATLAB script in which you:
- Generate the deterministic exponential growth dynamics (6.100) at equally spaced time intervals Δt;
- Generate a large number of Monte Carlo paths from the geometric Brownian motion (6.101) at the equally spaced time intervals {0, Δt, 2Δt, ..., τ};
- Plot one path of the value of the risky asset {P_t}_{t=0,Δt,...,τ}, and overlay the respective path {S_t}_{t=0,Δt,...,τ} of the value of the buy & hold strategy (6.111);
- Plot the evolution of the portfolio weight (6.107) of the risky asset {w_t}_{t=0,Δt,...,τ} on that path;
- Scatter-plot the final payoff of the buy & hold strategy (6.111) over the payoff of the risky asset, and verify that the profile is linear.
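A minimal sketch of one buy & hold path; all the parameter values below are placeholders:

% Minimal sketch of one buy & hold path (placeholder parameters).
r = 0.03; mu = 0.08; sig = 0.2; tau = 1; dt = 1/252; t = 0:dt:tau;
D = exp(r*t); P = zeros(size(t)); P(1) = 1; S0 = 1;
for k = 2:length(t)                   % geometric Brownian motion (6.101)
    P(k) = P(k-1)*exp((mu - sig^2/2)*dt + sig*sqrt(dt)*randn);
end
w0 = 0.5;                             % initial weight of the risky asset
a = w0*S0/P(1); b = (1 - w0)*S0/D(1); % units held, never rebalanced (6.111)
S = a*P + b*D;                        % value of the strategy
w = a*P./S;                           % weight (6.107), drifts with P
plot(t, [P; S; w])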
(6.112)

Consider the optimal-strategy problem:

w(·) ≡ argmax_{S₀, w(·) ∈ C} E{ u(S_τ) } ,  (6.113)

for the power utility:

u(s) = s^γ/γ ,  (6.114)

with γ < 1.

Hint. The strategy evolves as:

dS_t/S_t = (r + w_t(μ − r)) dt + w_t σ dB_t .  (6.115)

Therefore the horizon value of the strategy reads S_τ = S₀ e^{Y_{w(·)}} ,  (6.116)

where Y_{w(·)} is a normal random variable:

Y_{w(·)} ∼ N( m_{w(·)}, s²_{w(·)} ) ,  (6.117)

with expected value:

m_{w(·)} ≡ ∫₀^τ [ r + (μ − r) w_t − (σ²/2) w_t² ] dt  (6.118)

and variance:

s²_{w(·)} = ∫₀^τ σ² w_t² dt .  (6.119)

Then:

E{ u(S_τ) } = (S₀^γ/γ) E{ e^{γ Y_{w(·)}} } .  (6.120)

Since Y_{w(·)} is normally distributed with expected value (6.118) and variance (6.119), it follows that e^Y is lognormally distributed and thus from (1.98, AM 2005) we obtain:

E{ e^{γ Y_{w(·)}} } = e^{γ m_{w(·)} + (γ²/2) s²_{w(·)}} .  (6.121)

Therefore, substituting (6.118) and (6.119), the optimal strategy (6.113) solves:

w(·) ≡ argmax_{w(·)} ∫₀^τ [ w_t(μ − r) − (w_t² σ²/2)(1 − γ) ] dt .  (6.122)

The solution to this problem is the value that maximizes the integrand at each time. Therefore, the solution is the constant:

w(·) ≡ w̄ ≡ (1/(1 − γ)) (μ − r)/σ² .  (6.123)
Solution of E 263
See the MATLAB script S_UtlityMax.
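A minimal sketch that simulates the strategy with the constant weight (6.123); all the parameter values below are placeholders:

% Minimal sketch: constant weight (6.123) and horizon expected utility.
r = 0.03; mu = 0.08; sig = 0.2; gam = -1; tau = 1; dt = 1/252; J = 10000;
wbar = (mu - r)/(sig^2*(1 - gam));     % optimal constant weight (6.123)
nT = round(tau/dt); S = ones(J, 1);
for k = 1:nT                           % self-financing dynamics (6.115)
    dB = sqrt(dt)*randn(J, 1);
    S = S.*(1 + (r + wbar*(mu - r))*dt + wbar*sig*dB);
end
EU = mean(S.^gam/gam);                 % sample expected utility at the horizon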
(6.124)

Consider a guaranteed floor F_t for all t ∈ [0, τ] .  (6.125)

At all times t, for any level of the strategy S_t there is an excess cushion:

C_t ≡ max(0, S_t − F_t) .  (6.126)

According to the CPPI, a constant multiple m of the cushion is invested in the risky asset, therefore obtaining the dynamic strategy's weight for the risky asset:

w_t ≡ m C_t / S_t .  (6.127)

Write a MATLAB script in which you:
- Generate the deterministic exponential growth dynamics (6.100) at equally spaced time intervals Δt;
- Generate a large number of Monte Carlo paths from the geometric Brownian motion (6.101) at the equally spaced time intervals {0, Δt, 2Δt, ..., τ};
- Plot one path of the value of the risky asset {P_t}_{t=0,Δt,...,τ}, and overlay the respective path {S_t}_{t=0,Δt,...,τ} of the value of the CPPI strategy (6.127);
- Plot the evolution of the portfolio weight (6.107) of the risky asset {w_t}_{t=0,Δt,...,τ} on that path;
- Scatter-plot the final payoff of the CPPI strategy (6.127) over the payoff of the risky asset, and verify that the profile is convex.
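A minimal sketch of one CPPI path; the parameter values, the floor level and its accrual at the risk-free rate are placeholder assumptions of this sketch:

% Minimal sketch of one CPPI path (placeholder parameters).
r = 0.03; mu = 0.08; sig = 0.2; tau = 1; dt = 1/252; m = 4;
nT = round(tau/dt); S = 1;
F = 0.9*exp(-r*tau);                   % floor: 0.9 at the horizon, discounted
Spath = zeros(1, nT+1); Spath(1) = S;
for k = 1:nT
    C = max(0, S - F);                 % cushion (6.126)
    w = m*C/S;                         % CPPI weight (6.127)
    dP = (mu - sig^2/2)*dt + sig*sqrt(dt)*randn;        % risky log-return
    S = S*(1 + w*(exp(dP) - 1) + (1 - w)*(exp(r*dt) - 1));  % self-financing
    F = F*exp(r*dt);                   % floor accrues at the risk-free rate
    Spath(k+1) = S;
end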
(6.128)

and assume that you can compute the solution G(t, p) of the following partial differential equation:

∂G/∂t + r p ∂G/∂p + (1/2) σ² p² ∂²G/∂p² − r G = 0 .  (6.129)

(6.130)

(6.131)

Consider the strategy whose weight for the risky asset is:

w_t ≡ P_t ∂G(t, P_t)/∂p / S_t .  (6.132)

(6.133)

Solution of E 265
We want to prove that the following identity holds at all times, and in particular at t ≡ τ:

S_t ≡ G(t, P_t) ,  t ∈ [0, τ] .  (6.134)

Indeed, using Ito's rule on G(t, P_t), where P_t follows the geometric Brownian motion (6.101), yields:

dG_t = ∂G/∂t dt + ∂G/∂P dP_t + (1/2) ∂²G/∂P² (dP_t)²
= ∂G/∂t dt + ∂G/∂P (μP_t dt + σP_t dB_t) + (1/2) ∂²G/∂P² σ²P_t² dt
= [ ∂G/∂t + μP_t ∂G/∂P + (1/2) σ²P_t² ∂²G/∂P² ] dt + σP_t ∂G/∂P dB_t .  (6.135)

On the other hand, the self-financing dynamics of the strategy read:

dS_t = S_t r dt + S_t w_t ( dP_t/P_t − r dt )
= S_t r dt + ∂G/∂P ( μP_t dt + σP_t dB_t − r P_t dt ) ,  (6.136)

where we used (6.132). Therefore:

d(G_t − S_t) = [ ∂G/∂t + μP_t ∂G/∂P + (1/2) σ²P_t² ∂²G/∂P² ] dt + σP_t ∂G/∂P dB_t
− S_t r dt − ∂G/∂P ( μP_t dt + σP_t dB_t − r P_t dt )
= [ ∂G/∂t + r P_t ∂G/∂P + (1/2) σ²P_t² ∂²G/∂P² − S_t r ] dt .  (6.137)

Therefore, using the partial differential equation (6.129):

d(G_t − S_t) = r (G_t − S_t) dt ,  (6.138)

which implies:

(G_τ − S_τ) = (G₀ − S₀) e^{rτ} .  (6.139)

Which means: if the strategy starts with the value S₀ ≡ G(0, P₀), then:

S_t = G(t, P_t)  at all times t .  (6.140)

In this context the partial differential equation (6.129) was solved in Black and Scholes (1973):

G(t, p) = p Φ(d₁) − e^{−r(τ−t)} K Φ(d₂) ,  (6.141)

where Φ denotes the standard normal cdf and:

d₁(t, p) ≡ ( ln(p/K) + (r + σ²/2)(τ − t) ) / ( σ√(τ−t) ) ,  d₂(t, p) ≡ d₁(t, p) − σ√(τ−t) .  (6.142)

From the explicit analytical expression (6.141) we can derive the expression for the weight (6.132) of the risky asset:

w_t = (P_t/S_t) Φ(d₁(t, P_t)) .  (6.143)
Write a MATLAB script in which you:
- Generate the deterministic exponential growth dynamics (6.100) at equally spaced time intervals Δt;
- Generate a large number of Monte Carlo paths from the geometric Brownian motion (6.101) at the equally spaced time intervals {0, Δt, 2Δt, ..., τ};
- Plot one path of the value of the risky asset {P_t}_{t=0,Δt,...,τ}, and overlay the respective path {S_t}_{t=0,Δt,...,τ} of the value of the option replication strategy (6.143);
- Plot the evolution of the portfolio weight of the risky asset {w_t}_{t=0,Δt,...,τ} on that path;
- Scatter-plot the final payoff of the option replication strategy (6.143) over the payoff of the risky asset, and verify that it matches the option payoff.
Solution of E 266
See the MATLAB script S_OptionReplication.
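A minimal sketch of the replication strategy (6.143) on one path; the parameter values and the strike K are placeholders:

% Minimal sketch of delta replication on one path (placeholder parameters).
r = 0.03; mu = 0.08; sig = 0.2; tau = 1; dt = 1/252; K = 1; P = 1;
d1 = @(t, p) (log(p/K) + (r + sig^2/2)*(tau - t))/(sig*sqrt(tau - t));
S = P*normcdf(d1(0, P)) - exp(-r*tau)*K*normcdf(d1(0, P) - sig*sqrt(tau)); % G(0,P0)
for k = 1:round(tau/dt)
    t = (k - 1)*dt;
    w = P*normcdf(d1(t, P))/S;         % replication weight (6.143)
    R = exp((mu - sig^2/2)*dt + sig*sqrt(dt)*randn) - 1;  % risky-asset return
    S = S*(1 + w*R + (1 - w)*(exp(r*dt) - 1));            % self-financing
    P = P*(1 + R);
end
payoff = [S, max(P - K, 0)]            % S should match the call payoff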
Chapter 7

(7.1)

where q_N^p is the square root of the quantile of the chi-square distribution with N degrees of freedom relative to a confidence level p:

q_N^p ≡ √( Q_{χ²_N}(p) ) .  (7.2)
Solution of E 267
Consider the spectral decomposition (3.149, AM 2005) of the covariance matrix:

Σ ≡ E Λ E' ,  (7.3)

where Λ is the diagonal matrix of the respective eigenvalues sorted in decreasing order:

Λ ≡ diag(λ₁, ..., λ_N) ,  (7.4)

and the matrix E is the juxtaposition of the eigenvectors, which represents a rotation:

E ≡ (e⁽¹⁾, ..., e⁽ᴺ⁾) .  (7.5)

Define the random variable:

Y ≡ Λ^{−1/2} E' (X − μ) .  (7.6)

Since X ∼ N(μ, Σ), the variable Y is standard normal, Y ∼ N(0, I) ,  (7.7)

and thus:

Σ_{n=1}^N Y_n² ∼ χ²_N .  (7.8)

On the other hand:

Σ_{n=1}^N Y_n² = Y'Y = [Λ^{−1/2}E'(X − μ)]' [Λ^{−1/2}E'(X − μ)] = (X − μ)' E Λ⁻¹ E' (X − μ) = (X − μ)' Σ⁻¹ (X − μ) .  (7.9)

Therefore for the Mahalanobis distance (2.61, AM 2005) of the variable X from the point μ through the metric Σ we obtain:

Ma²(X, μ, Σ) ≡ (X − μ)' Σ⁻¹ (X − μ) ∼ χ²_N .  (7.10)

(7.11)

By applying the quantile function (1.17, AM 2005) to both sides of the above equality we obtain:

p = P{ (X − μ)' Σ⁻¹ (X − μ) ≤ (q_N^p)² } ,  (7.12)

where q_N^p is the square root of the quantile of the chi-square distribution with N degrees of freedom relative to a confidence level p:

q_N^p ≡ √( Q_{χ²_N}(p) ) .  (7.13)

(7.14)

In other words:

P{ X ∈ E^{q_N^p}_{μ,Σ} } = p .  (7.15)
we obtain:
(7.16)
μ|Σ ∼ N( μ₀, (T₀Σ⁻¹)⁻¹ ) ,  (7.17)

and Σ⁻¹ ∼ W( ν₀, (ν₀Σ₀)⁻¹ ). Thus from (2.156, AM 2005) and (2.224, AM 2005) the joint prior pdf of μ and Σ is:

f_pr(μ, Σ) = f_pr(μ|Σ) f_pr(Σ)
∝ |Σ|^{−1/2} e^{−(1/2)(μ−μ₀)'(T₀Σ⁻¹)(μ−μ₀)} |Σ₀|^{ν₀/2} |Σ|^{−(ν₀+N+1)/2} e^{−(1/2) tr(ν₀Σ₀Σ⁻¹)} .  (7.18)

As for the pdf of current information (7.13, AM 2005), from (4.102, AM 2005) the sample mean is normally distributed:

μ̂ ∼ N( μ, (TΣ⁻¹)⁻¹ ) ,  (7.19)

and from (4.103, AM 2005) the distribution of the sample covariance is:

T Σ̂ ∼ W( T − 1, Σ ) ,  (7.20)

and these variables are independent. Therefore from (2.156, AM 2005) and (2.224, AM 2005) the pdf of current information from the time series, f(i_T|μ, Σ), as summarized by i_T ≡ (μ̂, TΣ̂) conditioned on knowledge of the parameters (μ, Σ), reads:

f(i_T|μ, Σ) ∝ |Σ|^{−1/2} e^{−(1/2)(μ̂−μ)'(TΣ⁻¹)(μ̂−μ)} |TΣ̂|^{(T−N−2)/2} |Σ|^{−(T−1)/2} e^{−(1/2) tr(TΣ̂Σ⁻¹)} .  (7.21)

Thus, after trivial regrouping and simplifications, the joint pdf of current information and the parameters reads:

f(i_T, μ, Σ) = f(i_T|μ, Σ) f_pr(μ, Σ)
∝ |TΣ̂|^{(T−N−2)/2} |Σ₀|^{ν₀/2} |Σ|^{−(T+ν₀+N+2)/2} e^{−(1/2){(μ−μ₀)'(T₀Σ⁻¹)(μ−μ₀) + (μ̂−μ)'(TΣ⁻¹)(μ̂−μ)}} e^{−(1/2) tr((TΣ̂ + ν₀Σ₀)Σ⁻¹)} .  (7.22)

After expanding and rearranging, the terms in the curly brackets can be re-written as follows:

{ · } = (μ − μ₁)' (T₁Σ⁻¹) (μ − μ₁) + tr(ΓΣ⁻¹) ,  (7.23)

where:

T₁ ≡ T₀ + T ,  (7.24)

μ₁ ≡ (T₀ μ₀ + T μ̂) / (T₀ + T) ,  (7.25)

Γ ≡ (T T₀ / (T₀ + T)) (μ̂ − μ₀)(μ̂ − μ₀)' .  (7.26)

Therefore, defining:

Σ₁ ≡ (T Σ̂ + ν₀ Σ₀ + Γ) / ν₁ ,  (7.27)

where ν₁ is a number yet to be defined, we can re-write the joint pdf (7.22) as follows:

f(i_T, μ, Σ) ∝ |TΣ̂|^{(T−N−2)/2} |Σ₀|^{ν₀/2} |Σ|^{−(T+ν₀+N+2)/2} e^{−(1/2)(μ−μ₁)'(T₁Σ⁻¹)(μ−μ₁)} e^{−(1/2) tr(ν₁Σ₁Σ⁻¹)} .  (7.28)

At this point we can perform the integration over (μ, Σ) to find the marginal pdf f(i_T):

f(i_T) = ∫∫ f(i_T, μ, Σ) dμ dΣ
∝ ∫ { ∫ |Σ/T₁|^{−1/2} e^{−(1/2)(μ−μ₁)'(T₁Σ⁻¹)(μ−μ₁)} dμ } |TΣ̂|^{(T−N−2)/2} |Σ₀|^{ν₀/2} |Σ|^{−(T+ν₀+N+1)/2} e^{−(1/2) tr(ν₁Σ₁Σ⁻¹)} dΣ
∝ |TΣ̂|^{(T−N−2)/2} |Σ₀|^{ν₀/2} ∫ |Σ|^{−(T+ν₀+N+1)/2} e^{−(1/2) tr(ν₁Σ₁Σ⁻¹)} dΣ ,  (7.29)

where we have used the fact that the term in curly brackets is the integral of a normal pdf (2.156, AM 2005) over the entire space and thus sums to one. Defining now:

ν₁ ≡ T + ν₀ ,  (7.30)

the remaining integrand is proportional to a Wishart pdf (2.224, AM 2005) in Σ⁻¹, with ν₁ degrees of freedom and scale matrix (ν₁Σ₁)⁻¹, over the entire space, and thus:

f(i_T) ∝ |TΣ̂|^{(T−N−2)/2} |Σ₀|^{ν₀/2} |ν₁Σ₁|^{−ν₁/2} .  (7.31)

Finally, we obtain the posterior pdf (7.15, AM 2005) by dividing the joint pdf (7.28) by the marginal pdf (7.31):

f_po(μ, Σ) = f(i_T, μ, Σ) / f(i_T)
∝ |Σ|^{−1/2} e^{−(1/2)(μ−μ₁)'(T₁Σ⁻¹)(μ−μ₁)} |ν₁Σ₁|^{ν₁/2} |Σ|^{−(ν₁+N+1)/2} e^{−(1/2) tr(ν₁Σ₁Σ⁻¹)} .  (7.32)

From (2.156, AM 2005) and (2.224, AM 2005) we see that this means:

μ|Σ ∼ N( μ₁, (T₁Σ⁻¹)⁻¹ ) ,  (7.33)

and:

Σ⁻¹ ∼ W( ν₁, (ν₁Σ₁)⁻¹ ) .  (7.34)

In other words:

μ|Σ ∼ N( μ₁, Σ/T₁ ) ,  (7.35)

and:

Σ⁻¹ ∼ W( ν₁, Σ₁⁻¹/ν₁ ) .  (7.36)

(7.37)

and:

μ|Σ ∼ N( μ₁, (T₁Σ⁻¹)⁻¹ ) .  (7.38)
(0 , vech [] )0 .
(7.39)
From (2.156, AM 2005) and (2.144, AM 2005) the joint NIW (normal-inverse-Wishart) pdf of and
reads:
f () = f (|)f () = 1 ||
1 N
2
e 2 tr(1 1 ) e
T1
2
(1 )0 (1 )
(7.40)
h i0 0
e
e 0 , vech
,
(7.41)
we impose the first-order conditions on the logarithm of the joint pdf (7.40):
ln f 2 +
1 N
1
ln || tr {1 1 + T1 ( 1 )( 1 )0 } .
2
2
(7.42)
d ln f
(7.43)
Therefore:
d ln f tr {G d} + tr {G d} ,
(7.44)
where:
1
(1 N )1 1 1 T1 ( 1 )( 1 )0
2
G T1 ( 1 )0 .
(7.45)
(7.46)
Using (A.120, AM 2005) and the duplication matrix (A.113, AM 2005) to get rid of the redundancies of
d in (7.44) we obtain:
0
0
d ln f = vec [G0 ] DN vech [d] + vec G0 vec [d] .
(7.47)
(7.48)
(7.49)
Applying the first-order conditions to (7.48) and (7.49) we obtain the mode of the location parameter:
e 1 ,
(7.50)
1
1 .
1 N
(7.51)
d(d ln f ) =
(7.52)
The first term can be expressed using (A.107, AM 2005), (A.106, AM 2005) the duplication matrix
(A.113, AM 2005) to get rid of the redundancies of d as follows:
0
tr 1 (d)1 d = vec [d] vec 1 (d)1
= vec [d] (1 1 ) vec [d]
= vec [d] (1 1 ) vec [d]
(7.53)
d(d ln f )|,
e =
e
Therefore from (A.117, AM 2005) and (A.121, AM 2005) and substituting back (7.51) we obtain:
(7.54)
1 N 1
2 ln f
= T1
1
0
,
1
e
e
2 ln f
= 0(N (N +1)/2)2 N 2
0
vech [] ,
e
e
2 ln f
1 12
=
D0 (1 1 )DN .
vech() vech()0 ,
2 1 N N
e
e
(7.55)
(7.56)
(7.57)
(7.58)
where:
1
1
1
T1 1 N
2 1 N
1
S
[D0N (1 1 )DN ] .
1 1
S
(7.59)
(7.60)
(7.61)
f () =
1
+N +1
1
1
1
1 2
|1 1 | 2 ||
e 2 tr(1 1 ) .
(7.62)
To determine the mode of this distribution we impose the first-order conditions on the logarithm of the
joint pdf (7.62).
ln f 2
1 + N + 1
1
ln || tr 1 1 1 .
2
2
Computing the first variation and using (A.124, AM 2005) and (A.126, AM 2005):
(7.63)
1 + N + 1
tr(1 d)
2
1
+ tr 1 1 1 (d)1
2
1
1
1
1
= tr
(1 1 (1 + N + 1) )d
2
(7.64)
1
1 1 1 1 (1 + N + 1)1 .
2
(7.65)
d ln f =
where:
Using (A.120, AM 2005) and the duplication matrix (A.113, AM 2005) to get rid of the redundancies of
d we obtain:
0
(7.66)
(7.67)
1
vech [1 ] .
1 + N + 1
(7.68)
d(d ln f ) =
(7.69)
Using (A.107, AM 2005) and (A.106, AM 2005) and the duplication matrix (A.113, AM 2005) to get rid
of the redundancies of d we can write:
a tr((d)1 (d)1 )
0
= vec [d] vec 1 (d)1
0
(7.70)
(7.71)
d(d ln f ) = 1 b +
1 + N + 1
0
a = vech [d] H vech [d] ,
2
(7.72)
where:
H 1 D0N ((1 1 1 ) 1 )DN +
1 + N + 1 0
DN (1 1 )DN .
2
(7.73)
We are interested in the Hessian evaluated in the mode, where (7.68) holds, i.e. in the point
1
1 .
1 + N + 1
(7.74)
In this point:
0
(7.75)
where:
3
1 + N + 1
1
D0N (1
1 1 )DN
1
2
1 + N + 1 1 + N + 1
1
+
D0N (1
1 1 )DN
2
1
1 (1 + N + 1)3 0
1
=
DN (1
1 1 )DN .
2
12
H|Mod 1
Therefore:
(7.76)
1 (1 + N + 1)3 0
2 ln f []
1
=
DN (1
1 1 )DN .
0
2
12
vech [] vech [] Mod
(7.77)
1
2 ln f []
0
vech [] vech [] Mod
(7.78)
212
1
=
.
(D0 (1 1
1 )DN )
(1 + N + 1)3 N 1
(7.79)
| N(0 , (T0 )1 ) ,
(7.80)
and:
From (2.156, AM 2005) and (2.224, AM 2005) the joint prior pdf of and is:
f (, ) = f (|)f ()
1
= 1 || 2 e 2 (0 ) (T0 )(0 ) |0 |
0
2
||
0 N 1
2
e 2 tr(0 0 ) .
(7.81)
To determine the unconditional pdf of we have to compute the marginal in (1.101, AM 2005). Defining:
2 0 0 + T0 ( 0 )( 0 )0 ,
(7.82)
we obtain:
Z
f ()
f (, )d
Z
1 |0 |
= 2 |0 |
= 2 |0 |
0
2
0
2
0
2
||
0 N
2
|2 |
0 +1
2
|2 |
0 +1
2
e 2 tr(2 ) d
Z
0 +1
0 N
1
3 |2 | 2 || 2 e 2 tr(2 ) d
,
(7.83)
where we have used the fact that the term in curly brackets is the integral of the Wishart pdf (2.224, AM 2005) over the entire space and thus it sums to one. Thus, substituting again (7.82), we obtain that the marginal pdf (7.83) reads:
f () = 2 |0 |
0
2
|0 0 + T0 ( 0 )( 0 )0 |
0 +1
2
(7.84)
02+1
|| I + 1 vv0
0 +1
1
= || 2 I + 1 vv0 2
(7.85)
0
2
| + vv0 |
0 +1
2
= ||
0
2
12
= ||
+1
02
(1 + v0 1 v)
(7.86)
0 0 ,
(7.87)
02+1
1 + 1 ( 0 )0 ( 0 )1 ( 0 )
.
0
T0
(7.88)
By comparison with (2.188, AM 2005) we see that this is a multivariate Student t distribution with the following parameters:

μ ∼ St( ν₀, μ₀, Σ₀/T₀ ) .  (7.89)
(7.90)
B| N B0 , , 1
,
T0 F,0
(7.91)
and:
or, in terms of 1 :
B| N B0 , (T0 )1 , 1
F,0 .
(7.92)
Thus from (2.182, AM 2005) and (2.224, AM 2005) the joint prior pdf of B and is:
fpr (B, ) = fpr (B|)fpr ()
K
0
2
||
0 N 1
2
e 2 tr(0 0 ) .
(7.93)
The current information conditioned on the parameters B and is summarized by the OLS factor loadings and sample covariance:
n
o
b
b .
iT B,
(7.94)
b is distributed as follows:
In (4.129, AM 2005) we show that B
b 1
b
,
B N B, , F
T
(7.95)
b is distributed as follows:
and in (4.130, AM 2005) we show that T
b W(T K, ) .
T
(7.96)
b and T
b are independent. Therefore, from (2.182, AM 2005) and (2.224,
Furthermore, we show that B
AM 2005) we have:
b
b
f (iT |B, ) = f (B|B,
)f (T |B,
)
1
N
T KN
K
0
1
2
b
b F (BB)
b
b
b
} || T K
2
b F 2 e 12 tr{(T )(BB)
= 2 |T | 2
e 2 tr(T ) .
T
(7.97)
Thus, after trivial regrouping and simplifications, the joint pdf of current information and the parameters
reads:
f (iT , B, ) = f (iT |B, )fpr (B, )
1
N2 T KN
0
T +K+0 N 1
N
2
b b
2
= 3
|F,0 | 2 |0 | 2 ||
F
0
e 2 tr((T +0 0 )) .
b
We show below that the terms in curly brackets in (7.98) can be re-written as follows:
(7.98)
{ } = T1 (B B1 )F,1 (B B1 )0 + ,
(7.99)
where:
T1 T0 + T
(7.100)
bF
T0 F,0 + T
T1
b
b F )(T0 F,0 + T
b F )1
B1 (B0 T0 F,0 + BT
F,1
(7.101)
(7.102)
and:
b
bFB
b0
B0 T0 F,0 B00 + BT
b
b F )(T0 F,0 + T
b F )1 (T0 F,0 B0 + T
bFB
b 0) .
(B0 T0 F,0 + BT
0
(7.103)
= B(D + C)B +
B0 DB00
b B
b 0 2BCB
b0
2BDB00 + BC
B1 (D +
C)B01
+ 2B(D +
(7.104)
C)B01
= (B B1 )(D + C)(B B1 )0
b B
b 0 2BDB0 2BCB
b0
+ B0 DB00 + BC
0
B1 (D + C)B01 + 2B(D + C)B01 ,
defining:
b
B1 (B0 D + BC)(D
+ C)1 ,
(7.105)
a = (B B1 )(D + C)(B B1 )0
b B
b 0 (B0 D + BC)(D
b
b 0,
+ B0 DB0 + BC
+ C)1 (B0 D + BC)
(7.106)
(7.107)
(7.108)
we obtain:
f (iT , B, ) = f (iT |B, )fpr (B, )
1
N2 T KN
0
T +K+0 N 1
N
2
b b
2
= 3
|F,0 | 2 |0 | 2 ||
F
0
(7.109)
At this point we can perform the integration over (B, ) to determine the marginal pdf f (iT ):
Z
f (iT ) =
f (iT , B, )dBd
Z Z
K
N
0
1
= 4
5 || 2 |F,1 | 2 e 2 tr{T1 (BB1 )F,1 (BB1 ) } dB
N2
b
F
1
T KN
N
2
b
N
|F,1 | 2 |F,0 | 2
1 N 1
2
0
2
(7.110)
21 tr( 1 1 )
d
|0 | ||
e
Z N T KN 1
N
2
b 2 b
N
= 4
|F,1 | 2 |F,0 | 2
F
0
2
|0 |
||
1 N 1
2
e 2 tr( 1 1 ) d ,
where we used the fact that the expression in curly brackets is the integral of the pdf of a matrix-valued
normal distribution (2.182, AM 2005) over the entire space and thus sums to one. Thus we can write
(7.110) as follows:
Z
f (iT ) = 5
6 |1 |
N2
b
F
N2
b
= 5
F
1
2
||
1 N 1
2
1
e 2 tr(1 1 ) d
1
T KN
0
N
2
b
N
1
|F,1 | 2 |F,0 | 2 |0 | 2 |1 | 2
1
T KN
0
N
2
b
1
N
|F,1 | 2 |F,0 | 2 |0 | 2 |1 | 2 ,
(7.111)
where we used the fact that the term in curly brackets is the integral of the pdf of a Wishart distribution
(2.224, AM 2005) over the entire space and thus sums to one. Finally, we obtain the posterior pdf (7.15,
AM 2005) by dividing the joint pdf by the marginal pdf:
fpo (B, )
f (iT , B, )
f (iT )
N
= 7 |F,1 | 2 |1 |
1
2
||
K+1 N 1
2
0
1
2
||
1 N 1
2
e 2 tr(1 1 ) .
(7.112)
B| N B1 , (T1 )1 , 1
F,1 ,
(7.113)
W 1 , (1 1 )1 .
(7.114)
1
1
W 1 ,
,
1
(7.115)
and:
and:
B| N B1 , , 1
T1 F,1
.
(7.116)
(7.117)
(7.118)
B| N(B1 , (T1 )1 , 1
F,1 ) .
(7.119)
and:
From (2.182, AM 2005) and (2.224, AM 2005) the joint NIW (normal-inverse-Wishart) pdf of B and
reads:
f (B, ) = f (B|)f ()
N
= 1 |F,1 | 2 |1 |
1
2
||
1 +KN 1
2
(7.120)
h i0
h i0 0
e , vech
e
vec B
,
(7.121)
we impose the first-order condition on the logarithm of the joint pdf (7.120):
ln f 2 +
1 + K N 1
ln ||
2
1
tr {[(B B1 )T1 F,1 (B B1 )0 + 1 1 ] } .
2
(7.122)
d ln f
1
tr (1 + K N 1)1 a d
2
tr {T1 F,1 (B B1 )0 dB} ,
(7.123)
where:
a (B B1 )T1 F,1 (B B1 )0 + 1 1 .
(7.124)
d ln f tr {G d} + tr {GB dB} ,
(7.125)
Therefore:
where:
1
(1 + K N 1)1 a
2
GB T1 F,1 (B B1 )0 .
(7.126)
(7.127)
Using (A.120, AM 2005) and the duplication matrix (A.113, AM 2005) to get rid of the redundancies of
d in (7.125) we obtain:
0
(7.128)
(7.129)
(7.130)
Applying the first-order conditions to (7.129) and (7.130) and re-substituting (7.124) we obtain the mode
of the factor loadings:
e B1 ,
B
(7.131)
1
1 .
1 + K N 1
(7.132)
d(d ln f ) =
(7.133)
The first term in (7.133) can be expressed using (A.107, AM 2005), (A.106, AM 2005) the duplication
matrix (A.113, AM 2005) to get rid of the redundancies of d as follows:
0
tr 1 (d)1 d = vec [d] vec 1 (d)1
= vec [d] (1 1 ) vec [d]
= vec [d] (1 1 ) vec [d]
(7.134)
(7.135)
1
1 + K N 1
e
e 1 DN vech [d]
vech [d] D0N
2
0
e F,1 )KKN vec [dB] .
T1 vec [dB] KN K (
(7.136)
d(d ln f )|B,
e
e =
Therefore from (A.117, AM 2005) and (A.121, AM 2005) and substituting back (7.132) we obtain:
1 + K N 1
2 ln f
= T1
KN K (1
1 F,1 )KKN
0
1
vec [B] vec [B] B,
e
e
2 ln f
= 0(N (N +1)/2)2 (N K)2
0
vech [] vec [B] B,
e
e
1
12
2 ln f
=
D0 (1 1 )DN .
vech() vech()0 B,
2 1 + K N 1 N
e
e
(7.137)
MDis {}
1
2 ln f
SB
=
0(N (N +1)/2)2 (N K)2
0 e
,
(7.138)
(7.139)
(7.140)
(7.141)
B| N(B0 , (T0 )1 , 1
F,0 ) .
(7.142)
and:
From (2.182, AM 2005) and (2.224, AM 2005) the joint prior pdf of B and Σ is:
f (B, ) = f (B|)f ()
N
0
2
= 1 |F,0 |
e
||
N
2
0 N 1
2
|0 |
0
2
e 2 tr(0 0 )
||
(7.143)
0 +KN 1
2
To determine the unconditional pdf of B we have to compute the marginal in (7.143). Defining:
2 (B B0 )T0 F,0 (B B0 )0 + 0 0 ,
(7.144)
we obtain:
Z
f (B)
f (B, )d
Z
1 |F,0 | 2 |0 |
= 2 |F,0 |
N
2
|0 |
= 2 |F,0 | 2 |0 |
0
2
0
2
0
2
||
0 +KN 1
2
0 +K
2
0 +K
2
|2 |
|2 |
e 2 tr{2 } d
Z
3 |2 |
0 +K
2
||
0 +KN 1
2
12 tr{2 }
d
(7.145)
where we have used the fact that the term in curly brackets is the integral of the Wishart pdf (2.224, AM 2005) over the entire space and thus it sums to one. Thus, substituting again (7.144), we obtain that the marginal pdf (7.145) reads:
N
f (B) = 2 |F,0 | 2 |0 |
0
2
|0 0 + (B B0 )T0 F,0 (B B0 )0 |
0 +K
2
(7.146)
||
0
2
| + vv0 |
0 +K
2
0 +K
2
|| IN + 1 vv0
0 +K
K
2
= || 2 IN + 1 vv0
0 +K
K
2
= || 2 IK + v0 1 v
.
= ||
0
2
(7.147)
(7.148)
0 0 ,
(7.149)
(7.150)
K
2
f (B) = 4 |F,0 | 2 |0 0 |
0 +K
IN + (0 0 )1 (B B0 )T0 F,0 (B B0 )0 2 .
(7.151)
N
)(
)
0
0
0
(B B0 )
.
IK + T0 F,0 (B B0 )0
(0 + K N )
(7.152)
Comparing with (2.199, AM 2005) we see that this is the pdf of a matrix-variate Student t distribution with the following parameters:

B ∼ St( ν₀ + K − N, B₀, ν₀Σ₀/(ν₀ + K − N), (T₀Σ_{F,0})⁻¹ ) .  (7.153)
(7.154)
C2 : g G g ,
(7.155)
and:
where G is a K N matrix and (g , g) are K-dimensional vectors. From (7.91, AM 2005) we obtain the
allocation function:
(, ) argmax
0 diag(pT )(1 + )
0 pT =wT
gGg
1 0
diag(pT ) diag(pT )
2
.
(7.156)
We can solve this problem by means of Lagrange multipliers. We define the Lagrangian:
1 0
diag(pT ) diag(pT )
2
0 pT ( )0 B ,
L 0 diag(pT )(1 + )
(7.157)
where is the multiplier relative to the equality constraint 0 pT = wT and (, ) are the multipliers
relative to the additional inequality constraints (7.155) and satisfy the Kuhn-Tucker conditions:
, 0
N
X
k Gkn g n =
n=1
(7.158)
N
X
k Gkn g n = 0,
k = 1, . . . , K .
(7.159)
n=1
Therefore, defining:
e [diag(pT )]1 G0 ( ) ,
(7.160)
1 0
diag(pT ) diag(pT ) .
2
(7.161)
This is the Lagrangian of the optimization (7.156) with the constraints (7.154) but without the constraints
(7.155). Its solution is (6.39, AM 2005). After substituting (7.90, AM 2005) in that expression we obtain
the respective allocation function:
(e
, ) 7 (e
, ) [diag(pT )]
e
wT 10 1
e+
1
1
.
10 1 1
(7.162)
This can be inverted, by pinning down specific values for the covariance matrix and solving the ensuing
implicit align:
1
10
1
e
1 1 =
0
1 1
diag(pT )
wT
1
10
(7.163)
(7.164)
and from (7.160) the implied returns that include the constraints (7.155) read:
1
e () + [diag(pT )]
c =
G0 ( ) .
(7.165)
Solution of E 279
In our example (7.91, AM 2005), consider an investor who has no risk propensity, i.e. such that ζ → 0 in his exponential utility function. Then the quadratic term becomes overwhelming in the index of satisfaction, which becomes independent of the expected returns:

CE(α) ≈ −(1/(2ζ)) α' diag(p_T) Σ diag(p_T) α .  (7.166)
(7.167)

C₂ : α ≥ 0 .  (7.168)

(7.169)

where:

L(α, λ, ν) ≡ α' diag(p_T) Σ diag(p_T) α − λ α'p_T − ν'α ,  (7.170)

(7.171)

and the Kuhn-Tucker conditions imply:

ν_n = 0  whenever  α_n > 0 .  (7.172)

(7.173)

(7.174)

(7.175)
reads:

l(i_T|θ) ≡ Π_{t=1}^T f_X(x_t|θ) .  (7.176)

Assume a prior for the parameters, f₀(θ). Then the posterior distribution of the parameters reads:

π(θ|i_T) = l(i_T|θ) f₀(θ) / ∫ l(i_T|θ) f₀(θ) dθ ,  (7.177)

see (7.15, AM 2005). To generate samples from the posterior distribution we use the Metropolis-Hastings algorithm. First we select a one-parameter family of candidate-generating densities q(θ, θ̃), which satisfies ∫ q(θ, θ̃) dθ̃ = 1. Then we define the function:

α(θ, θ̃) ≡ min{ l(i_T|θ̃) f₀(θ̃) q(θ̃, θ) / ( l(i_T|θ) f₀(θ) q(θ, θ̃) ), 1 } .  (7.178)

In particular, if we choose a symmetric function q(θ, θ̃) = q(θ̃, θ), this function simplifies to:

α(θ, θ̃) ≡ min{ l(i_T|θ̃) f₀(θ̃) / ( l(i_T|θ) f₀(θ) ), 1 } .  (7.179)
(7.180)

X_t ∼ N( μ, Σ ) ,  (7.181)

where:

μ ≡ (0, 0, 0)' ,  Σ ≡ ( 1  ρ₁₂  ρ₁₃ ; ρ₁₂  1  ρ₂₃ ; ρ₁₃  ρ₂₃  1 ) .  (7.182)

In this situation the joint distribution of the returns is fully determined by three parameters:

θ ≡ (ρ₁₂, ρ₁₃, ρ₂₃)' .  (7.183)

(7.184)

Since Σ is positive definite, the domain Θ must be a proper subset of (−1, 1) × (−1, 1) × (−1, 1). For instance, θ ≡ (0.9, 0.9, −0.9)' is not a feasible value. Assume an uninformative uniform prior for the correlations. In other words, assume that θ is uniformly distributed on its domain:

θ ∼ U(Θ) .  (7.185)

Write a MATLAB script in which you generate 10,000 simulations from (7.185). In three subplots plot the histograms of ρ₁₂, ρ₁₃ and ρ₂₃ respectively, showing how the uniform prior implies non-uniform marginal distributions for each of the correlations.

Hint. Generate a uniform distribution on (−1, 1)³, then discard the simulations such that Σ is not positive definite.

Solution of E 281
See the MATLAB script S_CorrelationPriorUniform.
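A minimal sketch of the accept/reject scheme suggested in the hint:

% Minimal sketch of the uniform prior on correlation matrices.
J = 10000; R = zeros(J, 3); n = 0;
while n < J
    r = 2*rand(1, 3) - 1;                       % candidate (rho12,rho13,rho23)
    C = [1 r(1) r(2); r(1) 1 r(3); r(2) r(3) 1];
    if min(eig(C)) > 0                          % keep only positive definite
        n = n + 1; R(n, :) = r;
    end
end
for k = 1:3, subplot(3, 1, k); hist(R(:, k), 30); end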
Write a MATLAB script in which you:
- Subplot the histogram of the marginal distribution of the posterior of μ and superimpose the profile of its analytical pdf; then subplot the histogram of the marginal distribution of the posterior of 1/σ² and superimpose the profile of its analytical pdf;
- Check that (7.4, AM 2005) holds by changing the relative weights of ν₀, T₀ with respect to T.

Solution of E 282
See the MATLAB script S_AnalyzeNormalInverseWishart.
Chapter 8
Evaluating allocations
E 283 Optimal allocation as function of invariant parameters (www.8.1)
Prove the expressions (8.33, AM 2005)-(8.34, AM 2005).
Solution of E 283
Replacing the market parameters (8.21, AM 2005) in the certainty-equivalent (8.25, AM 2005) we obtain:

S = α' diag(p_T)(1 + μ) − (1/(2ζ)) α' diag(p_T) Σ diag(p_T) α .  (8.1)

Substituting in this expression the optimal allocation (8.32, AM 2005), which we report here:

α⋆ = [diag(p_T)]⁻¹ [ ζΣ⁻¹(1 + μ) + λ⋆Σ⁻¹1 ] ,  λ⋆ ≡ (w_T − ζ 1'Σ⁻¹(1 + μ)) / (1'Σ⁻¹1) ,  (8.2)

we obtain:

S = ζ(1+μ)'Σ⁻¹(1+μ) + λ⋆(1+μ)'Σ⁻¹1 − (1/(2ζ)) [ ζ²(1+μ)'Σ⁻¹(1+μ) + 2ζλ⋆(1+μ)'Σ⁻¹1 + λ⋆² 1'Σ⁻¹1 ] .  (8.3)

Defining:

A ≡ 1'Σ⁻¹1 ,  B ≡ 1'Σ⁻¹μ ,  C ≡ μ'Σ⁻¹μ ,  (8.4)

the terms linear in λ⋆ cancel, and collecting the remaining terms we obtain:

S = (ζ/2) (C − B²/A) + w_T (1 + B/A) − (1/(2ζ)) (w_T²/A) .  (8.5)
T Σ̂ ∼ W(T − 1, Σ) .  (8.6)

Therefore, defining the estimator of the variance of the objective:  (8.7)

v̂ ≡ α' diag(p_T) Σ̂ diag(p_T) α ∼ Ga( T − 1, (α' diag(p_T) Σ diag(p_T) α)/T ) ,  (8.8)

where Ga denotes the gamma distribution. Thus from (1.113, AM 2005) the expected value of v̂ reads:

E{v̂} = ((T−1)/T) α' diag(p_T) Σ diag(p_T) α ,  (8.9)

and its variance:

Var{v̂} = 2 ((T−1)/T²) (α' diag(p_T) Σ diag(p_T) α)² .  (8.10)

Similarly, define:

ê ≡ α' diag(p_T)(1 + μ̂) .  (8.11)

From (2.163, AM 2005) we obtain:

ê ∼ N( α' diag(p_T)(1 + μ), (α' diag(p_T) Σ diag(p_T) α)/T ) ,  (8.12)

i.e. ê is normally distributed around the true expected value of the objective, with variance:  (8.13)

Var{ê} = (α' diag(p_T) Σ diag(p_T) α)/T .  (8.14)
Chapter 9
Optimizing allocations
E 286 Allocation of the resampled allocation (www.9.1) *
Prove expression (9.92, AM 2005) for the resampled allocation.
Solution of E 286
We can express the resampled allocation as:

α_rs[i_T] ≡ E{ α⋆[î_T] } ,  (9.1)

where, from the optimal allocation (8.32, AM 2005), each resampled allocation has the form α⋆ = [diag(p_T)]⁻¹ [ ζVu + ((w_T − ζ1'Vu)/(1'V1)) V1 ], with u the resampled sample mean and V the inverse of the resampled sample covariance. Since u and V are independent and E{u} = μ̂, taking the expectation term by term yields:

α_rs[i_T] = [diag(p_T)]⁻¹ [ ζ ( E{V} μ̂ − E{ (1'Vμ̂)/(1'V1) V1 } ) + w_T E{ V1/(1'V1) } ] .  (9.2)
μ̂ ∼ N( μ_t, Σ_t/T ) ,  (9.3)

where μ_t and Σ_t are the true underlying parameters. Therefore from (9.53, AM 2005) we have:

(μ̂ − μ_t)' (Σ_t/T)⁻¹ (μ̂ − μ_t) ∼ χ²_N .  (9.4)

Thus:

F_{χ²_N}(q) = P{ (μ̂ − μ_t)'(Σ_t/T)⁻¹(μ̂ − μ_t) ≤ q } = P{ (μ_t − μ̂)'(Σ_t)⁻¹(μ_t − μ̂) ≤ q/T } .  (9.5)

By applying the quantile function (1.17, AM 2005) to both sides of the above equality we obtain:

p = P{ (μ_t − μ̂)'(Σ_t)⁻¹(μ_t − μ̂) ≤ Q_{χ²_N}(p)/T } ,  (9.6)

i.e. the random ellipsoid:

Θ̂ ≡ { μ ∈ R^N | Ma²(μ, μ̂, Σ_t) ≤ Q_{χ²_N}(p)/T }  (9.7)

contains the true parameter μ_t with probability p.
(9.8)
(9.9)
f_{X|v}(x|v) = f_{X,V}(x, v) / f_V(v) ,  (9.10)

where f_{X,V} is the joint distribution of X and V and:

f_V(v) ≡ ∫ f_{X,V}(x, v) dx

is the marginal pdf of V. On the other hand, by the definition of the conditional density we also have:

f_{X,V}(x, v) = f_{V|g(x)}(v|x) f_X(x) .  (9.11)

Thus:

f_{X|v}(x|v) = f_{V|g(x)}(v|x) f_X(x) / ∫ f_{V|g(x)}(v|x) f_X(x) dx .  (9.12)

In our case:

f_X(x) = (2π)^{−N/2} |Σ|^{−1/2} e^{−(1/2)(x−μ)'Σ⁻¹(x−μ)} ,  (9.13)

f_{V|Px}(v|x) = (2π)^{−K/2} |Ω|^{−1/2} e^{−(1/2)(v−Px)'Ω⁻¹(v−Px)} ,  (9.14)

and thus:

f_{X,V}(x, v) ∝ |Σ|^{−1/2} |Ω|^{−1/2} e^{−(1/2)[(x−μ)'Σ⁻¹(x−μ) + (v−Px)'Ω⁻¹(v−Px)]} .  (9.15)

(9.16)

(9.17)

Consider the term in square brackets in the exponent and define:

μ̃(v) ≡ (Σ⁻¹ + P'Ω⁻¹P)⁻¹ (Σ⁻¹μ + P'Ω⁻¹v) .  (9.18)
This implies:
e 0 (1 + P0 1 P)e
+
+ 0 1 + v0 1 v
e 0 (1 + P0 1 P)e
(9.19)
e )0 (1 + P0 1 P)(x
e) + .
= (x
where:
e 0 (1 + P0 1 P)e
0 1 + v0 1 v
.
(9.20)
(9.21)
(9.22)
(9.23)
Therefore:
e
= v0 ( + PP0 )1 v 2v0 ( + PP0 )1 v
e 0 ( + PP0 )1 v
e + 0 (1 1 (1 + P0 1 P)1 1 )
+v
e 0 ( + PP0 )1 v
e
v
(9.24)
e )0 ( + PP0 )1 (v v
e) + ,
= (v v
where:
e 0 ( + PP0 )1 v
e.
0 (1 1 (1 + P0 1 P)1 1 ) v
From (9.23) we see that:
(9.25)
(9.26)
does not depend on either V or X. Therefore neither does in (9.25). Substituting (9.24) back in (9.19)
the expression in square brackets in (9.15) reads:
e (v))0 (1 + P0 1 P)(x
e (v))
[ ] = (x
(9.27)
e )0 ( + PP0 )1 (v v
e) + .
+ (v v
Therefore (9.15) becomes:
21
fX,V (x, v) ||
21
||
e
(
e 2 (x(v))
e
+P0 1 P)(x(v))
0 1
| + PP0 |
0 1
e 2 (vev) (+PP )
(ve
v)
(9.28)
(9.29)
(9.30)
= | + PP0 | .
To summarize, from (9.28) we see that:
fX,V (x, v) fX|v (x|v)fV (v) ,
(9.31)
1
0
1
e
e
(1 +P0 1 P)(x(v))
fX|v (x|v) 1 + P0 1 P 2 e 2 (x(v))
,
(9.32)
where:
and:
12
fV (v) | + PP0 |
0 1
e 2 (vev) (+PP )
(ve
v)
(9.33)
Since (9.32) and (9.33) are normal pdfs, it follows that the random variable X conditioned on V = v is normally distributed:

X | V = v ∼ N( μ̃(v), Σ̃ ) ,  (9.34)

and that V is itself normally distributed:

V ∼ N( ṽ, Ω + PΣP' ) .  (9.35)

Using the identity:

(Σ⁻¹ + P'Ω⁻¹P)⁻¹ P'Ω⁻¹ = ΣP' (PΣP' + Ω)⁻¹ ,  (9.36)-(9.37)

which can be easily checked by left-multiplying both sides by (PΣP' + Ω), the expression for the expected value μ̃(v) in (9.34) can be further simplified as follows:

μ̃(v) = μ + ΣP' (PΣP' + Ω)⁻¹ (v − Pμ) .  (9.38)

Similarly, from (9.32) and using (A.90, AM 2005) the covariance matrix in (9.34) reads:

Σ̃ ≡ (Σ⁻¹ + P'Ω⁻¹P)⁻¹ = Σ − ΣP' (PΣP' + Ω)⁻¹ PΣ .  (9.39)
(9.40)
Q
P
,
(9.41)
where Q is an arbitrary full-rank (N K) N matrix. It will soon become evident that the choice of
Q is irrelevant. Then we compute the pdf of the following random variable:
Y SX =
QX
PX
YA
YB
(9.42)
(9.43)
where:
A
B
Q
P
,
(9.44)
and:
T
TAA
TBA
TAB
TBB
QQ0
PQ0
QP0
PP0
.
(9.45)
At this point we can compute the conditional pdf. From (2.164, AM 2005) we obtain:
YA |yB N(, ) ,
(9.46)
TAB T1
BB TBA
(9.47)
(9.48)
E {YA |YB = v}
v
Q + QP0 (PP0 )1 (v P)
v
E {Y|YB = v} =
=
(9.49)
.
(9.50)
(9.51)
which is the expression of the conditional expectation that we were looking for.
Cov {YA |YB = v} 0
0
0
0
=
0 0
QQ0 QP0 (PP0 )1 PQ0
=
0
Cov {Y|YB = v} =
(9.52)
0
0
.
(9.53)
a = Cov {SX|PX = v}
Q( P0 (PP0 )1 P)Q0 0
=
0
0
0
0 1
0
Q( P (PP ) P)Q Q( P0 (PP0 )1 P)P0
=
P( P0 (PP0 )1 P)Q0 P( P0 (PP0 )1 P)P0
Q 0
Q
=
P0 (PP0 )1 P
.
P
P
(9.54)
Since S is invertible, we can pre- and post-multiply (9.53) by S⁻¹ and finally obtain:

Cov{X | PX = v} = Σ − ΣP' (PΣP')⁻¹ PΣ ,  (9.55)

which is the expression of the conditional covariance that we were looking for.
E 293 Computations for the robust version of the leading example (www.9.5) *
Show that using the uncertainty set (9.108, AM 2005) in (9.105, AM 2005) the ensuing robust allocation
decision solves problem (9.111, AM 2005).
Solution of E 293
From (8.33, AM 2005), (8.25, AM 2005) and (8.29, AM 2005) we obtain:
argmin
s.t.
max
b p
0 1
A1 (10 1 )2 )
2 (
1
+wT (1 + A1 10 1 wT 2A
)
1
0
0
diag(pT )(1 + ) + 2
(9.56)
pT = wT
bp,
(1 )wT 0 diag(pT )(1 + ) + 20 0, for all
where:
A 10 1 1
erf
(9.57)
(2c 1)
(9.58)
diag(pT ) diag(pT ) ,
(9.59)
and:
bp
b )0 1 (
b)
| (
QN (p)
T
.
(9.60)
max
s.t.
cp
0 1
2A
(10 1 )2 + wAT (10 1 )
1
0
diag(pT ) + 2
0
)
(9.61)
0 pT = wT
bp,
0 diag(pT ) 20 wT , for all
or:
(
argmin
(
s.t.
max
cp
0 1
2A
(10 1 )2 + wAT (10 1 )
1
0
diag(pT ) + 2
0
0 pT = wn
T
o
20 0 diag(pT ) wT ,
max
cp
)
(9.62)
o
20 0 diag(pT ) wT .
(9.63)
The inner maximization over μ ∈ Θ̂_p is a maximization constrained on an ellipsoid, and the contour surfaces of the objective are parallel hyperplanes. The tangency condition is achieved when the gradients are parallel. For the gradient of the ellipsoid we have:

g ∝ Σ̂⁻¹ (μ − μ̂) .  (9.64)
(9.65)
(9.66)
b p we have:
Since
QN (p)
b )0 1 (
b)
= (
T
b )0 1 1 (
b)
= (
(9.67)
= 2 0 diag(pT ) diag(pT ) .
Therefore:
s
=
QN (p)
1
.
T 0 diag(pT ) diag(pT )
(9.68)
QN (p)/T
diag(pT ) ,
0 diag(pT ) diag(pT )
(9.69)
where the choice of the sign follows from the maximization (9.63). Therefore the original problem (9.62)
reads:
(
argmin
(
s.t.
max
cp
wT 0 1
1
0 T +
1 0 diag(pT ) + 0
A
2
)
0 pT = wT q
N (p)/T
20 + Q
0 0 diag(pT )b
wT ,
0
(9.70)
where:
1 0 1
1
11 .
2
2A
(9.71)
Solution of E 294
Taking into account the elliptical/certain specifications (9.118, AM 2005)-(9.119, AM 2005), the robust
mean-variance problem (9.117, AM 2005) can be written as follows:
(
(i)
r
)
0
min { }
= argmax
C
b v (i) ,
0
s.t.
(9.72)
where:
b | ( m)0 T1 ( m) q 2 .
(9.73)
(9.74)
n
o
b | ( m)0 E1/2 1/2 E0 ( m) q 2 .
(9.75)
Then:
1 1/2 0
E ( m) ,
q
(9.76)
which implies:
m + qE1/2 u .
(9.77)
n
o
b m + qE1/2 u | u0 u 1 .
(9.78)
Then:

min_{μ∈Θ̂} { α'μ } = min_{u'u≤1} { α'(m + q E Λ^{1/2} u) } = α'm + q min_{u'u≤1} ⟨ Λ^{1/2}E'α, u ⟩ ,  (9.79)

where ⟨·,·⟩ denotes the standard scalar product (A.5, AM 2005). This scalar product reaches a minimum when the vector u is opposite to the other term in the product:

ũ ≡ − Λ^{1/2}E'α / ‖Λ^{1/2}E'α‖ ,  (9.80)

in which case:

⟨ Λ^{1/2}E'α, ũ ⟩ = − ⟨ Λ^{1/2}E'α, Λ^{1/2}E'α ⟩ / ‖Λ^{1/2}E'α‖ = − ‖Λ^{1/2}E'α‖ .  (9.81)
C
s.t.
b v (i) .
0
(9.82)
0
0 T
(i)
=
argmax
C
s.t.
b v (i) .
0
(9.83)
E 295 Computations for the robust mean-variance problem II (www.9.6) (see E 294) **
Show that if the investment constraints are regular enough, the problem (9.130, AM 2005) can be cast in the form of a second-order cone programming problem.

Solution of E 295
To put the problem (9.130, AM 2005) in the SOCP form (6.55, AM 2005) we introduce an auxiliary variable z:

(α_r^{(i)}, z_r^{(i)}) = argmax_{α∈C, z} { α'm − q z }
s.t. ‖Λ^{1/2}E'α‖ ≤ z ,  α'Σ̂α ≤ v^{(i)} .  (9.84)

Furthermore, considering the spectral decomposition (A.70, AM 2005) of the estimate of the covariance:

Σ̂ ≡ F Γ^{1/2} Γ^{1/2} F' ,  (9.85)

we can write:

α'Σ̂α = ⟨ Γ^{1/2}F'α, Γ^{1/2}F'α ⟩ ,  (9.86)

so that the variance constraint in (9.84) reads:

‖Γ^{1/2}F'α‖ ≤ √(v^{(i)}) .  (9.87)

If the investment constraints C are regular enough, this problem is in the SOCP form (6.55, AM 2005).
(9.88)
and let us assume that C represents the full-budget constraint and the long-only constraints:
N
X
xn = 1
(9.89)
n=1
xn 0 ,
n = 1, . . . , N .
(9.90)
(9.91)
(9.92)
0
Dlo
D Db1
Db2
flo
f fb1 ;
fb2
Estimation error:
h
i
A01 q1/2 E0 |0N
B01 [00N |1]
(9.93)
d1 0
C1 0N ;
Variance:
h
i
A02 1/2 F0 |0N
B02 00N +1
d2 vi
(9.94)
C2 0N .
Then our problem (9.87) reads:
x = argmin {b0 x} ,
(9.95)
subject to:
D0 x + f 0
kA01 x + C1 k b01 x + d1
kA02 x
+ C2 k
b02 x
(9.96)
+ d2 .
Solution of E 297
Consider:
Z
fM (m)
f, (m)f, ()d
0
e 2 (m) (m) e 2 () ()
d
N p
N p
(2) 2 ||
(2) 2 ||
Z
1
(2)N
=p p
e 2 a d ,
|| ||
Z
(9.97)
where:
a (m )0 1 (m ) + ( )0 1 ( )
= m0 1 m + 0 1 2m0 1 + 0 1 + 0 1 20 1
0
= (
) 2 (
m+
) + m
m+
(9.98)
Defining:
b (1 + 1 )1 (1 m + 1 ) ,
(9.99)
we can write:
a = ( b)0 (1 + 1 )( b)
b0 (1 + 1 )b + m0 1 m + 0 1 .
(9.100)
= 2 e 2 c ,
where 2 is a normalization constant and:
(9.101)
c b0 (1 + 1 )bm0 1 m + 0 1
= (m0 1 + 0 1 )(1 + 1 )1 (1 m + 1 )
+ m0 1 m + 0 1
= (m + 1 )0 1 (1 + 1 )1 1 (m + 1 )
+ m0 1 m + 0 1
= m0 1 (1 + 1 )1 1 m 0 1 (1 + 1 )1 1
(9.102)
2m0 1 (1 + 1 )1 1 + m0 1 m + 0 1
= m0 1 1 (1 + 1 )1 1 m
2m0 1 (1 + 1 )1 1
0 1 (1 + 1 )1 1 + 0 1 .
Defining:
T 1 1 (1 + 1 )1 1
Tg 1 (1 + 1 )1 1
0
(9.103)
1 1
we obtain:
c = m0 Tm 2m0 Tg + h
= m0 Tm 2m0 Tg + G0 Tg G0 Tg + h
0
(9.104)
= (m G) T(m g) g Tg + h .
Since:
1 1 1
g = 1 1 (1 + 1 )1 1
( + 1 )1 1 ,
(9.105)
(9.106)
(9.107)
M N(g, T1 ) .
(9.108)
or in other words:
In our example:
.
T
(9.109)
Therefore:
1 1 1
g = 1 1 (1 + T 1 )1 1
( + T 1 )1 T 1
1
1
T
1
= 1 1
1+T
1+T
1+T
T
=
1
T
1+T
=,
(9.110)
and:
T = 1 1 (1 + T 1 )1 1
1
= 1 1
1+T
T
=
1 .
1+T
(9.111)
E 298 The robustness uncertainty set for the mean vector (www.9.8) *
Show that the optimization step in (9,139, AM 2005):
min {w0 }
(9.112)
where:
b
| ( 1 )0 1
1 ( 1 )
1 1
q2
T1 1 2
,
(9.113)
min
c
w 1
1 1
q2
T1 1 2
!
1/2
1/2 0
F w
,
(9.114)
where:
1 F1/2 1/2 F0 ,
(9.115)
where F is the juxtaposition (A.62, AM 2005) of the eigenvectors and is the diagonal matrix (A.65,
AM 2005) of the eigenvalues.
Solution of E 298
We can write (9.113) as follows:
b
1/2
| ( 1 ) F
1/2
1 1
F ( 1 )
q2
T1 1 2
0
.
(9.116)
1 1
q2
T1 1 2
1/2
1/2 F0 ( 1 ) ,
(9.117)
which implies:
= 1 +
1 1
q2
T1 1 2
1/2
F1/2 u ,
(9.118)
1 +
1 1
q2
T1 1 2
1/2
)
1/2
u|u u 1
(9.119)
Since:
*
+
1/2
1 1
1/2
2
w = w, 1 +
q
F u
T1 1 2
*
+
1/2
1 1
1/2 0
2
hw, 1 i
q
F w, u ,
T1 1 2
(9.120)
(9.121)
we have:
*
u u1
= w 1
1 1
T1 1 2
+
1/2
1 1
1/2 0
2
q
F w, u
T1 1 2
1/2
1/2 0
2
q
F w
.
(9.122)
E 299 The robustness uncertainty set for the covariance matrix (www.9.8) **
Show that the optimization step in (9,139, AM 2005):
max {w0 w} ,
(9.123)
where:
b
h
i0
h
i
b CE S1 vech
b CE q 2 ,
| vech
(9.124)
1
1 ,
1 + N + 1
(9.125)
with:
b CE =
and:
S =
212
1
(D0 (1 1
.
1 )DN )
(1 + N + 1)3 N 1
(9.126)
1
max {w w} =
+
+
N +1
c
1
2
212 q
(1 + N + 1)3
1/2 #
1/2 0
2
F w
.
(9.127)
Solution of E 299
Consider the spectral decomposition of the rescaled dispersion parameter (7.78):
1
(D0N (11 1
EE0 ,
1 )DN )
(9.128)
(9.129)
(9.130)
h
i0
h
i
b CE E1/2 1/2 E0 vech
b CE
vech
2
212 q
(1 + N + 1)3
.
(9.131)
u
2
212 q
(1 + N + 1)3
1/2
h
i
b CE ,
1/2 E0 vech
(9.132)
which implies:
h
i
b
vech [] vech CE +
2
212 q
(1 + N + 1)3
1/2
E1/2 u .
(9.133)
N (N +1)/2
b CE +
vech
2
212 q
(1 + N + 1)3
b CE +
vech
s=1
1/2
1/2
2
212 q
s
(1 + N + 1)3
u | u0 u 1
1/2
e(s) us | u0 u 1 .
(9.134)
Each eigenvector e(s) represents the non-redundant entries of a matrix. To consider all the elements we
simply multiply by the duplication matrix (A.113, AM 2005). Then from (9.133) we obtain:
w0 w = (w0 w0 ) vec []
= (w0 w0 )DN vech []
*
=
=
h
i
b CE +
w ) , vech
D0N (w0
0 0
2
212 q
(1 + N + 1)3
1/2
+
1/2
h
iE
b CE
D0N (w0 w0 )0 , vech
+
*
1/2
2
212 q
1/2
0
0
0 0
E u
+ DN (w w ) ,
(1 + N + 1)3
(9.135)
b CE w
= w0
1/2 D
E
2
212 q
+
1/2 E0 D0N (w0 w0 )0 , u .
3
(1 + N + 1)
Therefore:
b CE w
max {w0 w} = w0
c
+
0b
2
212 q
(1 + N + 1)3
1/2
D
E
max
1/2 E0 D0N (w0 w0 )0 , u
0
u u1
= w CE w
1/2
2
212 q
1/2 0 0
+
E DN (w0 w0 )0
.
3
(1 + N + 1)
(9.136)
1
w0 1 w
1 + N + 1
1/2
2
212 q
1/2 0 0
0
0
E
D
(w
w
)
+
.
N
(1 + N + 1)3
(9.137)
(9.138)
1
1
e0 ,
e N (1 1 )1 D
=D
(D0N (1
N
1
1
1 1 )DN )
(9.139)
e N = (w0 w0 ) ,
(w0 w0 )DN D
(9.140)
and:
see Magnus and Neudecker (1999). Now consider the square of the norm in (9.137). Using (9.139) and
(9.140) we obtain:
2
a
1/2 E0 D0N (w0 w0 )0
= (w0 w0 )DN E1/2 1/2 E0 D0N (w0 w0 )0
1
1 0
= (w0 w0 )DN (D0N (1
DN (w0 w0 )0
1 1 )DN )
e N (1 1 )D
e 0 D0 (w0 w0 )0
= (w0 w0 )DN D
N N
h
i0
0
0
e
e N)
= (w w )DN DN (1 1 ) (w0 w0 )(DN D
(9.141)
(9.142)
= (w 1 w) .
Therefore (9.137) yields:
1
w0 1 w
1 + N + 1
1/2
2
212 q
+
(w0 1 w)
(1 + N + 1)3
"
1/2 #
2
1
212 q
=
+
(w0 1 w) .
1 + N + 1
(1 + N + 1)3
max w0 w =
c
(9.143)
1
max {w0 w} =
+
+
N +1
c
1
2
212 q
(1 + N + 1)3
1/2 #
1/2 0
2
F w
.
(9.144)
2
1
1 q
T1 1 2
(i)
1
1 +N +1
!1/2
(9.145)
v (i)
2
212 q
(1 +N +1)3
1/2 ,
(9.146)
(9.147)
(9.148)
wC,z
subject to:
1/2 0
F w
z/
q
1/2 0
(i)
F w
.
This problem is in the SOCP form (6.55, AM 2005).
(9.149)
(9.150)
- Compute the robust mean-variance efficient frontier in terms of relative weights, assuming the standard long-only and full-investment constraints;
- Plot the efficient frontier in the plane of weights and standard deviation.

Hint. Use the CVX package available at http://cvxr.com/cvx/.

Solution of E 301
See the MATLAB script S_MeanVarianceCallsRobust.
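A minimal CVX sketch of one point of the robust frontier; m, S, q and v below stand for the estimated mean, the covariance estimate, the ellipsoid radius and the variance budget, all assumed inputs of this sketch:

% Minimal CVX sketch of one robust frontier point.
% m, S, q, v, N are assumed inputs (mean, covariance, radius, budget, size).
[E, L] = eig(S); G = sqrt(L)*E';     % factorization S = G'*G
cvx_begin quiet
    variable w(N)
    maximize( m'*w - q*norm(G*w) )   % worst-case expected value on ellipsoid
    subject to
        sum(w) == 1;                 % full investment
        w >= 0;                      % long only
        quad_form(w, S) <= v;        % variance budget
cvx_end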
(9.151)

Compute the inputs of the mean-variance approach, namely expectations and covariances of the linear returns.

Solution of E 302
From (2.219, AM 2005)-(2.220, AM 2005), for a lognormal variable:

Y ∼ LogN( μ, Σ ) ,  (9.152)

the expected values read:

E{Y} = e^{μ + (1/2) diag(Σ)}  (element-wise) ,  (9.153)

and the second moments:

E{Y_m Y_n} = E{Y_m} E{Y_n} e^{Σ_{mn}} .  (9.154)

Since the linear returns are an affine function of the prices at the horizon,  (9.155)-(9.156)

we can easily compute the expectations E{R} and the second moments E{RR'}. The covariance then follows from:

Cov{R} = E{RR'} − E{R} E{R'} .  (9.157)
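A minimal sketch of these computations; mu, Sig and pT below stand for the normal parameters of the compounded returns over the horizon and the vector of current prices, all assumed inputs:

% Minimal sketch of the mean-variance inputs from lognormal prices.
% mu, Sig, pT are assumed inputs (compounded-return parameters and prices).
EY   = pT.*exp(mu + diag(Sig)/2);    % expected prices, as in (9.153)
EYY  = (EY*EY').*exp(Sig);           % second moments of the prices (9.154)
ER   = EY./pT - 1;                   % expected linear returns
CovR = (EYY - EY*EY')./(pT*pT');     % covariance of linear returns (9.157)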
Write a MATLAB script in which you:
- Construct a pick matrix that sets views on the spread between the compounded returns of the first and the last security;
- Set a one-standard-deviation bullish view on that spread;
- Use the market-based Black-Litterman formula (9.44, AM 2005) to compute the normal parameters that reflect those views;
- Map the results into expectations and covariances for the linear returns;
- Compute and plot the efficient frontier under the same constraints as above.
Solution of E 303
See the MATLAB script S_BlackLittermanBasic.
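A minimal sketch of the Black-Litterman update with a single spread view; mu, Sig and N are assumed inputs (the prior normal parameters of the compounded returns and the number of securities), and the scaling of the view uncertainty by the confidence c is one common convention, an assumption of this sketch:

% Minimal sketch of the Black-Litterman update with one spread view.
% mu, Sig, N are assumed inputs (prior parameters, number of securities).
v = zeros(1, N); v(1) = 1; v(end) = -1;   % pick matrix: first minus last
eta = 1;                                  % one-standard-deviation bullish view
view = v*mu + eta*sqrt(v*Sig*v');         % view on the spread
c = 0.5;                                  % confidence in the view
Om = (1 - c)/c*(v*Sig*v');                % view uncertainty (one convention)
muBL  = mu  + Sig*v'/(v*Sig*v' + Om)*(view - v*mu);   % posterior mean
SigBL = Sig - Sig*v'/(v*Sig*v' + Om)*v*Sig;           % posterior covariance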
X = B Z₁ + (1 − B) Z₋₁ ,  (9.158)

where:

Z₁ ∼ N(1, 1) ,  Z₋₁ ∼ N(−1, 1) ,  (9.159)

B is Bernoulli with P{B = 1} ≡ 1/2, and all the variables are independent. Write a MATLAB script in which you compute and plot the posterior market distribution that is the most consistent with the view:

Ẽ{X} ≡ 0.5 .  (9.160)

Hint. Use the MATLAB package Entropy Pooling available at www.mathworks.com/matlabcentral/fileexchange/21307.

Solution of E 304
See the MATLAB script S_EntropyView.
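A minimal self-contained sketch of the view (9.160): with a single expectation constraint, the minimum-relative-entropy posterior is an exponential tilting of the scenario probabilities, a special case of what the Entropy Pooling package handles in general (the sample size is a placeholder):

% Minimal sketch of entropy pooling with one view on the mean.
J = 10000; B = rand(J, 1) < 0.5;
X = B.*(1 + randn(J, 1)) + (1 - B).*(-1 + randn(J, 1)); % prior scenarios (9.158)
view = 0.5;                              % view on the mean (9.160)
% posterior probabilities: p_j proportional to exp(lambda*x_j), with lambda
% chosen so that the reweighted mean hits the view (exponential tilting)
tilt = @(lam) exp(lam*(X - max(X)))./sum(exp(lam*(X - max(X))));
lam  = fzero(@(lam) tilt(lam)'*X - view, 0);
p    = tilt(lam);                        % posterior scenario probabilities
fprintf('posterior mean: %.3f\n', p'*X);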
Bibliography
Abramowitz, M., Stegun, I. A., 1974. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover.
Albanese, C., Jackson, K., Wiberg, P., 2004. A new Fourier transform algorithm for value at risk. Quantitative Finance 4 (3), 328-338.
Bertsimas, D., Lauprete, G. J., Samarov, A., 2004. Shortfall as a risk measure: Properties, optimization and applications. Journal of Economic Dynamics and Control 28, 1353-1381.
Black, F., Scholes, M. S., 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81, 637-654.
Chib, S., Greenberg, E., 1995. Understanding the Metropolis-Hastings algorithm. The American Statistician 49, 327-335.
Dickey, J. M., 1967. Matrix-variate generalizations of the multivariate t distribution and the inverted multivariate t distribution. Annals of Mathematical Statistics 38, 511-518.
Embrechts, P., Klueppelberg, C., Mikosch, T., 1997. Modelling Extremal Events. Springer.
Fang, K. T., Kotz, S., Ng, K. W., 1990. Symmetric Multivariate and Related Distributions. CRC Press.
Feuerverger, A., Wong, A. C., 2000. Computation of value at risk for nonlinear portfolios. Journal of Risk 3, 37-55.
Gourieroux, C., Laurent, J. P., Scaillet, O., 2000. Sensitivity analysis of values at risk. Journal of Empirical Finance 7, 225-245.
Kendall, M., Stuart, A., 1969. The Advanced Theory of Statistics, Volume 1, 3rd Edition. Griffin.
Ledoit, O., Santa-Clara, P., Wolf, M., 2003. Flexible multivariate GARCH modeling with an application to international stock markets. Review of Economics and Statistics 85, 735-747.
Magnus, J. R., Neudecker, H., 1999. Matrix Differential Calculus with Applications in Statistics and Econometrics, Revised Edition. Wiley.
Merton, R. C., 1976. Option pricing when underlying stocks are discontinuous. Journal of Financial Economics 3, 125-144.
Meucci, A., 2005. Risk and Asset Allocation. Springer. URL http://symmys.com
Meucci, A., 2009. Review of statistical arbitrage, cointegration, and multivariate Ornstein-Uhlenbeck. Working paper. URL http://symmys.com/node/132
Meucci, A., 2010a. Annualization and general projection of skewness, kurtosis, and all summary statistics. GARP Risk Professional August, 55-56. URL http://symmys.com/node/136
Meucci, A., 2010b. Common misconceptions about beta - hedging, estimation and horizon effects. GARP Risk Professional June, 42-45. URL http://symmys.com/node/165
Meucci, A., 2010c. Factors on Demand - building a platform for portfolio managers, risk managers and traders. Risk 23 (7), 84-89. URL http://symmys.com/node/164
Meucci, A., 2010d. Review of dynamic allocation strategies: Convex versus concave management. Working paper. URL http://symmys.com/node/153
Meucci, A., 2010e. Review of linear factor models: Unexpected common features and the systematic-plus-idiosyncratic myth. Working paper. URL http://www.symmys.com/node/336
Rau-Bredow, H., 2004. Value at risk, expected shortfall, and marginal risk contribution. In: Szego, G. (Ed.), Risk Measures for the 21st Century. Wiley, pp. 61-68.
Rudin, W., 1976. Principles of Mathematical Analysis, 3rd Edition. McGraw-Hill.