
Probability of an event:
P(A) = number of outcomes favorable to A / total number of outcomes; 0 ≤ P(A) ≤ 1;
P(A+B)=P(A)+P(B)-P(AB)
Complement A′ of A: P(A+A′)=1; P(AA′)=0
Joint probability: P(AB)=P(A)P(B|A)=P(B)P(A|B); P(AB)=0 if A and B are mutually exclusive
Independent A and B: P(AB)=P(A)P(B)
If A can occur only together with one of the mutually exclusive events A1, A2, …, An (law of total probability):
P(A) = P(A1)P(A|A1) + P(A2)P(A|A2) + … + P(An)P(A|An)
Bayes' Theorem: P(B)P(A|B) = P(A)P(B|A), so P(A|B) = P(B|A)P(A)/P(B),
where P(B) = P(B|A)P(A) + P(B|A′)P(A′)
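As a numeric sketch of the two formulas above (the prior P(A) = 0.01 and both conditional probabilities are made-up illustration values, not from the text):

```python
# Bayes' Theorem on made-up numbers: A is a rare event (prior 1%),
# B is an observed signal for it.
p_a = 0.01              # P(A)
p_b_given_a = 0.99      # P(B|A)
p_b_given_not_a = 0.05  # P(B|A')

# Total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
```

Even with a strong signal, the posterior stays small because the prior is small.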
Probability Mass Function: f(xi) = P(X = xi); f(x) = 0 for x ≠ xi; 0 ≤ f(xi) ≤ 1; Σi f(xi) = 1
Probability Density Function: P(x1 < X < x2) = ∫ from x1 to x2 of f(x) dx

Cumulative Distribution Function: F(x) = P(X ≤ x); P(a ≤ X ≤ b) = F(b) − F(a); P(X > k) = 1 − F(k)


1st Moment: Expected Value E(X) = x1P(x1) + x2P(x2) + … + xnP(xn)
If a and b are constants: E(b)=b; E(bX)=bE(X); E(aX+b)=aE(X)+E(b)=aE(X)+b;
E(X+Y)=E(X)+E(Y);
If X and Y are independent, then E(XY)=E(X)E(Y);
2nd Moment: Variance Var(X) = E[(X − μx)²] = σx² = E(X²) − [E(X)]²;
If b is a constant: Var(b)=0; Var(bX)=b²Var(X); Var(X+b)=Var(X);
Coefficient of Variation: CV = σx/μx × 100%;
If X and Y are independent: Var(X±Y) = Var(X) + Var(Y);
Var(aX+bY) = a²Var(X) + b²Var(Y);
If X and Y are not independent: Var(aX+bY) = a²Var(X) + b²Var(Y) + 2abCov(X,Y);
Cov(X,Y) = E[(X−E(X))(Y−E(Y))] = E(XY) − E(X)E(Y) = ρxy·σx·σy;
Cov(a+bX, c+dY) = bd·Cov(X,Y)
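The Var(aX+bY) rule can be checked numerically on a small paired data set (the numbers and the constants a, b are arbitrary); this sketch uses population (divide-by-n) moments so the identity holds exactly:

```python
xs = [1.0, 2.0, 4.0, 7.0]
ys = [2.0, 1.0, 5.0, 6.0]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

def pvar(v, m):
    # population variance E[(V - m)^2]
    return sum((t - m) ** 2 for t in v) / len(v)

# Cov(X, Y) = E[(X - E(X))(Y - E(Y))]
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

a, b = 2.0, 3.0
z = [a * x + b * y for x, y in zip(xs, ys)]
lhs = pvar(z, sum(z) / n)                               # Var(aX + bY) directly
rhs = a**2 * pvar(xs, mx) + b**2 * pvar(ys, my) + 2 * a * b * cov
```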
Chebyshev's Inequality: for any distribution, P(|X − μ| ≥ kσ) ≤ 1/k², k > 1
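A quick seeded simulation illustrating the bound for k = 2 on standard normal draws (the 1/k² = 0.25 ceiling sits far above the empirical tail frequency for this well-behaved distribution):

```python
import random

random.seed(0)
k = 2.0
n = 100_000
draws = [random.gauss(0.0, 1.0) for _ in range(n)]  # mu = 0, sigma = 1

# empirical P(|X - mu| >= k * sigma)
tail_freq = sum(1 for x in draws if abs(x) >= k) / n
bound = 1.0 / k**2  # Chebyshev's ceiling
```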

3rd Moment: Skewness S = E[(X − μ)³]/σ³

Negative-skewed: S < 0; Mean < Median < Mode
Symmetric: S = 0; Mean = Median = Mode
Positive-skewed: S > 0; Mode < Median < Mean

4th Moment: Kurtosis K = E[(X − μ)⁴]/σ⁴



Leptokurtic: K > 3; excess K > 0; fat tails
Mesokurtic: K = 3; excess K = 0; normal tails
Platykurtic: K < 3; excess K < 0; thin tails

Probability Distributions:
Binomial: P(X = x) = C(n, x) pˣ (1 − p)ⁿ⁻ˣ
Per trial (Bernoulli): E(Y) = p, Var(Y) = p(1 − p); overall: E(X) = np, Var(X) = np(1 − p);
Mode: (n + 1)p − 1 < x < (n + 1)p;
When n is large and p is small, approximately Poisson with λ = np; as n grows, approximately Normal.

Poisson: P(X = k) = λᵏ e⁻ᵏ·… written out: P(X = k) = λᵏ e^(−λ)/k!
k: the number of successes per unit; λ: the average (expected) number of successes per unit;
E(X) = Var(X) = λ; as λ grows large, approximately Normal.
Uniform
PDF: f(x) = 1/(b − a) for a ≤ x ≤ b; = 0 otherwise;
CDF: F(x) = 0 for x ≤ a;
     = (x − a)/(b − a) for a < x ≤ b;
     = 1 for x > b
E(X) = (a + b)/2; Var(X) = (b − a)²/12

Normal
If X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1)
If X ~ N(μx, σx²) and Y ~ N(μy, σy²):
(K1X + K2Y) ~ N(K1μx + K2μy, K1²σx² + K2²σy² + 2K1K2·Cov(X, Y))

Central Limit Theorem
For samples of size n (n ≥ 30) from any population with mean μ and variance σ², the sample mean x̄ has mean μ and variance σ²/n, and
(x̄ − μ)/(σ/√n) ~ N(0, 1)
Standard error of the mean: σ/√n
The population σx is almost always unknown, so use the sample standard deviation s of the sample in place of σx:
s² = (1/(n − 1)) Σ (xi − x̄)²
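A seeded sketch of the CLT statement: sample means of Uniform(0, 1) draws (μ = 0.5, σ = 1/√12) cluster around μ with spread close to the theoretical standard error σ/√n:

```python
import math
import random

random.seed(42)
n = 30          # sample size
trials = 2000   # number of sample means to draw

means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

mu, sigma = 0.5, 1 / math.sqrt(12)   # Uniform(0, 1) moments
se_theory = sigma / math.sqrt(n)     # standard error of the mean

grand_mean = sum(means) / trials
se_emp = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / (trials - 1))
```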
Lognormal: if ln X is normal, X is lognormal
Chi-square: E(X) = k, Var(X) = 2k (k degrees of freedom);
(n − 1)s²/σ² ~ χ²(n − 1)

t-distribution: t = (x̄ − μ)/(s/√n) ~ t(n − 1);
E(X) = 0; Var(X) = n/(n − 2); fat tails

F-distribution: F = s1²/s2² ~ F(n1 − 1, n2 − 1); bounded below at 0
Confidence Interval: point estimate ± reliability factor × standard error
Population normal with known variance: x̄ ± z(α/2)·σ/√n;
Population normal with unknown variance: x̄ ± t(α/2, n − 1)·s/√n;
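A minimal sketch of the known-variance case (the sample numbers and the 1.96 reliability factor for 95% confidence are illustrative):

```python
import math

x_bar = 170.0   # sample mean (illustrative)
sigma = 10.0    # known population standard deviation
n = 25
z = 1.96        # reliability factor for 95% confidence

se = sigma / math.sqrt(n)                # standard error of the mean
ci = (x_bar - z * se, x_bar + z * se)    # point estimate +/- z * se
```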


Hypothesis Testing Steps:
Step 1: Assumptions
E.g.: H0: μ = 170 (the hypothesis to be rejected goes in H0)
H1: μ ≠ 170; one-tailed or two-tailed?
Step 2: Test statistic = (sample statistic − hypothesized value)/standard error;
σ known: z = (x̄ − μ0)/(σ/√n);
σ unknown: t = (x̄ − μ0)/(s/√n) ~ t(n − 1);
H0: σ² = σ0²: χ² = (n − 1)s²/σ0² ~ χ²(n − 1);
H0: σ1² = σ2²: F = s1²/s2² ~ F(n1 − 1, n2 − 1)
Step 3: Choose the distribution: z? t? χ²? F?
Step 4: Rejection region
σ known → z; σ unknown → t(n − 1); n ≥ 30 → z is acceptable; variance tests → χ² or F;
significance level α? one-tailed or two-tailed?
Read the critical value from the z, t, χ², or F table.
Step 5: Judge
If the test statistic falls in the rejection region, reject H0;
If P-value < α, reject H0; if P-value > α, do not reject H0;

Type I error: P(reject H0 | H0 true) = α; Type II error: P(fail to reject H0 | H0 false) = β; Power of test = P(reject H0 | H0 false) = 1 − β
α and β trade off: for a fixed sample size, lowering the Type I error rate raises the Type II error rate and lowers the power of the test.
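Steps 2 and 5 above can be sketched for a two-tailed z-test; the normal CDF is built from math.erf, and the sample numbers are made up:

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

x_bar, mu0 = 172.0, 170.0   # sample mean vs hypothesized mean (illustrative)
sigma, n = 10.0, 100
alpha = 0.05

z = (x_bar - mu0) / (sigma / math.sqrt(n))   # test statistic
p_value = 2 * (1 - norm_cdf(abs(z)))         # two-tailed P-value
reject_h0 = p_value < alpha
```

Here z = 2.0, which lies past the 1.96 critical value, so H0 is rejected at the 5% level.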

Regression with a Single Regressor

OLS: minimize the sum of squared residuals
Y = b0 + b1X + ε;
Assumptions: a linear relationship; X is uncorrelated with ε;
E(ε) = 0; Var(ε) is a constant; ε is uncorrelated across observations;
Result: the OLS estimators are Best Linear Unbiased Estimators (BLUE):
minimum variance / linear / unbiased (the estimator's expected value equals the true parameter);
b1 = Cov(X, Y)/Var(X) = ρ(x, y)·σy/σx; b0 = ȳ − b1·x̄;

Measures of fit:
The coefficient of determination R² = ρ² = ESS/TSS = 1 − RSS/TSS, 0 ≤ R² ≤ 1;
The standard error of the regression (SER) is an estimator of the standard deviation of the regression error ui: SER = √(RSS/(n − 2))
Terminology: ESS (explained sum of squares) is sometimes called SSR; RSS (residual sum of squares) is sometimes called SSE;
Adjusted R² = 1 − (1 − R²)·(n − 1)/(n − k − 1); k: the number of regressors

Source       Degrees of Freedom (df)   Sum of Squares (SS)   Mean Sum of Squares (MSS)
Regression   k = 1                     ESS                   ESS/k
Residual     n − 2                     RSS                   RSS/(n − 2)
Total        n − 1                     TSS
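The b1 = Cov/Var slope formula and the fit measures can be sketched on a tiny made-up data set (here y lies exactly on a line, so RSS = 0 and R² = 1):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var_x = sum((x - mx) ** 2 for x in xs) / n

b1 = cov / var_x        # slope: Cov(X, Y) / Var(X)
b0 = my - b1 * mx       # intercept: y-bar - b1 * x-bar

fitted = [b0 + b1 * x for x in xs]
rss = sum((y - f) ** 2 for y, f in zip(ys, fitted))   # residual sum of squares
tss = sum((y - my) ** 2 for y in ys)                  # total sum of squares
r2 = 1 - rss / tss
```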

Regression with Multiple Regressors

SER = √(RSS/(n − k − 1))

F-test: how well the set of independent variables, as a group, explains the variation in the dependent variable:
F = (ESS/k)/(RSS/(n − k − 1)) ~ F(k, n − k − 1) (always a one-tailed test)

Source       df          SS    MSS
Regression   k           ESS   ESS/k
Residual     n − k − 1   RSS   RSS/(n − k − 1)
Total        n − 1       TSS
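The F-statistic above, computed from illustrative sums of squares; note it can equivalently be written in terms of R²:

```python
# Illustrative ANOVA quantities for k = 2 regressors, n = 30 observations
k, n = 2, 30
ess, rss = 80.0, 120.0
tss = ess + rss

# F = (ESS/k) / (RSS/(n - k - 1))
f_stat = (ess / k) / (rss / (n - k - 1))

# Equivalent form via R^2 = ESS/TSS
r2 = ess / tss
f_from_r2 = (r2 / k) / ((1 - r2) / (n - k - 1))
```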

Monte Carlo
Geometric Brownian Motion (GBM) Model: ΔSt = S(t−1)·(μΔt + σε√Δt)
St = asset price; ε ~ N(0, 1); S ~ lognormal;
ΔSt = infinitesimally small price change over the interval Δt
Probability Distribution Selection:
1. …  2. Parameter Estimate  3. Best Fit  4. Subjective Guess;
The Bootstrap: S*(t+1) = St·(1 + R(m)), where R(m) is a return drawn at random from the historical sample;
Pros: includes fat tails, jumps, or any departure from the normal distribution; accounts for correlations across series;
Cons: small sample sizes; relies heavily on the assumption.
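A seeded one-path sketch of the GBM update ΔSt = S(t−1)(μΔt + σε√Δt); the drift, volatility, and step count are arbitrary illustration values:

```python
import math
import random

random.seed(7)
mu, sigma = 0.10, 0.20   # annual drift and volatility (illustrative)
dt = 1 / 252             # one trading day
steps = 252
s = 100.0                # starting price
path = [s]
for _ in range(steps):
    eps = random.gauss(0.0, 1.0)                       # eps ~ N(0, 1)
    s += s * (mu * dt + sigma * eps * math.sqrt(dt))   # dS = S(mu dt + sigma eps sqrt(dt))
    path.append(s)
```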
Random Number Generation:
Midsquare: 5381² = 28955161; take the middle four digits = 9551; random no. = 0.9551
Congruential: Xk = (a·X(k−1) + c) mod m; random no. = Xk/m
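Both generators sketched in a few lines; midsquare reproduces the 5381 example from the text, and the congruential constants (glibc's a, c, m) are illustration values, not from the text:

```python
def midsquare(seed):
    # square the 4-digit seed, zero-pad to 8 digits, keep the middle four
    sq = str(seed * seed).zfill(8)
    return int(sq[2:6])

def lcg(seed, a=1103515245, c=12345, m=2**31, count=5):
    # linear congruential: X_k = (a * X_{k-1} + c) mod m; uniform draw = X_k / m
    x, out = seed, []
    for _ in range(count):
        x = (a * x + c) % m
        out.append(x / m)
    return out

ms = midsquare(5381)   # middle four digits of 28955161 -> 9551
draws = lcg(2024)
```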

Estimate Volatilities:
Return: ui = ln(Si/S(i−1));
Sample variance from the last m returns: σn² = (1/(m − 1))·Σ(i=1..m) (u(n−i) − ū)²;
common simplifications: set ū = 0 and replace m − 1 with m, giving σn² = (1/m)·Σ u²(n−i);
EWMA: σn² = λσ²(n−1) + (1 − λ)u²(n−1);
GARCH(1,1): σn² = ω + αu²(n−1) + βσ²(n−1);
with ω = γ·V_L and γ + α + β = 1 (V_L: long-run average variance)
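One update step of each recursion (λ and the GARCH parameters ω, α, β are illustrative values; the EWMA inputs are chosen so u² equals the previous variance, leaving the estimate unchanged):

```python
def ewma_var(var_prev, u_prev, lam=0.94):
    # sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * u_{n-1}^2
    return lam * var_prev + (1 - lam) * u_prev**2

def garch_var(var_prev, u_prev, omega=0.000002, alpha=0.13, beta=0.86):
    # sigma_n^2 = omega + alpha * u_{n-1}^2 + beta * sigma_{n-1}^2
    return omega + alpha * u_prev**2 + beta * var_prev

v_ewma = ewma_var(0.0001, 0.01)   # u^2 == old variance -> unchanged
v_garch = garch_var(0.0001, 0.01)
```

For the GARCH parameters shown, γ = 1 − α − β = 0.01, so the weights sum to one as required.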
Estimate Correlation:
ρxy = Cov(x, y)/(σx·σy)
EWMA: Cov_n = λ·Cov(n−1) + (1 − λ)·x(n−1)y(n−1); σn² = λσ²(n−1) + (1 − λ)u²(n−1);
GARCH(1,1): Cov_n = ω + α·x(n−1)y(n−1) + β·Cov(n−1); σn² = ω + αu²(n−1) + βσ²(n−1).
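The same recursion for covariance, plus the correlation formula; all inputs are illustrative (and chosen so x·y equals the previous covariance, leaving the estimate unchanged):

```python
import math

def ewma_cov(cov_prev, x_prev, y_prev, lam=0.95):
    # cov_n = lam * cov_{n-1} + (1 - lam) * x_{n-1} * y_{n-1}
    return lam * cov_prev + (1 - lam) * x_prev * y_prev

def correlation(cov_xy, var_x, var_y):
    # rho = Cov(x, y) / (sigma_x * sigma_y)
    return cov_xy / math.sqrt(var_x * var_y)

cov_n = ewma_cov(0.0002, 0.01, 0.02)       # x*y == old covariance -> unchanged
rho = correlation(cov_n, 0.0004, 0.0004)   # with both variances 0.0004
```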
