Question 1:
a,
Given $n$ i.i.d. random variables, the event $\{\min(X_1,\dots,X_n)\le x\}$ means that at least one $X_i$ is at most $x$. The complementary event is that every $X_i$ is greater than $x$; since the $X_i$ are independent and identically distributed, this has probability $(1-F_X(x))^n$. Hence the probability that at least one $X_i$ is at most $x$ is
$$P(\min(X_1,\dots,X_n)\le x)=1-(1-F_X(x))^n.$$
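As a quick numerical sanity check of this formula (not part of the original solution; the Uniform(0,1) choice and all parameters are my own), one can compare the empirical CDF of the minimum of $n$ uniform draws against $1-(1-x)^n$:

```python
import random

random.seed(0)

def empirical_min_cdf(n, x, trials=20000):
    """Fraction of trials where the min of n Uniform(0,1) draws is <= x."""
    hits = 0
    for _ in range(trials):
        if min(random.random() for _ in range(n)) <= x:
            hits += 1
    return hits / trials

n, x = 5, 0.2
theoretical = 1 - (1 - x) ** n      # 1 - (1 - F_X(x))^n with F_X(x) = x
empirical = empirical_min_cdf(n, x)
print(empirical, theoretical)
```

The two values agree to within Monte Carlo error, which is what the formula predicts for any continuous $F_X$.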
b,
With the given density $f_X(t)=\frac{t}{\theta^2}e^{-t/\theta}$ for $t\ge 0$, substituting into
$$F_X(x)=\int_0^x f_X(t)\,dt$$
yields
$$F_X(x)=\frac{1}{\theta^2}\int_0^x t\,e^{-t/\theta}\,dt=\Big[-\frac{t}{\theta}e^{-t/\theta}\Big]_0^x+\frac{1}{\theta}\int_0^x e^{-t/\theta}\,dt$$
$$=-\frac{x}{\theta}e^{-x/\theta}-e^{-x/\theta}+1=1-e^{-x/\theta}\Big(\frac{x}{\theta}+1\Big).$$
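A numerical check of this antiderivative (my own sketch; the value $\theta=2$ and the grid are arbitrary): integrate the density on a fine grid with the trapezoidal rule and compare against the closed form.

```python
import math

theta = 2.0  # arbitrary scale parameter chosen for the check

def f(t):
    # Density f_X(t) = (t / theta^2) e^{-t/theta}
    return (t / theta**2) * math.exp(-t / theta)

def F_closed(x):
    # Closed-form CDF derived above
    return 1 - math.exp(-x / theta) * (x / theta + 1)

def F_numeric(x, steps=100000):
    # Trapezoidal integration of f from 0 to x
    h = x / steps
    s = 0.5 * (f(0) + f(x))
    for i in range(1, steps):
        s += f(i * h)
    return s * h

x = 3.0
err = abs(F_numeric(x) - F_closed(x))
print(err)
```

The discrepancy is at the level of trapezoidal-rule error, confirming the integration by parts.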
c,
As the lights are connected in series, the circuit fails as soon as the first bulb fails. So to find the lifetime of the whole string, we just need the time at which the first light bulb fails. Thus, using what we found in part (a), let $T=\min(X_1,\dots,X_n)$. Then
$$F_T(x)=1-(1-F_X(x))^n=1-\Big[e^{-x/\theta}\Big(1+\frac{x}{\theta}\Big)\Big]^n.$$
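This composite formula can also be checked by simulation (again my own sketch; $\theta$, $n$ and $x$ are arbitrary, and a Gamma(2, $\theta$) draw is generated as the sum of two independent Exponential($\theta$) draws):

```python
import math
import random

random.seed(1)
theta, n = 2.0, 4

def gamma2_sample():
    # Gamma(shape=2, scale=theta) as a sum of two independent Exp(mean=theta) draws
    return random.expovariate(1 / theta) + random.expovariate(1 / theta)

def F_T(x):
    # 1 - [e^{-x/theta} (1 + x/theta)]^n from parts (a) and (b)
    return 1 - (math.exp(-x / theta) * (1 + x / theta)) ** n

x, trials = 2.0, 20000
hits = sum(min(gamma2_sample() for _ in range(n)) <= x for _ in range(trials))
diff = abs(hits / trials - F_T(x))
print(diff)
```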
d,
Let $Y=\sqrt{n}\,T$. Therefore
$$F_Y(y)=P(Y\le y)=F_T\!\left(\frac{y}{\sqrt{n}}\right)=1-\left[e^{-y/(\sqrt{n}\,\theta)}\left(1+\frac{y}{\sqrt{n}\,\theta}\right)\right]^n.$$
So as we take $n\to\infty$,
$$\lim_{n\to\infty}F_Y(y)=1-\lim_{n\to\infty}\exp\!\left[n\left(-\frac{y}{\sqrt{n}\,\theta}+\ln\!\left(1+\frac{y}{\sqrt{n}\,\theta}\right)\right)\right].$$
From here, after discussing with a fellow peer, S.zhu pointed out that I should consider the Taylor expansion of $\ln(1+x)$:
$$\ln\!\left(1+\frac{y}{\sqrt{n}\,\theta}\right)=\frac{y}{\sqrt{n}\,\theta}-\frac{1}{2}\frac{y^2}{n\theta^2}+\frac{1}{3}\frac{y^3}{n^{3/2}\theta^3}-\dots$$
Hence
$$\lim_{n\to\infty}n\left(-\frac{y}{\sqrt{n}\,\theta}+\frac{y}{\sqrt{n}\,\theta}-\frac{1}{2}\frac{y^2}{n\theta^2}+\frac{1}{3}\frac{y^3}{n^{3/2}\theta^3}-\dots\right)=-\frac{y^2}{2\theta^2}.$$
So
$$\lim_{n\to\infty}F_Y(y)=1-e^{-y^2/(2\theta^2)},$$
which is the CDF of a Rayleigh distribution with parameter $\theta$.
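The Rayleigh limit can be observed numerically (my own sketch, not part of the solution; $n=400$ is an arbitrary "large" value, and the finite-$n$ gap plus Monte Carlo noise accounts for the tolerance):

```python
import math
import random

random.seed(2)
theta, n = 1.0, 400  # large n to approach the limiting distribution

def gamma2_sample():
    # Gamma(2, theta) lifetime as a sum of two Exp(mean=theta) draws
    return random.expovariate(1 / theta) + random.expovariate(1 / theta)

def rayleigh_cdf(y):
    return 1 - math.exp(-y**2 / (2 * theta**2))

y, trials = 1.0, 5000
hits = sum(math.sqrt(n) * min(gamma2_sample() for _ in range(n)) <= y
           for _ in range(trials))
diff = abs(hits / trials - rayleigh_cdf(y))
print(diff)
```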
e,
Differentiating the limiting distribution function from part (d) gives the density
$$f_Y(y)=\frac{y}{\theta^2}e^{-y^2/(2\theta^2)},\qquad y\ge 0,$$
so
$$E(Y)=\int_0^\infty \frac{t^2}{\theta^2}e^{-t^2/(2\theta^2)}\,dt=\frac{1}{2}\int_{-\infty}^{\infty}\frac{t^2}{\theta^2}e^{-t^2/(2\theta^2)}\,dt\quad(\text{since the integrand is an even function}).$$
Also, $\int_{-\infty}^{\infty}t^2\,\frac{1}{\sqrt{2\pi}\,\theta}e^{-t^2/(2\theta^2)}\,dt$ is simply the second moment of the $N(0,\theta^2)$ distribution. According to https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/, the second moment can be calculated by $E[X^2]=\mu^2+\sigma^2$. Therefore
$$E(Y)=\frac{\sqrt{2\pi}\,\theta}{2\theta^2}\cdot(0+\theta^2)=\frac{\sqrt{2\pi}\,\theta}{2}=\theta\sqrt{\frac{\pi}{2}}.$$
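A direct numerical integration of the Rayleigh mean (my own check; $\theta=1.5$ and the truncation point $20\theta$ are arbitrary, the tail beyond being negligible):

```python
import math

theta = 1.5  # arbitrary parameter for the check

def integrand(t):
    # t * f_Y(t) = (t^2 / theta^2) e^{-t^2 / (2 theta^2)}
    return (t**2 / theta**2) * math.exp(-t**2 / (2 * theta**2))

# Trapezoidal rule on [0, 20*theta]
upper, steps = 20 * theta, 200000
h = upper / steps
s = 0.5 * (integrand(0) + integrand(upper))
for i in range(1, steps):
    s += integrand(i * h)
mean_numeric = s * h

err = abs(mean_numeric - theta * math.sqrt(math.pi / 2))
print(err)
```

The numeric value matches $\theta\sqrt{\pi/2}$ to the accuracy of the quadrature.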
Question 2a,
Using the fact that $\int_0^z p\,u^{p-1}\,du=z^p$,
$$E(Z^p)=\int_0^\infty z^p f_Z(z)\,dz=\int_0^\infty\!\!\int_0^z p\,u^{p-1} f_Z(z)\,du\,dz.$$
Swapping the order of integration over the region $0\le u\le z<\infty$,
$$E(Z^p)=\int_0^\infty p\,u^{p-1}\int_u^\infty f_Z(z)\,dz\,du=p\int_0^\infty u^{p-1}\big(1-F_Z(u)\big)\,du,$$
as required.
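This identity can be verified numerically for a concrete case (my own sketch; the choice $Z\sim\mathrm{Exp}(1)$ with $p=3$, where $E[Z^p]=\Gamma(p+1)$ and $1-F_Z(u)=e^{-u}$, is an assumption for illustration):

```python
import math

p = 3
# Right-hand side: p * integral of u^{p-1} (1 - F_Z(u)) du with 1 - F_Z(u) = e^{-u}
upper, steps = 50.0, 200000
h = upper / steps

def g(u):
    return p * u ** (p - 1) * math.exp(-u)

s = 0.5 * (g(0) + g(upper))
for i in range(1, steps):
    s += g(i * h)
rhs = s * h

# Left-hand side: E[Z^p] = Gamma(p + 1) for an Exp(1) variable
err = abs(rhs - math.gamma(p + 1))
print(err)
```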
b,
$l(m)$ is defined as $E(|X-m|)$. Applying part (a) with $p=1$ and $Z=|X-m|$:
$$l(m)=\int_0^\infty\big(1-F_{|X-m|}(u)\big)\,du=\int_0^\infty 1-P(|X-m|\le u)\,du$$
$$=\int_0^\infty 1-P(-u\le X-m\le u)\,du=\int_0^\infty 1-P(m-u\le X\le m+u)\,du$$
$$=\int_0^\infty 1-F(m+u)+F(m-u)\,du.$$
Differentiating with respect to $m$,
$$\frac{d\,l(m)}{dm}=\int_0^\infty \frac{\partial}{\partial m}\big(1-F(m+u)+F(m-u)\big)\,du=\int_0^\infty f(m-u)-f(m+u)\,du$$
$$=\big[-F(m-u)-F(m+u)\big]_0^\infty=\big(-F(-\infty)-F(\infty)\big)-\big(-F(m)-F(m)\big)=-1+2F(m).$$
To minimise $l(m)$, we set the derivative $\frac{d\,l(m)}{dm}$ to zero:
$$-1+2F(m)=0\;\Longrightarrow\;F(m)=\frac{1}{2}\;\Longrightarrow\;m=F^{-1}\Big(\frac{1}{2}\Big),$$
i.e. the median of $X$.
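The conclusion that the median minimises mean absolute deviation is easy to observe empirically (my own sketch; the Exp(1) sample and the candidate offsets are arbitrary choices):

```python
import random
import statistics

random.seed(3)
# Sample from an asymmetric distribution (Exp(1), whose median is ln 2)
data = [random.expovariate(1.0) for _ in range(10001)]

def mean_abs_dev(m):
    # Empirical analogue of l(m) = E|X - m|
    return sum(abs(x - m) for x in data) / len(data)

med = statistics.median(data)
# The sample median should beat nearby candidate centres
candidates = [med - 0.2, med - 0.1, med, med + 0.1, med + 0.2]
best = min(candidates, key=mean_abs_dev)
print(best == med)
```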
Question 3a,
To find the maximum likelihood estimator (MLE) of $\theta$, we consider the natural logarithm of the likelihood, which makes the differentiation easier. For the Pareto density $f(x)=\frac{\theta k^\theta}{x^{\theta+1}}$, $x\ge k$:
$$\ln L(k,\theta\mid x)=\sum_{i=1}^n \ln\frac{\theta k^\theta}{x_i^{\theta+1}}=n\ln\theta+n\theta\ln k-(\theta+1)\sum_{i=1}^n\ln x_i,$$
$$\frac{d\ln L(\theta)}{d\theta}=\frac{n}{\theta}+n\ln k-\sum_{i=1}^n\ln x_i.$$
Introducing $\hat{\theta}$ and $\hat{k}$: the likelihood is increasing in $k$, so $k$ is pushed up to the smallest observation, $\hat{k}=\min_i x_i$. Setting the derivative to zero,
$$\hat{\theta}=\frac{n}{\sum_{i=1}^n\ln x_i-n\ln\hat{k}}=\frac{n}{\sum_{i=1}^n\ln(x_i/\hat{k})}.$$
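These estimators can be checked on simulated Pareto data (my own sketch; the parameter values and the inverse-CDF sampler, $k\,U^{-1/\theta}$ for $U\sim$ Uniform(0,1), are assumptions for the check):

```python
import math
import random

random.seed(4)
k_true, theta_true, n = 2.0, 3.0, 50000

# Inverse-CDF sampling: if U ~ Uniform(0,1] then k * U**(-1/theta) is Pareto(k, theta)
xs = [k_true * (1 - random.random()) ** (-1 / theta_true) for _ in range(n)]

k_hat = min(xs)
theta_hat = n / sum(math.log(x / k_hat) for x in xs)

print(k_hat, theta_hat)
```

With a large sample both estimates land close to the true parameters.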
b,
To show that $\hat{k}$ follows a Pareto distribution, we can find the cumulative distribution function of $\hat{k}$. From Question 1(a), with $F_X(x)=1-(k/x)^\theta$:
$$F_{\hat{k}}(x)=P(\min(X_i)\le x)=1-(1-F_X(x))^n=1-\Big(\frac{k}{x}\Big)^{n\theta},$$
which is again a Pareto distribution, with scale $k$ and shape $n\theta$.
c,
The density of $\hat{k}$ is $f_{\hat{k}}(x)=\frac{n\theta k^{n\theta}}{x^{n\theta+1}}$, so
$$E(\hat{k})=\int_k^\infty x\,\frac{n\theta k^{n\theta}}{x^{n\theta+1}}\,dx=n\theta k^{n\theta}\int_k^\infty\frac{1}{x^{n\theta}}\,dx=n\theta k^{n\theta}\Big[\frac{-1}{(n\theta-1)\,x^{n\theta-1}}\Big]_k^\infty=\frac{n\theta k}{n\theta-1}.$$
Hence
$$\mathrm{Bias}=E(\hat{k})-k=\frac{n\theta k}{n\theta-1}-k=\frac{k}{n\theta-1}\neq 0,$$
so $\hat{k}$ is biased. To make the expectation equal $k$, we need to multiply $\hat{k}$ by a constant; in this case the constant is $\frac{n\theta-1}{n\theta}$, giving the unbiased estimator $\frac{n\theta-1}{n\theta}\hat{k}$.
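The bias formula and the correction can be checked by simulation (my own sketch; the small $n$, the parameter values and the trial count are arbitrary choices):

```python
import random

random.seed(5)
k, theta, n, trials = 1.0, 2.0, 5, 200000

def k_hat():
    # Min of n Pareto(k, theta) draws, sampled by inverse CDF
    return min(k * (1 - random.random()) ** (-1 / theta) for _ in range(n))

mean_k_hat = sum(k_hat() for _ in range(trials)) / trials
predicted = n * theta * k / (n * theta - 1)          # E(k_hat) = n*theta*k / (n*theta - 1)
corrected = (n * theta - 1) / (n * theta) * mean_k_hat

print(mean_k_hat, predicted, corrected)
```

The simulated mean of $\hat{k}$ matches the predicted $\frac{n\theta k}{n\theta-1}$, and the corrected estimator averages to $k$.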
d,
Let $H=\min(X_1,X_2,\dots,X_n)$. By Question 1(a), it is clear to see that
$$F_H(x)=1-(1-F_X(x))^n=1-\Big(1-\Big(1-\Big(\frac{k}{x}\Big)^\theta\Big)\Big)^n=1-\Big(\frac{k}{x}\Big)^{\theta n}=1-\frac{k^{\theta n}}{x^{\theta n}}.$$
part(a),
Assume $X_n\xrightarrow{p}X$, i.e. $\lim_{n\to\infty}P(|X_n-X|\ge\varepsilon)=0$ for every $\varepsilon>0$. Splitting the expectation on the event $\{|X_n-X|\ge\varepsilon\}$,
$$\lim_{n\to\infty}E\left[\frac{|X_n-X|}{1+|X_n-X|}\right]=\lim_{n\to\infty}E\left[I_{\{|X_n-X|<\varepsilon\}}\frac{|X_n-X|}{1+|X_n-X|}\right]+\lim_{n\to\infty}E\left[I_{\{|X_n-X|\ge\varepsilon\}}\frac{|X_n-X|}{1+|X_n-X|}\right].$$
Now $\lim_{n\to\infty}E\left[I_{\{|X_n-X|<\varepsilon\}}\frac{|X_n-X|}{1+|X_n-X|}\right]\le\varepsilon$, as on this event the ratio is bounded by $\varepsilon$ (because $\frac{x}{1+x}\le x$). Also, since $\frac{|X_n-X|}{1+|X_n-X|}\le 1$,
$$\lim_{n\to\infty}E\left[I_{\{|X_n-X|\ge\varepsilon\}}\frac{|X_n-X|}{1+|X_n-X|}\right]\le\lim_{n\to\infty}P(|X_n-X|\ge\varepsilon)=0.$$
Therefore
$$\lim_{n\to\infty}E\left[\frac{|X_n-X|}{1+|X_n-X|}\right]\le\varepsilon+0=\varepsilon.$$
As $\varepsilon$ is positive, real and arbitrary, and our limit is less than it, we can conclude that $\lim_{n\to\infty}E\left[\frac{|X_n-X|}{1+|X_n-X|}\right]=0$.
part(b),
Using the hint provided and a late night discussion with L.Wright: the function $f(x)=\frac{x}{1+x}$ is increasing. So, on the event $\{|X_n-X|>\varepsilon\}$,
$$\frac{\varepsilon}{1+\varepsilon}\le\frac{|X_n-X|}{1+|X_n-X|},$$
and hence
$$\frac{\varepsilon}{1+\varepsilon}\,E\!\left[I_{\{|X_n-X|>\varepsilon\}}\right]\le E\!\left[I_{\{|X_n-X|>\varepsilon\}}\frac{|X_n-X|}{1+|X_n-X|}\right]\le E\!\left[\frac{|X_n-X|}{1+|X_n-X|}\right].$$
Taking limits,
$$\frac{\varepsilon}{1+\varepsilon}\lim_{n\to\infty}P(|X_n-X|>\varepsilon)\le\lim_{n\to\infty}E\!\left[\frac{|X_n-X|}{1+|X_n-X|}\right]=0\quad\text{(by part (a))}.$$
As $\frac{\varepsilon}{1+\varepsilon}$ is simply a positive constant, we can conclude that $\lim_{n\to\infty}P(|X_n-X|>\varepsilon)=0$. This is the definition of convergence in probability, i.e. $X_n\xrightarrow{p}X$.
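The equivalence can be illustrated with a toy sequence that converges in probability (my own sketch, not part of the solution; taking $X=0$ and $X_n$ the sample mean of $n$ Uniform(-1,1) draws is an assumption, with the law of large numbers driving $X_n\to 0$):

```python
import random

random.seed(6)

def metric_expectation(n, trials=2000):
    # Monte Carlo estimate of E[|X_n - X| / (1 + |X_n - X|)] with X = 0
    total = 0.0
    for _ in range(trials):
        xn = sum(random.uniform(-1, 1) for _ in range(n)) / n
        d = abs(xn)
        total += d / (1 + d)
    return total / trials

e_small, e_large = metric_expectation(10), metric_expectation(1000)
print(e_small, e_large)
```

The expectation shrinks as $n$ grows, consistent with $E\big[\frac{|X_n-X|}{1+|X_n-X|}\big]\to 0$ exactly when $X_n\xrightarrow{p}X$.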
Bibliography
NA. 2010. Little bit more Gaussian. [ONLINE] Available at: https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/. [Accessed 22 May 2016].