
University of New South Wales

MATH 2901 Assignment

Question 1:
Given n independent and identically distributed random variables X_1, \ldots, X_n, the probability
P(X \le x) = P(\min(X_1, \ldots, X_n) \le x)
is the probability that at least one X_i is at most x. Also, note that the probability that at least ONE X_i is at most x is equivalent to
1 - P(X_i > x \text{ for all } i).
Since the X_i are independent and identically distributed, the probability that all X_i are greater than x is simply (1 - F_X(x))^n.
Hence the probability that at least ONE X_i is at most x is then
1 - (1 - F_X(x))^n.
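As a quick numerical sanity check of this identity (not part of the derivation), the sketch below simulates the minimum of n i.i.d. draws and compares its empirical CDF with 1 - (1 - F_X(x))^n; the choice of Exponential(1) for the X_i is an illustrative assumption only.

```python
# Sketch: Monte Carlo check that P(min(X_1,...,X_n) <= x) = 1 - (1 - F_X(x))^n.
# Assumption for illustration only: X_i ~ Exponential(1), so F_X(x) = 1 - exp(-x).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 100_000

samples = rng.exponential(scale=1.0, size=(reps, n))
mins = samples.min(axis=1)                       # one simulated minimum per replicate

for x in (0.05, 0.1, 0.3, 0.5):
    empirical = np.mean(mins <= x)               # empirical CDF of the minimum at x
    theoretical = 1 - (1 - (1 - np.exp(-x)))**n  # 1 - (1 - F_X(x))^n
    print(f"x={x:.2f}  empirical={empirical:.4f}  theoretical={theoretical:.4f}")
```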
b,
F_X(x) = \int_0^x f_X(t)\, dt.
Substituting the given density function f_X(t) = \frac{t}{\lambda^2} e^{-t/\lambda} yields:

F_X(x) = \int_0^x \frac{t}{\lambda^2} e^{-t/\lambda}\, dt.

Using integration by parts, we get

F_X(x) = \left[ -\frac{t}{\lambda} e^{-t/\lambda} \right]_0^x + \frac{1}{\lambda} \int_0^x e^{-t/\lambda}\, dt
= -\frac{x}{\lambda} e^{-x/\lambda} - \left[ e^{-t/\lambda} \right]_0^x
= -\frac{x}{\lambda} e^{-x/\lambda} - e^{-x/\lambda} + 1
= 1 - e^{-x/\lambda}\left( \frac{x}{\lambda} + 1 \right).
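A minimal numerical cross-check of this antiderivative, assuming the density f_X(t) = t e^{-t/\lambda}/\lambda^2 and taking \lambda = 1 purely for illustration:

```python
# Sketch: compare numerical integration of f_X with the closed-form CDF
# F_X(x) = 1 - e^{-x/lam} (1 + x/lam).  lam = 1 is an arbitrary illustrative value.
import numpy as np
from scipy.integrate import quad

lam = 1.0
f = lambda t: t * np.exp(-t / lam) / lam**2      # assumed Gamma(2, lam) density

for x in (0.5, 1.0, 2.0, 5.0):
    numeric, _ = quad(f, 0, x)
    closed = 1 - np.exp(-x / lam) * (1 + x / lam)
    print(f"x={x:.1f}  quad={numeric:.6f}  closed form={closed:.6f}")
```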

c,
Since the light bulbs are connected in series, the whole circuit stops working as soon as any one bulb fails. So to find the expected lifetime of the circuit, we only need the time at which the first light bulb fails. Taking what we have found in part (a), we let T = \min(X_1, \ldots, X_n). Then

F_T(x) = 1 - \left( 1 - \left[ 1 - e^{-x/\lambda}\left(1 + \frac{x}{\lambda}\right) \right] \right)^n
= 1 - e^{-nx/\lambda}\left(1 + \frac{x}{\lambda}\right)^n.

d,
Let Y = \sqrt{n}\, T.
Therefore:

F_Y(y) = P(Y \le y) = P(\sqrt{n}\, T \le y) \quad \text{(using the substitution)}
= P\!\left( T \le \frac{y}{\sqrt{n}} \right)
= 1 - \left[ e^{-y/(\sqrt{n}\lambda)} \left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right]^n.

So as we take n \to \infty,

\lim_{n \to \infty} F_Y(y) = \lim_{n \to \infty} \left( 1 - \left[ e^{-y/(\sqrt{n}\lambda)} \left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right]^n \right)
= 1 - \lim_{n \to \infty} \exp\!\left( n \ln\!\left[ e^{-y/(\sqrt{n}\lambda)} \left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right] \right).

Evaluating the limit inside the exponential,

\lim_{n \to \infty} n \ln\!\left[ e^{-y/(\sqrt{n}\lambda)} \left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right]
= \lim_{n \to \infty} n \left( -\frac{y}{\sqrt{n}\lambda} + \ln\!\left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right).

From here, following a discussion with a fellow peer S. Zhu, who pointed out that I should consider the Taylor expansion of \ln(1 + x),

\ln\!\left( 1 + \frac{y}{\sqrt{n}\lambda} \right) = \frac{y}{\sqrt{n}\lambda} - \frac{1}{2}\left( \frac{y}{\sqrt{n}\lambda} \right)^2 + \frac{1}{3}\left( \frac{y}{\sqrt{n}\lambda} \right)^3 - \ldots

Hence

\lim_{n \to \infty} n \left( -\frac{y}{\sqrt{n}\lambda} + \frac{y}{\sqrt{n}\lambda} - \frac{1}{2}\left( \frac{y}{\sqrt{n}\lambda} \right)^2 + \frac{1}{3}\left( \frac{y}{\sqrt{n}\lambda} \right)^3 - \ldots \right)
= \lim_{n \to \infty} \left( -\frac{y^2}{2\lambda^2} + \frac{y^3}{3\sqrt{n}\lambda^3} - \ldots \right)
= -\frac{y^2}{2\lambda^2}.

So \lim_{n \to \infty} \left( 1 - \left[ e^{-y/(\sqrt{n}\lambda)} \left( 1 + \frac{y}{\sqrt{n}\lambda} \right) \right]^n \right) = 1 - e^{-y^2/(2\lambda^2)}.
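The convergence can also be seen numerically. The sketch below (with \lambda = 1 and y = 1.5 chosen only for illustration) evaluates F_Y(y) for increasing n and compares it with the limiting value 1 - e^{-y^2/(2\lambda^2)}:

```python
# Sketch: F_Y(y) = 1 - [e^{-y/(sqrt(n) lam)} (1 + y/(sqrt(n) lam))]^n approaches
# 1 - e^{-y^2/(2 lam^2)} as n grows.  lam = 1 and y = 1.5 are illustrative choices only.
import numpy as np

lam, y = 1.0, 1.5
limit = 1 - np.exp(-y**2 / (2 * lam**2))

for n in (10, 100, 1_000, 10_000, 100_000):
    a = y / (np.sqrt(n) * lam)
    F_Y = 1 - (np.exp(-a) * (1 + a))**n
    print(f"n={n:>6}  F_Y(y)={F_Y:.6f}  limit={limit:.6f}")
```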

e,
From part (d), we have shown that F_Y(y) = 1 - e^{-y^2/(2\lambda^2)}. To compute the expected value, we need the density function, so

f_Y(y) = \frac{y}{\lambda^2} e^{-y^2/(2\lambda^2)}.

The density function resembles the normal distribution, so:

E(Y) = \int_0^\infty t \cdot \frac{t}{\lambda^2} e^{-t^2/(2\lambda^2)}\, dt
= \int_0^\infty \frac{t^2}{\lambda^2} e^{-t^2/(2\lambda^2)}\, dt.

Also, \int_{-\infty}^{\infty} t^2 \frac{1}{\sqrt{2\pi}\lambda} e^{-t^2/(2\lambda^2)}\, dt is simply the second moment of the normal distribution. According to https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/, the second moment can be calculated by E[X^2] = \mu^2 + \sigma^2. Since the integrand is an even function,

E(Y) = \frac{1}{2\lambda^2} \int_{-\infty}^{\infty} t^2 e^{-t^2/(2\lambda^2)}\, dt
= \frac{\sqrt{2\pi}\lambda}{2\lambda^2} \left( 0 + \lambda^2 \right)
= \lambda\sqrt{\frac{\pi}{2}}.

Hence E(T) = \frac{E(Y)}{\sqrt{n}} = \frac{250}{\sqrt{n}}, which when n = 100 is far less than the expected life of a single light bulb, as we expect the first light bulb in the circuit to fail much sooner than an individual bulb would on its own.
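A simulation sketch of this conclusion, assuming the Gamma(2, \lambda) lifetimes from part (b) with \lambda = 1 and n = 100 chosen only for illustration; the Monte Carlo mean of T = \min(X_1, \ldots, X_n) should be in reasonable agreement with the large-n approximation E(Y)/\sqrt{n} = \lambda\sqrt{\pi/2}/\sqrt{n}:

```python
# Sketch: Monte Carlo estimate of E(T), T = min of n Gamma(2, lam) lifetimes,
# compared with the large-n approximation lam * sqrt(pi/2) / sqrt(n).
# lam = 1 and n = 100 are illustrative values only.
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 1.0, 100, 50_000

lifetimes = rng.gamma(shape=2.0, scale=lam, size=(reps, n))
first_failure = lifetimes.min(axis=1)

print("Monte Carlo E(T):", first_failure.mean())
print("Approximation   :", lam * np.sqrt(np.pi / 2) / np.sqrt(n))
```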

Question 2a,
Using the fact that

\int_0^z p u^{p-1}\, du = z^p,

we have

E(Z^p) = \int_0^\infty z^p f_Z(z)\, dz = \int_0^\infty \int_0^z p u^{p-1} f_Z(z)\, du\, dz.

By reversing the order of integration and taking horizontal strips,

E(Z^p) = \int_0^\infty \int_u^\infty p u^{p-1} f_Z(z)\, dz\, du
= p \int_0^\infty u^{p-1} \Big[ F_Z(z) \Big]_{z=u}^{\infty}\, du
= p \int_0^\infty u^{p-1} \left( 1 - F_Z(u) \right) du,

as required.
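A quick numerical check of this identity (a sketch only, with Z ~ Exponential(1) and p = 2 assumed purely for illustration, so that E(Z^2) = 2):

```python
# Sketch: check E(Z^p) = p * integral_0^inf u^{p-1} (1 - F_Z(u)) du
# for the illustrative case Z ~ Exponential(1), p = 2, where E(Z^2) = 2.
import numpy as np
from scipy.integrate import quad

p = 2
survival = lambda u: np.exp(-u)                  # 1 - F_Z(u) for Exponential(1)

integral, _ = quad(lambda u: p * u**(p - 1) * survival(u), 0, np.inf)
print("p * integral of u^(p-1) (1 - F_Z(u)):", integral)   # expect 2.0
```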

b,
l(m) is defined as E(|X - m|). By considering p = 1 in part (a) with Z = |X - m|,

l(m) = \int_0^\infty \left( 1 - F_{|X-m|}(u) \right) du
= \int_0^\infty \left( 1 - P(|X - m| \le u) \right) du
= \int_0^\infty \left( 1 - P(-u \le X - m \le u) \right) du
= \int_0^\infty \left( 1 - P(m - u \le X \le m + u) \right) du
= \int_0^\infty \left( 1 - F(m + u) + F(m - u) \right) du.

Now, by differentiating both sides and using Leibniz's rule, we get the following expression:

\frac{d\, l(m)}{dm} = \frac{d}{dm} \int_0^\infty \left( 1 - F(m + u) + F(m - u) \right) du.

We now apply Leibniz's rule here:

\frac{d\, l(m)}{dm} = \int_0^\infty \frac{\partial}{\partial m} \left( 1 - F(m + u) + F(m - u) \right) du
= \int_0^\infty \left( f(m - u) - f(m + u) \right) du
= \Big[ -F(m - u) - F(m + u) \Big]_{u=0}^{\infty}
= \left( -F(m - \infty) - F(m + \infty) \right) - \left( -F(m - 0) - F(m + 0) \right)
= -1 + 2F(m).

Now, to find the stationary points of l(m), we set \frac{d\, l(m)}{dm} equal to zero. Therefore, we get the following:

-1 + 2F(m) = 0
F(m) = \frac{1}{2}
m = F^{-1}\!\left( \frac{1}{2} \right).

Since F is non-decreasing, \frac{d\, l(m)}{dm} = 2F(m) - 1 is negative to the left of this point and positive to the right, so the stationary point we have found is a minimum. Hence l(m) is minimised at m = F^{-1}\!\left( \frac{1}{2} \right), the median of the distribution.
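To illustrate this conclusion numerically (the Exponential(1) distribution, whose median is \ln 2, is assumed only for this example), one can estimate l(m) = E|X - m| over a grid of m and check where the minimum falls:

```python
# Sketch: l(m) = E|X - m| is minimised at the median.
# Illustrative assumption: X ~ Exponential(1), median = ln 2 ~= 0.693.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=200_000)

grid = np.linspace(0.0, 2.0, 201)
l = np.array([np.abs(x - m).mean() for m in grid])   # Monte Carlo estimate of E|X - m|

print("argmin of l(m) on the grid:", grid[np.argmin(l)])
print("median ln(2)              :", np.log(2))
```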

Question 3a
To find the maximum likelihood estimator (MLE) of \theta we consider the natural logarithm of the likelihood, which makes the differentiation easier:

\ln L(k, \theta \mid x) = \sum_{i=1}^{n} \ln\!\left( \frac{\theta k^\theta}{x_i^{\theta+1}} \right)
= n\ln(\theta) + n\theta\ln(k) - (\theta + 1) \sum_{i=1}^{n} \ln(x_i).

Differentiating the equation above with respect to \theta, we get the following:

\frac{d \ln L(\theta)}{d\theta} = \frac{n}{\theta} + n\ln(k) - \sum_{i=1}^{n} \ln(x_i).

Setting the derivative to equal zero and re-arranging, we get:

\frac{n}{\theta} = \sum_{i=1}^{n} \ln(x_i) - n\ln(k).

Solving for \hat{\theta},

\hat{\theta} = \frac{n}{\sum_{i=1}^{n} \ln(x_i) - n\ln(k)} = \frac{n}{\sum_{i=1}^{n} \ln(x_i / k)}.
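A small simulation sketch of these estimators; the true values \theta = 3, k = 2 and the inverse-CDF sampling step are illustrative assumptions, not part of the assignment:

```python
# Sketch: maximum likelihood estimates for a Pareto(theta, k) sample,
# using k_hat = min(x_i) and theta_hat = n / sum(log(x_i / k_hat)).
# theta = 3, k = 2 are illustrative true values.
import numpy as np

rng = np.random.default_rng(3)
theta, k, n = 3.0, 2.0, 5_000

u = rng.uniform(size=n)
x = k / u**(1.0 / theta)                  # inverse-CDF sampling from Pareto(theta, k)

k_hat = x.min()
theta_hat = n / np.sum(np.log(x / k_hat))

print("k_hat    :", k_hat)                # should be slightly above k = 2
print("theta_hat:", theta_hat)            # should be close to theta = 3
```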

b,
To show that \hat{k} = \min(X_1, \ldots, X_n) follows a Pareto distribution, we can find the cumulative distribution function of \hat{k}. From Question 1(a),

F_{\hat{k}}(x) = P(\min(X_i) \le x) = 1 - \left( 1 - F_{X_i}(x) \right)^n.

By integrating the density function given within the question, we reach F_{X_i}(x) = 1 - \left( \frac{k}{x} \right)^\theta, so

F_{\hat{k}}(x) = 1 - \left( 1 - \left[ 1 - \left( \frac{k}{x} \right)^\theta \right] \right)^n
= 1 - \frac{k^{n\theta}}{x^{n\theta}}.

Thus \hat{k} follows a Pareto distribution with parameters n\theta and k.


c,
The bias of \hat{k} can be calculated from E(\hat{k}) - k. So we calculate:

\text{Bias} = \int_k^\infty x \cdot \frac{n\theta k^{n\theta}}{x^{n\theta+1}}\, dx - k
= n\theta k^{n\theta} \int_k^\infty \frac{1}{x^{n\theta}}\, dx - k
= n\theta k^{n\theta} \left[ -\frac{1}{(n\theta - 1)\, x^{n\theta - 1}} \right]_k^\infty - k
= \frac{n\theta k^{n\theta}}{(n\theta - 1)\, k^{n\theta - 1}} - k
= \frac{n\theta k}{n\theta - 1} - k
= \frac{k}{n\theta - 1}.

To find an unbiased estimator, we simply need the following calculation: an unbiased estimator would satisfy

E(\hat{k}) - k = 0, \quad \text{i.e. } E(\hat{k}) = k,

but here E(\hat{k}) = \frac{n\theta k}{n\theta - 1} \ne k.

To make the expectation equal to k, we need to multiply \hat{k} by a constant. In this case the constant is \frac{n\theta - 1}{n\theta}, giving the unbiased estimator \frac{n\theta - 1}{n\theta}\,\hat{k}.
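A Monte Carlo sketch of this bias calculation and the corrected estimator (\theta = 3, k = 2 and n = 10 are illustrative values only):

```python
# Sketch: bias of k_hat = min(X_i) for Pareto(theta, k) data, and the corrected
# estimator ((n*theta - 1)/(n*theta)) * k_hat.  theta = 3, k = 2, n = 10 illustrative.
import numpy as np

rng = np.random.default_rng(4)
theta, k, n, reps = 3.0, 2.0, 10, 200_000

u = rng.uniform(size=(reps, n))
x = k / u**(1.0 / theta)                          # Pareto(theta, k) samples
k_hat = x.min(axis=1)

print("mean of k_hat          :", k_hat.mean())
print("n*theta*k/(n*theta - 1):", n * theta * k / (n * theta - 1))
print("mean of corrected k_hat:", ((n * theta - 1) / (n * theta)) * k_hat.mean(), "(target k =", k, ")")
```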
d,
Let H = \min(X_1, X_2, \ldots, X_n).
By Question 1(a), it is clear to see that:

F_H(x) = 1 - \left( 1 - F_X(x) \right)^n
= 1 - \left( 1 - \left[ 1 - \left( \frac{k}{x} \right)^\theta \right] \right)^n
= 1 - \left( \frac{k}{x} \right)^{n\theta}
= 1 - \frac{k^{n\theta}}{x^{n\theta}}.

Hence H follows the Pareto distribution with parameters n\theta and k.


Question 4a
From watching the 19/05/2016 MATH 2901 lecture video, for any \varepsilon > 0 we can split the expectation over the events \{|X_n - X| < \varepsilon\} and \{|X_n - X| \ge \varepsilon\}:

\lim_{n \to \infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right]
= \lim_{n \to \infty} E\!\left[ I(|X_n - X| < \varepsilon)\, \frac{|X_n - X|}{1 + |X_n - X|} \right] + \lim_{n \to \infty} E\!\left[ I(|X_n - X| \ge \varepsilon)\, \frac{|X_n - X|}{1 + |X_n - X|} \right].

Now, \lim_{n \to \infty} E\!\left[ I(|X_n - X| < \varepsilon)\, \frac{|X_n - X|}{1 + |X_n - X|} \right] \le \varepsilon \lim_{n \to \infty} E\!\left[ I(|X_n - X| < \varepsilon) \right], as the ratio is bounded by \varepsilon on this event.

Also, \lim_{n \to \infty} E\!\left[ I(|X_n - X| \ge \varepsilon)\, \frac{|X_n - X|}{1 + |X_n - X|} \right] can be bounded by \lim_{n \to \infty} E\!\left[ I(|X_n - X| \ge \varepsilon) \right], as \frac{x}{1+x} \le 1.

So now we can write:

\lim_{n \to \infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right]
\le \varepsilon \lim_{n \to \infty} E\!\left[ I(|X_n - X| < \varepsilon) \right] + \lim_{n \to \infty} P(|X_n - X| \ge \varepsilon)
\le \varepsilon + 0 = \varepsilon,

since E\!\left[ I(|X_n - X| < \varepsilon) \right] \le 1 and, because X_n converges in probability to X, \lim_{n \to \infty} P(|X_n - X| \ge \varepsilon) = 0.

As \varepsilon is positive and arbitrary and our limit is at most \varepsilon, we can conclude that \lim_{n \to \infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] = 0.
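A small simulation sketch of this behaviour; the particular sequence X_n = X + Z_n/n, with X and Z_n standard normal, is an assumption chosen only so that X_n converges in probability to X:

```python
# Sketch: E[ |X_n - X| / (1 + |X_n - X|) ] -> 0 for a sequence X_n -> X in probability.
# Illustrative assumption: X ~ N(0,1) and X_n = X + Z_n / n with Z_n ~ N(0,1).
import numpy as np

rng = np.random.default_rng(5)
reps = 200_000
x = rng.normal(size=reps)

for n in (1, 10, 100, 1_000):
    z = rng.normal(size=reps)
    d = np.abs((x + z / n) - x)                  # |X_n - X| = |Z_n| / n
    print(f"n={n:>5}  E[d/(1+d)] = {np.mean(d / (1 + d)):.5f}")
```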
part(b),
Using the hint provided and a late night discussion with L. Wright, the function f(x) = \frac{x}{1+x} is increasing. So, on the event \{|X_n - X| > \varepsilon\},

\frac{\varepsilon}{1 + \varepsilon} \le \frac{|X_n - X|}{1 + |X_n - X|},

and therefore

\frac{\varepsilon}{1 + \varepsilon}\, E\!\left[ I_{|X_n - X| > \varepsilon} \right] \le E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right].

Taking limits,

\frac{\varepsilon}{1 + \varepsilon} \lim_{n \to \infty} P(|X_n - X| > \varepsilon) \le \lim_{n \to \infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] = 0 \quad \text{(by hypothesis)}.

As \frac{\varepsilon}{1 + \varepsilon} is simply a positive constant, we can conclude that \lim_{n \to \infty} P(|X_n - X| > \varepsilon) = 0. This is the definition of convergence in probability,
i.e. X_n \xrightarrow{p} X.

Bibliography
NA. 2010. Little bit more Gaussian. [ONLINE] Available at: https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/. [Accessed 22 May 2016].

Joseph Lee Petersen. 2012. Estimating the Parameters of a Pareto Distribution. [ONLINE] Available at: http://citeseerx.ist.psu.edu/viewdoc/download? [Accessed 22 May 2016].
