
MAST20004 Probability - 2014

Assignment 4 - Solutions

1. (a) The MGF of $X^2$, where $X \sim N(0, \sigma^2)$, is given by
\[
M(t) = E[e^{tX^2}] = \int_{-\infty}^{\infty} e^{tx^2}\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-x^2/(2\sigma^2)}\,dx
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{x^2(1 - 2\sigma^2 t)}{2\sigma^2}}\,dx.
\]
This integral converges as long as
\[
t < \frac{1}{2\sigma^2},
\]
in which case it is equal to
\[
M(t) = \left(1 - 2\sigma^2 t\right)^{-1/2}.
\]
Note that this is the moment generating function of a gamma distribution with
shape parameter $1/2$ and rate parameter $1/(2\sigma^2)$.
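As a quick sanity check, the closed form can be compared with a Monte Carlo estimate of $E[e^{tX^2}]$; the values of $\sigma$ and $t$ below are arbitrary illustrative choices, not part of the assignment.

```python
import numpy as np

# Compare a Monte Carlo estimate of E[exp(t X^2)] for X ~ N(0, sigma^2)
# with the closed form (1 - 2 sigma^2 t)^(-1/2); needs t < 1/(2 sigma^2).
rng = np.random.default_rng(0)
sigma, t = 1.5, 0.1                      # illustrative; 1/(2*sigma^2) = 0.222...
x = rng.normal(0.0, sigma, size=1_000_000)
print(np.exp(t * x**2).mean())           # Monte Carlo estimate
print((1 - 2 * sigma**2 * t) ** -0.5)    # closed form, ~1.348
```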
(b) Since the MGF of a sum of independent random variables is the product of their
MGFs, the MGF of $Z = X_1^2 + \cdots + X_n^2$ is
\[
M_Z(t) = \left(1 - 2\sigma^2 t\right)^{-n/2},
\]
with domain the same as in part (a).


(c) The MGF of the gamma distribution with shape parameter $r$ and rate parameter $\lambda$ is
\[
(1 - t/\lambda)^{-r}
\]
(defined for $t < \lambda$), which agrees with $M_Z(t)$ if $\lambda = 1/(2\sigma^2)$ and $r = n/2$, and
so these distributions must be the same.
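A short simulation confirms the identification of the distribution of $Z$; the values of $n$ and $\sigma$ below are chosen arbitrarily for illustration.

```python
import numpy as np
from scipy import stats

# A sum of n squared N(0, sigma^2) variables should follow a gamma
# distribution with shape n/2 and rate 1/(2 sigma^2), i.e. scale 2 sigma^2.
rng = np.random.default_rng(1)
n, sigma = 5, 2.0
z = (rng.normal(0.0, sigma, size=(200_000, n)) ** 2).sum(axis=1)
print(stats.kstest(z, stats.gamma(a=n / 2, scale=2 * sigma**2).cdf))
# A large p-value indicates the sample is consistent with this gamma law.
```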
2. Let $N$ be the number of customers on a given day and $X_1, X_2, \ldots$ be the amounts that
the customers spend as they arrive (i.e., $X_i$ is the amount the $i$th paying customer
spends). Then $Z = \sum_{i=1}^{N} X_i$ is the total revenue of the coffee cart on that day.
(a) We want $E[Z]$, and we're assuming $N$ is independent of the $X_i$'s, so as in the
lecture notes:
\[
E[Z] = E\{E[Z \mid N]\} = E\{N E[X_1]\} = E[N]\,E[X_1] = \$5 \times 400 = \$2000
\]
is the average revenue of the coffee cart per day.
(b) Using the conditional formula for variance and the independence between the variables:
\[
V(Z) = V(E[Z \mid N]) + E[V(Z \mid N)] = E[X_1]^2\, V(N) + V(X_1)\, E[N] = 4100,
\]
and the standard deviation is the square root of this variance, which is about \$64.
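The two identities used here hold for any independent $N$ and $X_i$'s. The sketch below verifies them by simulation with hypothetical stand-in distributions (Poisson customer counts, exponential spends) that match $E[N] = 400$ and $E[X_1] = \$5$ but not necessarily the assignment's variance of 4100.

```python
import numpy as np

# Verify E[Z] = E[N] E[X1] and V(Z) = E[X1]^2 V(N) + V(X1) E[N] for a
# compound sum, using hypothetical distributions for N and the X_i's.
rng = np.random.default_rng(2)
days = 20_000
n = rng.poisson(400, size=days)                  # E[N] = V(N) = 400
z = np.array([rng.exponential(5.0, size=k).sum() for k in n])
print(z.mean(), 400 * 5.0)                       # E[Z] = 2000
print(z.var(), 5.0**2 * 400 + 25.0 * 400)        # formula gives 20000 here
```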
3. Note that $E[Y \mid X] = X$ and $V(Y \mid X) = X$.
(a) $E[Y] = E\{E[Y \mid X]\} = E[X] = 1$.

(b) $V(Y) = V(E[Y \mid X]) + E[V(Y \mid X)] = V(X) + E[X] = 2$.


(c) Using that the PGF of a Poisson distribution with mean $\lambda$ is $G_{Pn(\lambda)}(z) = e^{-\lambda(1-z)}$, we
have that the PGF of $Y \mid X$ is
\[
E[z^Y \mid X] = e^{-(1-z)X},
\]
and so the generating function of $Y$ is
\[
E[z^Y] = E\{E[z^Y \mid X]\} = E\{e^{-(1-z)X}\} = M_X(-(1-z)) = (1 + (1-z))^{-1} = (2 - z)^{-1};
\]
here $M_X$ is the moment generating function of $X$ and we used the formula for
the MGF of a gamma variable above (exponentials are gammas, too, and note
that the expression holds for $z < 2$). This is the PGF of a geometric distribution
started from zero with parameter $1/2$.
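This mixture representation is easy to check empirically; the following sketch simulates $Y$ and compares it with the geometric pmf $P(Y = k) = (1/2)^{k+1}$.

```python
import numpy as np

# X ~ Exp(1) and Y | X ~ Poisson(X) should give Y geometric from zero
# with parameter 1/2, so E[Y] = 1, V(Y) = 2 and P(Y = k) = (1/2)^(k+1).
rng = np.random.default_rng(3)
x = rng.exponential(1.0, size=500_000)
y = rng.poisson(x)
print(y.mean(), y.var())                 # should be near 1 and 2
for k in range(5):
    print(k, (y == k).mean(), 0.5 ** (k + 1))
```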
4. (a) $E[X^{a-1}] = \int_0^\infty x^{a-1} e^{-x}\,dx = \Gamma(a)$, using the definition of the gamma function.
(b) This is the calculation on Slide 254 of the lecture notes.
(c) Since $E[X] = 1$, we approximate
\[
\mu = E[\varphi(X)] \approx \varphi(1) + \varphi''(1)\,V(X)/2 = 1 + 3/8 = 1.375,
\]
since $V(X) = 1$, $\varphi(1) = 1$, and $\varphi''(1) = 3/4$ (here $\varphi(x) = x^{-1/2}$). So our
approximation to $\pi = \Gamma(1/2)^2 = \mu^2$ is $(1.375)^2 \approx 1.89$, which isn't that great.
The approximation could be made better by taking more terms of the Taylor series.
(d) Similarly to the above, the approximation is
\[
V(\varphi(X)) \approx \varphi'(1)^2\, V(X) = 1/4.
\]
Computing exactly:
\[
E[\varphi(X)^2] = E[X^{-1}] = \int_0^\infty x^{-1} e^{-x}\,dx = \infty.
\]
So the variance isn't even finite and the approximation is misleading, to say the
least!
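The infinite variance shows up clearly in simulation: sample variances of $\varphi(X)$ keep growing with the sample size instead of settling near the delta-method value $1/4$. A minimal illustration:

```python
import numpy as np

# With phi(x) = x^(-1/2) and X ~ Exp(1), V(phi(X)) is infinite, so the
# sample variance drifts upward as the sample grows rather than converging.
rng = np.random.default_rng(4)
for size in [10**3, 10**5, 10**7]:
    x = rng.exponential(1.0, size=size)
    print(size, np.var(x ** -0.5))
```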
5. (a) For $0 < z < 1$,
\[
P(U + V \le z) = \int_0^z \int_0^{z-y} 2(1-y)\,dx\,dy = z^2 - z^3/3.
\]
For $1 < z < 2$,
\[
P(U + V \le z) = \int_0^{z-1} \int_0^1 2(1-y)\,dx\,dy + \int_{z-1}^{1} \int_0^{z-y} 2(1-y)\,dx\,dy
= -5/3 + 4z - 2z^2 + z^3/3.
\]
Differentiating, we obtain the pdf:
\[
f(z) = \begin{cases} 2z - z^2, & 0 < z < 1, \\ 4 - 4z + z^2, & 1 < z < 2, \end{cases}
\]
which integrates to one since, according to the expressions above, $P(U + V \le 0) = 0$
and $P(U + V \le 2) = 1$. A sketch of the plot of the pdf: [figure omitted; $f$ rises
from $0$ to $1$ on $(0,1)$ and falls back to $0$ on $(1,2)$].
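In place of the sketch, a quick Monte Carlo check compares the derived pdf with empirical frequencies; $V$ is sampled by inverting its cdf $F_V(y) = 1 - (1-y)^2$.

```python
import numpy as np

# Compare the piecewise pdf of Z = U + V with empirical density estimates.
rng = np.random.default_rng(5)
u = rng.uniform(size=500_000)
v = 1.0 - np.sqrt(rng.uniform(size=500_000))   # inverse-cdf sample of V
z = u + v
f = lambda s: np.where(s < 1, 2 * s - s**2, (2 - s) ** 2)
for zi in [0.25, 0.75, 1.25, 1.75]:
    h = 0.02                                    # half-width of the window
    print(zi, (np.abs(z - zi) < h).mean() / (2 * h), f(zi))
```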

(b) $E[U + V] = E[U] + E[V] = 1/2 + 1/3 = 5/6$.


(c) By independence: $\mathrm{Var}(U + V) = \mathrm{Var}(U) + \mathrm{Var}(V) = 1/12 + 1/18 = 5/36$.
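The same simulation idea confirms both moments; a minimal sketch:

```python
import numpy as np

# Empirical mean and variance of Z = U + V against the exact values.
rng = np.random.default_rng(6)
u = rng.uniform(size=1_000_000)
v = 1.0 - np.sqrt(rng.uniform(size=1_000_000))  # V has density 2(1 - y)
print((u + v).mean(), 5 / 6)                    # E[U + V] = 5/6
print((u + v).var(), 5 / 36)                    # Var(U + V) = 5/36
```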
6. (a) Note that the fact can be used to find the pdf of $(Z_1, Z_2)$, since we know the pdf
of $(X, Y)$ is $f(x, y) = (2\pi)^{-1} \exp\{-(x^2 + y^2)/2\}$ and
\[
\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = A \begin{pmatrix} X \\ Y \end{pmatrix},
\quad \text{where} \quad
A = \begin{pmatrix} 1 & 0 \\ \rho & \sqrt{1 - \rho^2} \end{pmatrix}
\]
has $\det(A) = \sqrt{1 - \rho^2}$ and
\[
A^{-1} = \begin{pmatrix} 1 & 0 \\ -\rho/\sqrt{1 - \rho^2} & 1/\sqrt{1 - \rho^2} \end{pmatrix},
\quad \text{so that} \quad
A^{-1} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
= \begin{pmatrix} z_1 \\ \frac{-\rho z_1 + z_2}{\sqrt{1 - \rho^2}} \end{pmatrix},
\]
and plugging this in to the formula of the fact yields that the pdf of $(Z_1, Z_2)$
equals
\[
g(z_1, z_2) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left\{ -\frac{1}{2}\left( z_1^2 + \frac{(-\rho z_1 + z_2)^2}{1 - \rho^2} \right) \right\}
= \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left\{ -\frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{2(1 - \rho^2)} \right\},
\]
and since this is the pdf of a standard bivariate normal, $(Z_1, Z_2) \sim N_2(\rho)$.
(b) The bilinearity of covariance implies that
\[
\mathrm{Cov}(Z_1, Z_2) = \mathrm{Cov}(X,\, \rho X + \sqrt{1 - \rho^2}\, Y) = \rho\, \mathrm{Cov}(X, X) + \sqrt{1 - \rho^2}\, \mathrm{Cov}(X, Y).
\]
Since $\mathrm{Cov}(X, X) = V(X) = 1$ and independence implies $\mathrm{Cov}(X, Y) = 0$, the
result follows.
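Both the joint law and the covariance are easy to verify numerically; the value of $\rho$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Z1 = X, Z2 = rho X + sqrt(1 - rho^2) Y with X, Y iid N(0,1) should have
# unit variances and covariance (= correlation) rho.
rng = np.random.default_rng(7)
rho = 0.6
x, y = rng.standard_normal((2, 1_000_000))
z1, z2 = x, rho * x + np.sqrt(1 - rho**2) * y
print(np.cov(z1, z2))                    # ~ [[1, rho], [rho, 1]]
```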

(c) Since $Z_1 = X$, we're conditioning on $X = z$. Since $Y$ is independent of $X$, the
distribution of $Y$ is unaffected by this conditioning. Thus $Z_2 \mid Z_1 = z$ has the
same distribution as
\[
\rho z + \sqrt{1 - \rho^2}\, Y,
\]
where $Y$ is standard normal. A linear function of a normal random variable is
still normal, and the parameters are most easily read off by computing the mean and
variance:
\[
E[\rho z + \sqrt{1 - \rho^2}\, Y] = \rho z + \sqrt{1 - \rho^2}\, E[Y] = \rho z,
\]
\[
V[\rho z + \sqrt{1 - \rho^2}\, Y] = (1 - \rho^2)\, V(Y) = 1 - \rho^2,
\]
as desired.
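As a final check, conditioning can be mimicked in simulation by selecting samples with $Z_1$ close to $z$; the values of $\rho$ and $z$ below are illustrative.

```python
import numpy as np

# Conditionally on Z1 near z, Z2 should be approximately N(rho z, 1 - rho^2).
rng = np.random.default_rng(8)
rho, z = 0.6, 1.0
x, y = rng.standard_normal((2, 2_000_000))
z1, z2 = x, rho * x + np.sqrt(1 - rho**2) * y
sel = np.abs(z1 - z) < 0.01              # crude conditioning window
print(z2[sel].mean(), rho * z)           # ~ 0.6
print(z2[sel].var(), 1 - rho**2)         # ~ 0.64
```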
