
4-13【Question】

If X is a nonnegative continuous random variable, show that
$$E(X) = \int_0^\infty [1 - F(x)]\,dx$$
Apply this result to find the mean of the exponential distribution.


【Solution】
X is a nonnegative continuous r.v.

(i)
$$E(X) = \int_0^\infty x f(x)\,dx = \lim_{b\to\infty}\int_0^b x f(x)\,dx$$
Integrating by parts, using $F(x) - 1$ as an antiderivative of $f(x)$:
$$= \lim_{b\to\infty}\left[\big(x(F(x)-1)\big)\Big|_0^b - \int_0^b (F(x)-1)\,dx\right] = \lim_{b\to\infty} b(F(b)-1) + \int_0^\infty (1-F(x))\,dx$$

Claim: $\lim_{b\to\infty} b(F(b)-1) = 0$.

Consider:
$$0 \le b(1-F(b)) = b\int_b^\infty f(x)\,dx \le \int_b^\infty x f(x)\,dx \longrightarrow 0 \quad \text{as } b\to\infty$$
(the tail of the convergent integral $E(X)$), so $\lim_{b\to\infty} b(1-F(b)) = 0$.

$$\therefore\ E(X) = \int_0^\infty (1-F(x))\,dx$$

(ii) $X \sim \text{Exp}(\lambda)$, so $F(x) = 1 - e^{-\lambda x}$ and
$$E(X) = \int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}$$

4-18【Question】
If $U_1, \dots, U_n$ are independent uniform random variables, find $E(U_{(n)} - U_{(1)})$.
【Solution】
$U_1, \dots, U_n \sim \text{Uniform}(0,1)$, independent. The joint density of the minimum and maximum is
$$f_{U_{(1)},U_{(n)}}(u_1, u_n) = \frac{n!}{0!\,(n-2)!\,0!}\cdot 1\cdot 1\cdot (u_n-u_1)^{n-2} = n(n-1)(u_n-u_1)^{n-2}, \quad 0 \le u_1 \le u_n \le 1$$

Change variables: $R = U_{(n)} - U_{(1)}$, $Y = U_{(1)}$, so $U_{(1)} = Y$, $U_{(n)} = R + Y$, and
$$|J| = \left|\det\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right| = 1$$

$$\therefore\ f_{R,Y}(r,y) = n(n-1)r^{n-2}\cdot 1, \quad 0 < y < r+y < 1$$
$$f_R(r) = \int_0^{1-r} n(n-1)r^{n-2}\,dy = n(n-1)(1-r)r^{n-2}, \quad 0 < r < 1$$

$$E(R) = \int_0^1 r\cdot n(n-1)(1-r)r^{n-2}\,dr = n(n-1)\int_0^1 (1-r)r^{n-1}\,dr \quad (\text{Beta integral, } \alpha = n,\ \beta = 2)$$
$$= n(n-1)\,\frac{\Gamma(n)\Gamma(2)}{\Gamma(n+2)} = \frac{n(n-1)(n-1)!}{(n+1)!} = \frac{n-1}{n+1}$$
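A short simulation agrees with $E(R) = (n-1)/(n+1)$ (a sketch assuming NumPy; n = 5 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                        # number of uniforms (assumed value)
u = rng.random((200_000, n))                 # samples of (U_1, ..., U_n)
r = u.max(axis=1) - u.min(axis=1)            # sample range U_(n) - U_(1)

print(r.mean(), (n - 1) / (n + 1))           # both close to 4/6 ≈ 0.667
```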
4-25【Question】
If $X_1$ and $X_2$ are independent random variables following a gamma distribution with parameters $\alpha$ and $\lambda$, find $E(R^2)$, where $R^2 = X_1^2 + X_2^2$.
【Solution】
$X_1, X_2 \sim \text{Gamma}(\alpha, \lambda)$, independent:
$$f_{X_1,X_2}(x_1,x_2) = \frac{\lambda^\alpha}{\Gamma(\alpha)}x_1^{\alpha-1}e^{-\lambda x_1}\cdot \frac{\lambda^\alpha}{\Gamma(\alpha)}x_2^{\alpha-1}e^{-\lambda x_2}, \quad x_1, x_2 > 0$$

$$E(R^2) = E(X_1^2 + X_2^2) = \int_0^\infty\!\!\int_0^\infty (x_1^2 + x_2^2)\,f_{X_1,X_2}(x_1,x_2)\,dx_1\,dx_2$$

By symmetry the two terms contribute equally; for each,
$$\int_0^\infty x^2\,\frac{\lambda^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x}\,dx = \frac{\lambda^\alpha}{\Gamma(\alpha)}\cdot\frac{\Gamma(\alpha+2)}{\lambda^{\alpha+2}} = \frac{\Gamma(\alpha+2)}{\lambda^2\,\Gamma(\alpha)}$$

$$\therefore\ E(R^2) = \frac{\Gamma(\alpha+2)}{\lambda^2\Gamma(\alpha)} + \frac{\Gamma(\alpha+2)}{\lambda^2\Gamma(\alpha)} = \frac{2\alpha(\alpha+1)}{\lambda^2}$$
4-26【Question】
Referring to Example B in Section 4.1.2, what is the expected number of coupons needed to collect $r$ different types, where $r \le n$?
【Solution】
$X_1$: the number of trials up to and including the trial on which the first coupon is collected.
$X_2$: the number of trials from that point up to and including the trial on which the next coupon different from the first is obtained, ... and so on, up to $X_r$.

When $k-1$ distinct types have already been collected, each trial yields a new type with probability $(n-k+1)/n$, so
$$X_k \sim \text{Geometric}\!\left(\frac{n-k+1}{n}\right), \quad k = 1, \dots, r, \qquad E(X_k) = \frac{n}{n-k+1}$$
and the $X_k$ are independent.

Let $X = X_1 + X_2 + \cdots + X_r$. Then
$$E(X) = \sum_{k=1}^r E(X_k) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{n-r+1} = n\sum_{k=1}^{n}\frac{1}{k} - n\sum_{k=1}^{n-r}\frac{1}{k}$$

Using $\sum_{k=1}^m \frac{1}{k} \approx \log m + \gamma$ ($\gamma$ = Euler's constant):
$$E(X) \approx n(\log n + \gamma) - n(\log(n-r) + \gamma) = n\log\frac{n}{n-r}, \quad r < n$$
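A simulation sketch (not in the original; it assumes NumPy, and n = 10, r = 7 are arbitrary) compares the empirical mean with the exact sum and the log approximation, which is rough for small n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 10, 7                                  # n coupon types, collect r distinct

def draws_until_r_types(n, r, rng):
    seen, count = set(), 0
    while len(seen) < r:
        seen.add(int(rng.integers(n)))        # one coupon draw, uniform over n types
        count += 1
    return count

sims = [draws_until_r_types(n, r, rng) for _ in range(20_000)]
exact = sum(n / (n - k + 1) for k in range(1, r + 1))   # n(1/n + ... + 1/(n-r+1))
print(np.mean(sims), exact, n * np.log(n / (n - r)))
```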

4-30【Question】
Find $E\!\left(\dfrac{1}{X+1}\right)$, where X is a Poisson random variable.
【Solution】
$X \sim \text{Poisson}(\lambda)$, $\lambda > 0$:
$$P(X = k) = e^{-\lambda}\frac{\lambda^k}{k!}, \quad k = 0, 1, 2, \dots$$

$$E\!\left(\frac{1}{X+1}\right) = \sum_{k=0}^\infty \frac{1}{k+1}\,e^{-\lambda}\frac{\lambda^k}{k!} = e^{-\lambda}\sum_{k=0}^\infty \frac{\lambda^k}{(k+1)!} = \frac{e^{-\lambda}}{\lambda}\sum_{k=1}^\infty \frac{\lambda^k}{k!}$$
$$= \frac{e^{-\lambda}}{\lambda}\left(\sum_{k=0}^\infty \frac{\lambda^k}{k!} - 1\right) = \frac{e^{-\lambda}}{\lambda}\left(e^\lambda - 1\right) = \frac{1 - e^{-\lambda}}{\lambda}$$
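A quick Monte Carlo confirmation (an added sketch assuming NumPy; λ = 1.5 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5                                      # assumed Poisson mean
x = rng.poisson(lam, size=1_000_000)

print((1 / (x + 1)).mean(),                    # Monte Carlo E(1/(X+1))
      (1 - np.exp(-lam)) / lam)                # closed form (1 - e^{-λ})/λ
```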
4-34【Question】
Let X be uniform on [0,1], and let $Y = \sqrt{X}$. Find E(Y) by (a) finding the density of Y and then finding the expectation and (b) using Theorem A of Section 4.1.1.
【Solution】
$X \sim U[0,1]$, so $f_X(x) = 1$, $x \in [0,1]$.

$Y = \sqrt{X} \iff X = Y^2$, and $\dfrac{dx}{dy} = 2y$.

(a) $f_Y(y) = f_X(y^2)\,|J| = 1\cdot 2y = 2y$, $y \in [0,1]$
$$E(Y) = \int_0^1 y\cdot 2y\,dy = \int_0^1 2y^2\,dy = \frac{2}{3}$$

(b) $E(Y) = E(\sqrt{X}) = \displaystyle\int_0^1 \sqrt{x}\,dx = \frac{2}{3}$
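A one-line numerical check (added sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)
print(np.sqrt(u).mean(), 2/3)                  # E(sqrt(U)) should be close to 2/3
```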

4-35【Question】
Find the mean of a negative binomial random variable. (Hint: Express the random variable as a sum.)
【Solution】
The negative binomial random variable Y is the number of the trial on which the r-th success occurs, in a sequence of independent trials with constant probability p of success on each trial. Let $X_i$ denote the number of the trial on which the i-th success occurs, for $i = 1, 2, \dots, r$. Now define
$$W_i = X_i - X_{i-1}, \quad i = 1, 2, \dots, r$$
where $X_0$ is defined to be zero. Then we can write $Y = X_r = \sum_{i=1}^r W_i$.

Notice that the random variables $W_1, W_2, \dots, W_r$ have identical geometric distributions and are mutually independent.
$$\therefore\ E(Y) = E\!\left(\sum_{i=1}^r W_i\right) = \sum_{i=1}^r E(W_i) = \sum_{i=1}^r \frac{1}{p} = \frac{r}{p}$$
4-37【Question】
For what values of p is the group testing of Example C in Section 4.1.2 inferior to testing every individual?
【Solution】
With $n = mk$ individuals split into m groups of size k, and p the probability that an individual's test is negative, the expected number of tests is (from the text, p. 121)
$$E(N) = n\left(1 + \frac{1}{k} - p^k\right), \quad n, m, k \in \mathbb{N}$$

$$E(N) > n \iff n\left(1 + \frac{1}{k} - p^k\right) > n \iff \frac{1}{k} > p^k \iff p < \left(\frac{1}{k}\right)^{1/k}$$

If $p < k^{-1/k}$, then the group testing is inferior to testing every individual.

4-38【Question】
Let X be an exponential random variable with standard deviation $\sigma$. Find $P(|X - E(X)| > k\sigma)$ for k = 2, 3, and 4, and compare the results to the bounds from Chebyshev's inequality.
【Solution】
Let $X \sim \text{Exp}(\lambda)$, so
$$\mu = E(X) = \frac{1}{\lambda}, \qquad \sigma = \sqrt{\operatorname{Var}(X)} = \frac{1}{\lambda}$$

$$P(|X - \mu| > k\sigma) = P\!\left(\left|X - \frac{1}{\lambda}\right| > \frac{k}{\lambda}\right) = P\!\left(X > \frac{k+1}{\lambda}\right) + P\!\left(X < \frac{1-k}{\lambda}\right)$$
$$= P\!\left(X > \frac{k+1}{\lambda}\right) \qquad \left(k \ge 2 \Rightarrow 1-k < 0 \Rightarrow P\!\left(X < \frac{1-k}{\lambda}\right) = 0\right)$$
$$= \int_{(k+1)/\lambda}^\infty \lambda e^{-\lambda x}\,dx = e^{-(k+1)}, \quad k = 2, 3, \dots$$

Chebyshev's inequality gives $P(|X - \mu| > k\sigma) \le 1/k^2$:

k    exact e^{-(k+1)}         Chebyshev bound 1/k^2
2    e^{-3} ≈ 0.04979         1/4  = 0.25
3    e^{-4} ≈ 0.01832         1/9  ≈ 0.1111
4    e^{-5} ≈ 0.006738        1/16 = 0.0625

The exact tail probabilities are far smaller than the Chebyshev bounds.
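The table can be reproduced empirically (an added sketch assuming NumPy; the result does not depend on the chosen λ):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                      # rate; results are scale-free
x = rng.exponential(scale=1/lam, size=2_000_000)
mu = sigma = 1/lam

for k in (2, 3, 4):
    empirical = np.mean(np.abs(x - mu) > k * sigma)
    print(k, empirical, np.exp(-(k + 1)), 1/k**2)   # exact e^{-(k+1)} vs bound 1/k²
```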
16
39 Show that $\operatorname{Var}(X - Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) - 2\operatorname{Cov}(X, Y)$.
Ans:
$$\operatorname{Var}(X-Y) = E(X-Y)^2 - [E(X-Y)]^2$$
$$= E(X^2 - 2XY + Y^2) - [E(X) - E(Y)]^2$$
$$= E(X^2) - 2E(XY) + E(Y^2) - E^2(X) + 2E(X)E(Y) - E^2(Y)$$
$$= E(X^2) - E^2(X) + E(Y^2) - E^2(Y) - 2[E(XY) - E(X)E(Y)]$$
$$= \operatorname{Var}(X) + \operatorname{Var}(Y) - 2\operatorname{Cov}(X, Y)$$

41 Find the covariance and the correlation of $N_i$ and $N_j$, where $N_1, N_2, \dots, N_r$ are multinomial random variables. (Hint: Express them as sums.)
Ans: $N_1, N_2, \dots, N_r$: multinomial r.v.'s,
$$P_{N_1 \dots N_r}(n_1, \dots, n_r) = \frac{n!}{n_1!\cdots n_r!}\,p_1^{n_1}\cdots p_r^{n_r}, \qquad \sum_{i=1}^r n_i = n, \quad \sum_{i=1}^r p_i = 1$$

Marginally,
$$P_{N_i}(n_i) = \binom{n}{n_i}p_i^{n_i}(1-p_i)^{n-n_i}, \quad \text{i.e. } N_i \sim \text{Bin}(n, p_i), \ i = 1, \dots, r$$
$$\therefore\ E(N_i) = np_i, \qquad \operatorname{Var}(N_i) = np_i(1-p_i)$$

For the cross moment, sum over the joint (trinomial) distribution of $(N_i, N_j)$:
$$E(N_iN_j) = \sum_{n_j=0}^{n}\sum_{n_i=0}^{n-n_j} n_i n_j\,\frac{n!}{n_i!\,n_j!\,(n-n_i-n_j)!}\,p_i^{n_i}p_j^{n_j}(1-p_i-p_j)^{n-n_i-n_j}$$
Terms with $n_i = 0$ or $n_j = 0$ vanish; cancelling $n_i$, $n_j$ into the factorials and pulling out $n(n-1)p_ip_j$ leaves (with $m_i = n_i - 1$, $m_j = n_j - 1$) a full trinomial expansion of $(p_i + p_j + (1-p_i-p_j))^{n-2} = 1$:
$$E(N_iN_j) = n(n-1)p_ip_j\sum_{m_j \ge 0}\sum_{m_i \ge 0}\frac{(n-2)!}{m_i!\,m_j!\,(n-2-m_i-m_j)!}\,p_i^{m_i}p_j^{m_j}(1-p_i-p_j)^{n-2-m_i-m_j} = n(n-1)p_ip_j$$

$$\therefore\ \operatorname{Cov}(N_i, N_j) = E(N_iN_j) - E(N_i)E(N_j) = n(n-1)p_ip_j - np_i\cdot np_j = -np_ip_j$$

$$\rho = \frac{\operatorname{Cov}(N_i, N_j)}{\sqrt{\operatorname{Var}(N_i)\operatorname{Var}(N_j)}} = \frac{-np_ip_j}{\sqrt{np_i(1-p_i)\cdot np_j(1-p_j)}} = -\sqrt{\frac{p_ip_j}{(1-p_i)(1-p_j)}}$$
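A simulation agrees with both formulas (an added sketch assuming NumPy; n = 20 and the probability vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, np.array([0.2, 0.3, 0.5])           # assumed multinomial parameters
counts = rng.multinomial(n, p, size=500_000)   # each row is (N_1, N_2, N_3)

ni, nj = counts[:, 0], counts[:, 1]
print(np.cov(ni, nj)[0, 1], -n * p[0] * p[1])                         # ≈ -1.2
print(np.corrcoef(ni, nj)[0, 1],
      -np.sqrt(p[0]*p[1] / ((1 - p[0]) * (1 - p[1]))))                # ≈ -0.327
```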

45 Two independent measurements, X and Y, are taken of a quantity μ. E(X) = E(Y) = μ, but $\sigma_X$ and $\sigma_Y$ are unequal. The two measurements are combined by means of a weighted average to give
$$Z = \alpha X + (1-\alpha)Y$$
where α is a scalar and 0 ≤ α ≤ 1.
a Show that E(Z) = μ.
b Find α in terms of $\sigma_X$ and $\sigma_Y$ to minimize Var(Z).
c Under what circumstances is it better to use the average $(X+Y)/2$ than either X or Y alone?

Ans: (a) $E(Z) = E(\alpha X + (1-\alpha)Y) = \alpha E(X) + (1-\alpha)E(Y) = \alpha\mu + (1-\alpha)\mu = \mu$

(b) $\operatorname{Var}(Z) = \operatorname{Var}(\alpha X + (1-\alpha)Y) = \alpha^2\sigma_X^2 + (1-\alpha)^2\sigma_Y^2 =: f(\alpha)$
$$f'(\alpha) = 2\alpha\sigma_X^2 - 2(1-\alpha)\sigma_Y^2, \qquad f''(\alpha) = 2\sigma_X^2 + 2\sigma_Y^2 > 0$$
so the minimum is attained where $f'(\alpha) = 0$:
$$2\alpha\sigma_X^2 - 2(1-\alpha)\sigma_Y^2 = 0 \iff \alpha\left(2\sigma_X^2 + 2\sigma_Y^2\right) = 2\sigma_Y^2 \iff \alpha = \frac{\sigma_Y^2}{\sigma_X^2 + \sigma_Y^2}$$

(c) $\operatorname{Var}(Z) = \operatorname{Var}\!\left(\dfrac{X+Y}{2}\right) = \dfrac{\sigma_X^2 + \sigma_Y^2}{4}$, while $\operatorname{Var}(X) = \sigma_X^2$ and $\operatorname{Var}(Y) = \sigma_Y^2$.
$$\frac{\sigma_X^2 + \sigma_Y^2}{4} < \sigma_X^2 \iff \sigma_Y^2 < 3\sigma_X^2, \qquad \frac{\sigma_X^2 + \sigma_Y^2}{4} < \sigma_Y^2 \iff \sigma_X^2 < 3\sigma_Y^2$$
$$\therefore\ \frac{X+Y}{2} \text{ is better than either X or Y alone when } \frac{1}{3}\sigma_Y^2 < \sigma_X^2 < 3\sigma_Y^2$$
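A small numeric check of (b) and (c) (an added sketch; the variances are arbitrary values satisfying the condition in (c)):

```python
import numpy as np

sx2, sy2 = 1.0, 2.0                            # assumed variances σ_X², σ_Y²
alphas = np.linspace(0, 1, 1001)
var_z = alphas**2 * sx2 + (1 - alphas)**2 * sy2

print(alphas[np.argmin(var_z)], sy2 / (sx2 + sy2))   # grid minimizer vs 2/3
# Plain average: Var = (σ_X² + σ_Y²)/4 = 0.75, smaller than both σ_X² and σ_Y²,
# consistent with σ_Y²/3 < σ_X² < 3σ_Y² holding for these values.
print((sx2 + sy2) / 4)
```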

49 Let $T = \sum_{k=1}^n kX_k$, where the $X_k$ are independent random variables with means μ and variances σ². Find E(T) and Var(T).

Ans:
$$E(T) = E\!\left(\sum_{k=1}^n kX_k\right) = \sum_{k=1}^n kE(X_k) = \mu\sum_{k=1}^n k = \frac{n(n+1)}{2}\,\mu$$
$$\operatorname{Var}(T) = \operatorname{Var}\!\left(\sum_{k=1}^n kX_k\right) = \sum_{k=1}^n k^2\operatorname{Var}(X_k) \quad (X_1, X_2, \dots, X_n \text{ are independent})$$
$$= \sigma^2\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}\,\sigma^2$$

51 If X and Y are independent random variables, find Var(XY) in terms of the means and variances of X and Y.

Ans: X, Y independent:
$$\operatorname{Var}(XY) = E(XY)^2 - E^2(XY)$$
$$= E(X^2Y^2) - E^2X\cdot E^2Y \qquad (X, Y \text{ indep} \Rightarrow E(XY) = EX\cdot EY)$$
$$= EX^2\cdot EY^2 - E^2X\cdot E^2Y \qquad (X, Y \text{ indep} \Rightarrow X^2, Y^2 \text{ indep} \Rightarrow E(X^2Y^2) = EX^2\cdot EY^2)$$
$$= (\operatorname{Var}X + E^2X)(\operatorname{Var}Y + E^2Y) - E^2X\cdot E^2Y$$
$$= \operatorname{Var}X\operatorname{Var}Y + \operatorname{Var}X\cdot E^2Y + E^2X\cdot\operatorname{Var}Y$$
$$= \sigma_X^2\sigma_Y^2 + \sigma_X^2\mu_Y^2 + \mu_X^2\sigma_Y^2$$

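A Monte Carlo check of the Var(XY) formula (an added sketch assuming NumPy; the normal distributions and their parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mx, sx = 2.0, 1.5                              # assumed mean and sd of X
my, sy = -1.0, 0.5                             # assumed mean and sd of Y
x = rng.normal(mx, sx, size=1_000_000)
y = rng.normal(my, sy, size=1_000_000)

formula = sx**2 * sy**2 + sx**2 * my**2 + mx**2 * sy**2
print((x * y).var(), formula)                  # both close to 3.8125
```
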
53 Let (X,Y) be a random point uniformly distributed on a unit disk. Show that
Cov(X,Y)=0, but that X and Y are not independent.

Ans:
$$f_{X,Y}(x,y) = \frac{1}{\pi}, \quad x^2 + y^2 \le 1$$
$$f_X(x) = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi}\,dy = \frac{2}{\pi}\sqrt{1-x^2}, \quad x \in [-1, 1]$$
Similarly, $f_Y(y) = \dfrac{2}{\pi}\sqrt{1-y^2}$, $y \in [-1, 1]$.

$$\because\ f_{X,Y}(x,y) \ne f_X(x)\,f_Y(y) \text{ on } x^2 + y^2 \le 1 \qquad \therefore\ X, Y \text{ are not independent}$$

$$EX = \int_{-1}^1 x\cdot\frac{2}{\pi}\sqrt{1-x^2}\,dx = 0 \quad (x\sqrt{1-x^2}\text{ is an odd function})$$
Similarly, $EY = 0$.
$$E(XY) = \int_{-1}^1\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{xy}{\pi}\,dy\,dx = \int_{-1}^1 \frac{x}{\pi}\left[\frac{y^2}{2}\right]_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}dx = \int_{-1}^1 \frac{x}{\pi}\cdot 0\,dx = 0$$
$$\therefore\ \operatorname{Cov}(X, Y) = E(XY) - EX\cdot EY = 0 - 0 = 0$$
so Cov(X,Y) = 0, but X and Y are not independent.
54 Let Y have a density that is symmetric about zero, and let X = SY, where S is an independent random variable taking on the values +1 and −1 with probability 1/2 each. Show that Cov(X,Y) = 0, but that X and Y are not independent.

Ans:
$$E(S) = 1\cdot P(S=1) + (-1)\cdot P(S=-1) = \frac{1}{2} - \frac{1}{2} = 0$$
$$\operatorname{Cov}(X, Y) = \operatorname{Cov}(SY, Y) = E(SY\cdot Y) - E(SY)E(Y)$$
$$= E(S)E(Y^2) - E(S)E(Y)E(Y) \quad (S \text{ and } Y \text{ indep.})$$
$$= 0$$

But given $Y = y$, X takes only the two values $\pm y$:
$$f_{X|Y}(x|y) = \begin{cases} \frac{1}{2}, & x = y \\ \frac{1}{2}, & x = -y \\ 0, & \text{otherwise} \end{cases}$$
whereas unconditionally $f_X(x) = \frac{1}{2}f_Y(x) + \frac{1}{2}f_Y(-x) = f_Y(x)$ by symmetry.
$$\because\ f_{X|Y} \ne f_X \qquad \therefore\ X, Y \text{ are not independent.}$$
55 In Section 3.7, the joint density of the minimum and maximum of n independent uniform random variables was found. In the case n = 2, this amounts to X and Y, the minimum and maximum, respectively, of two independent random variables uniform on [0,1], having the joint density
$$f(x,y) = 2, \quad 0 < x < y < 1$$
a Find the covariance and the correlation of X and Y. Does the sign of the correlation make sense intuitively?
b Find E(X│Y=y) and E(Y│X=x). Do these results make sense intuitively?
c Find the probability density functions of the random variables E(X│Y) and E(Y│X).
d What is the linear predictor of Y in terms of X (denoted by Ŷ = a + bX) that has minimal mean squared error? What is the mean square prediction error?
e What is the predictor of Y in terms of X [Ŷ = h(X)] that has minimal mean squared error? What is the mean square prediction error?

Ans:
$$f_{X,Y}(x,y) = 2, \quad 0 < x < y < 1$$
$$f_Y(y) = \int_0^y f_{X,Y}(x,y)\,dx = \int_0^y 2\,dx = 2y, \quad y \in (0,1); \qquad f_X(x) = \int_x^1 2\,dy = 2(1-x), \quad x \in (0,1)$$

a.
$$E(XY) = \int_0^1\!\!\int_0^y xy\,f_{X,Y}(x,y)\,dx\,dy = \int_0^1\!\!\int_0^y 2xy\,dx\,dy = \frac{1}{4}$$
$$EX = \int_0^1 xf_X(x)\,dx = \int_0^1 2x(1-x)\,dx = \frac{1}{3}, \qquad EX^2 = \int_0^1 2x^2(1-x)\,dx = \frac{1}{6}$$
$$EY = \int_0^1 yf_Y(y)\,dy = \int_0^1 2y^2\,dy = \frac{2}{3}, \qquad EY^2 = \int_0^1 2y^3\,dy = \frac{1}{2}$$
$$\operatorname{Var}X = EX^2 - E^2X = \frac{1}{6} - \left(\frac{1}{3}\right)^2 = \frac{1}{18}, \qquad \operatorname{Var}Y = \frac{1}{2} - \left(\frac{2}{3}\right)^2 = \frac{1}{18}$$
$$\operatorname{Cov}(X,Y) = E(XY) - EX\cdot EY = \frac{1}{4} - \frac{1}{3}\cdot\frac{2}{3} = \frac{1}{36}$$
$$\rho = \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}X\operatorname{Var}Y}} = \frac{1/36}{\sqrt{\frac{1}{18}\cdot\frac{1}{18}}} = \frac{1}{2}$$
The positive sign makes sense intuitively: a large minimum forces the maximum to be large as well.

b.
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{y}, \quad y \in (0,1),\ x \in (0,y)\ \Rightarrow\ X|Y=y \sim U(0,y)$$
$$\therefore\ E(X|Y=y) = \frac{y}{2}, \quad y \in (0,1)$$
$$f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{1-x}, \quad x \in (0,1),\ y \in (x,1)\ \Rightarrow\ Y|X=x \sim U(x,1)$$
$$\therefore\ E(Y|X=x) = \frac{x+1}{2}, \quad x \in (0,1)$$
These make sense: given the maximum is y, the minimum is uniform below it, and symmetrically for the maximum above x.

c. Let $W_1 = E(X|Y) = \frac{Y}{2}$; then $f_{W_1}(w_1) = 2f_Y(2w_1) = 8w_1$, $w_1 \in (0, \frac{1}{2})$.
Let $W_2 = E(Y|X) = \frac{X+1}{2}$; then $f_{W_2}(w_2) = 2f_X(2w_2 - 1) = 8(1 - w_2)$, $w_2 \in (\frac{1}{2}, 1)$.

d. The best linear predictor of Y is $\hat{Y} = a + bX$ with
$$b = \rho\,\frac{\sigma_Y}{\sigma_X} = \frac{1}{2}, \qquad a = \mu_Y - b\mu_X = \frac{2}{3} - \frac{1}{2}\cdot\frac{1}{3} = \frac{1}{2}$$
$$\therefore\ \hat{Y} = \frac{1}{2} + \frac{1}{2}X, \quad x \in (0,1)$$
The corresponding M.S.E. is
$$E\!\left(Y - \frac{1}{2} - \frac{1}{2}X\right)^2 = \sigma_Y^2(1 - \rho^2) = \frac{1}{18}\left(1 - \frac{1}{4}\right) = \frac{1}{24}$$

e. The best predictor of Y is $\hat{Y} = E(Y|X) = \frac{X+1}{2}$, $x \in (0,1)$. The corresponding M.S.E. is
$$E(Y - E(Y|X))^2 = EY^2 - E\!\left(E^2(Y|X)\right) = \frac{1}{2} - E\!\left(\frac{(X+1)^2}{4}\right)$$
$$= \frac{1}{2} - \int_0^1 \frac{(x+1)^2}{4}\,f_X(x)\,dx = \frac{1}{2} - \frac{1}{2}\int_0^1 (x+1)^2(1-x)\,dx = \frac{1}{2} - \frac{11}{24} = \frac{1}{24}$$
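A simulation sketch (not in the original; assumes NumPy) reproduces the covariance, correlation, and prediction error; note that here the best linear predictor coincides with E(Y|X):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((500_000, 2))
x, y = u.min(axis=1), u.max(axis=1)            # min and max of two uniforms

print(np.cov(x, y)[0, 1], 1/36)                # covariance ≈ 0.0278
print(np.corrcoef(x, y)[0, 1], 0.5)            # correlation ≈ 1/2
# MSE of the predictor (1 + X)/2, which is both the best linear predictor
# and E(Y|X) in this problem:
print(np.mean((y - (1 + x)/2)**2), 1/24)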

58 Let X and Y be jointly distributed random variables with correlation $\rho_{XY}$; define the standardized random variables $\tilde{X}$ and $\tilde{Y}$ as $\tilde{X} = (X - E(X))/\sqrt{\operatorname{Var}(X)}$ and $\tilde{Y} = (Y - E(Y))/\sqrt{\operatorname{Var}(Y)}$. Show that $\operatorname{Cov}(\tilde{X}, \tilde{Y}) = \rho_{XY}$.

Ans:
$$\tilde{X} = \frac{X - EX}{\sqrt{\operatorname{Var}X}}, \qquad \tilde{Y} = \frac{Y - EY}{\sqrt{\operatorname{Var}Y}}$$
Since $E\tilde{X} = E\tilde{Y} = 0$,
$$\operatorname{Cov}(\tilde{X}, \tilde{Y}) = E(\tilde{X}\tilde{Y}) = E\!\left(\frac{X - EX}{\sqrt{\operatorname{Var}X}}\cdot\frac{Y - EY}{\sqrt{\operatorname{Var}Y}}\right) = \frac{E[(X - EX)(Y - EY)]}{\sqrt{\operatorname{Var}X}\sqrt{\operatorname{Var}Y}} = \rho_{XY}$$

67 A fair coin is tossed n times, and the number of heads, N, is counted. The coin is then tossed N more times. Find the expected total number of heads generated by this process.

Ans: $N \sim B(n, \frac{1}{2})$.
Let X denote the number of heads obtained in the N additional tosses; given $N = m$ (m = 0, 1, 2, ..., n),
$$X|N=m \sim B\!\left(m, \frac{1}{2}\right), \quad x = 0, 1, 2, \dots, m$$
$$\therefore\ EX = E(E(X|N)) = E\!\left(\frac{N}{2}\right) = \frac{n}{4} \qquad (\text{Note: if } X \sim B(n, p), \text{ then } EX = np)$$
$$EN = \frac{n}{2}$$
∴ The expected total number of heads generated by this process is $E(N + X) = \frac{n}{2} + \frac{n}{4} = \frac{3}{4}n$.
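A direct simulation of the two-stage process (an added sketch assuming NumPy; n = 10 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 500_000
heads1 = rng.binomial(n, 0.5, size=trials)     # N: heads in the first n tosses
heads2 = rng.binomial(heads1, 0.5)             # heads in the N extra tosses

print((heads1 + heads2).mean(), 3*n/4)         # both close to 7.5
```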

70
Let the point (X,Y) be uniformly distributed over the half disk $x^2 + y^2 \le 1$, where $y \ge 0$. If you observe X, what is the best prediction for Y? If you observe Y, what is the best prediction for X? For both questions, "best" means having the minimum mean squared error.
sol
$$f_{X,Y}(x,y) = \frac{2}{\pi}, \quad x^2 + y^2 \le 1,\ y \ge 0$$
$$f_X(x) = \int_0^{\sqrt{1-x^2}} \frac{2}{\pi}\,dy = \frac{2}{\pi}\sqrt{1-x^2}, \quad x \in (-1, 1)$$
$$f_Y(y) = \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} \frac{2}{\pi}\,dx = \frac{4}{\pi}\sqrt{1-y^2}, \quad y \in (0, 1)$$
$$f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{\sqrt{1-x^2}}, \quad x \in (-1,1),\ y \in (0, \sqrt{1-x^2})$$
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{2\sqrt{1-y^2}}, \quad y \in (0,1),\ x \in (-\sqrt{1-y^2}, \sqrt{1-y^2})$$

Given X = x, the best prediction for Y is
$$E(Y|X=x) = \frac{\sqrt{1-x^2}}{2}, \quad x \in (-1,1) \quad (\text{Note: } Y|X=x \sim U(0, \sqrt{1-x^2}))$$
Given Y = y, the best prediction for X is
$$E(X|Y=y) = 0, \quad y \in (0,1) \quad (\text{Note: } X|Y=y \sim U(-\sqrt{1-y^2}, \sqrt{1-y^2}))$$

P159 #71
Let X and Y have the joint density (as used throughout the solution below)
$$f_{X,Y}(x,y) = e^{-y}, \quad 0 \le x \le y < \infty$$
a Find Cov(X,Y) and the correlation of X and Y.
b Find $E(X|Y=y)$ and $E(Y|X=x)$.
c Find the density functions of the random variables $E(X|Y)$ and $E(Y|X)$.

sol
a.
$$f_X(x) = \int_x^\infty f_{X,Y}(x,y)\,dy = \int_x^\infty e^{-y}\,dy = e^{-x}, \quad x \in (0, \infty)$$
$$\therefore\ EX = 1,\ \operatorname{Var}X = 1 \quad (X \sim \text{Exp}(1); \text{ if } Y \sim \text{Exp}(\lambda),\ EY = \tfrac{1}{\lambda},\ \operatorname{Var}Y = \tfrac{1}{\lambda^2})$$
$$f_Y(y) = \int_0^y f_{X,Y}(x,y)\,dx = \int_0^y e^{-y}\,dx = ye^{-y}, \quad y \in (0, \infty)$$
$$EY = \int_0^\infty yf_Y(y)\,dy = \int_0^\infty y^2e^{-y}\,dy = 2, \qquad EY^2 = \int_0^\infty y^2f_Y(y)\,dy = \int_0^\infty y^3e^{-y}\,dy = 6$$
$$E(XY) = \int_0^\infty\!\!\int_0^y xy\,e^{-y}\,dx\,dy = \int_0^\infty \frac{1}{2}y^3e^{-y}\,dy = 3$$
$$\therefore\ \operatorname{Cov}(X,Y) = E(XY) - EX\cdot EY = 3 - 1\cdot 2 = 1, \qquad \operatorname{Var}Y = EY^2 - E^2Y = 6 - 4 = 2$$
$$\rho_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}X\operatorname{Var}Y}} = \frac{1}{\sqrt{1\cdot 2}} = \frac{1}{\sqrt{2}}$$
b.
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{e^{-y}}{ye^{-y}} = \frac{1}{y}, \quad x \in (0, y),\ y \in (0, \infty)$$
$$\therefore\ E(X|Y=y) = \frac{y}{2}, \quad y \in (0, \infty) \quad (X|Y=y \sim U(0, y))$$
$$f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{e^{-y}}{e^{-x}} = e^{-(y-x)}, \quad x \in (0, \infty),\ y \in (x, \infty)$$
$$\therefore\ E(Y|X=x) = \int_x^\infty y\,f_{Y|X}(y|x)\,dy = \int_x^\infty y\,e^{-(y-x)}\,dy = x + 1, \quad x \in (0, \infty)$$
c. Let $W_1 = E(X|Y) = \frac{Y}{2}$; then $f_{W_1}(w_1) = 2f_Y(2w_1) = 4w_1e^{-2w_1}$, $w_1 \in (0, \infty)$.
Let $W_2 = E(Y|X) = X + 1$; then $f_{W_2}(w_2) = f_X(w_2 - 1) = e^{-(w_2-1)}$, $w_2 \in (1, \infty)$.
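The joint density factors as $f_Y(y)\,f_{X|Y}(x|y)$ with $Y \sim \Gamma(2,1)$ and $X|Y \sim U(0,Y)$, which makes a simulation straightforward (an added sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample from f(x,y) = e^{-y}, 0 < x < y:  Y ~ Gamma(2,1), then X | Y=y ~ U(0,y)
y = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)
x = rng.random(1_000_000) * y

print(np.cov(x, y)[0, 1], 1.0)                     # Cov(X,Y) = 1
print(np.corrcoef(x, y)[0, 1], 1/np.sqrt(2))       # ρ = 1/√2 ≈ 0.707
```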

P160 #75
Find the moment-generating function of a Bernoulli random variable, and use it to find the mean, variance, and third moment.
sol
$$P(X=0) = 1-p, \qquad P(X=1) = p$$
$$M_X(t) = E(e^{tX}) = (1-p)e^0 + pe^t = 1 - p + pe^t, \quad t \in \mathbb{R}$$
$$EX = M_X'(0) = p, \qquad EX^2 = M_X''(0) = p, \qquad EX^3 = M_X'''(0) = p$$
$$\operatorname{Var}X = EX^2 - E^2X = p - p^2 = p(1-p)$$

P160 #84
Assuming that $X \sim N(0, \sigma^2)$, use the m.g.f. to show that the odd moments are zero and the even moments are
$$\mu_{2n} = \frac{(2n)!\,\sigma^{2n}}{2^n\,n!}$$
sol
The derivatives of M(t) of all orders exist at t = 0, so by Property B every moment of X exists. Using the Taylor expansion of $e^{\frac{1}{2}\sigma^2t^2}$:
$$M(t) = e^{\frac{1}{2}\sigma^2t^2} = \sum_{n=0}^\infty \frac{1}{n!}\left(\frac{\sigma^2t^2}{2}\right)^n = \sum_{n=0}^\infty \frac{\sigma^{2n}t^{2n}}{2^n\,n!} = \sum_{n=0}^\infty \left(\frac{(2n)!\,\sigma^{2n}}{2^n\,n!}\right)\frac{t^{2n}}{(2n)!}$$
Matching coefficients of $t^k/k!$ against the moment expansion of M(t):
$$\mu_{2n+1} = 0, \qquad \mu_{2n} = \frac{(2n)!\,\sigma^{2n}}{2^n\,n!}, \quad n \ge 0$$

P161 #88
If X is a nonnegative integer-valued random variable, the probability-generating function of X is defined to be
$$G(s) = \sum_{k=0}^\infty s^k p_k$$
where $p_k = P(X = k)$.
a Show that
$$p_k = \frac{1}{k!}\,\frac{d^k}{ds^k}G(s)\Big|_{s=0}$$
b Show that
$$\frac{dG}{ds}\Big|_{s=1} = E(X), \qquad \frac{d^2G}{ds^2}\Big|_{s=1} = E[X(X-1)]$$
c Express the probability-generating function in terms of the moment-generating function.
d Find the probability-generating function of the Poisson distribution.

sol
a, b.
G(s) converges uniformly for $|s| \le 1$, so the power series can be differentiated term by term, giving
$$G'(s) = \sum_{k=1}^\infty k\,p_k\,s^{k-1} \quad \dots(1)$$
$$G''(s) = \sum_{k=2}^\infty k(k-1)\,p_k\,s^{k-2} \quad \dots(2)$$
and in general
$$G^{(n)}(s) = \sum_{k=n}^\infty k(k-1)(k-2)\cdots(k-n+1)\,p_k\,s^{k-n} = \sum_{k=n}^\infty \binom{k}{n}\,n!\,p_k\,s^{k-n}$$
This series still converges for $|s| \le 1$. Setting s = 0 in the last expression, every term except the constant term vanishes, so
$$p_n = \frac{G^{(n)}(0)}{n!}$$
If EX exists, then (1) gives $EX = G'(1)$; and if E[X(X−1)] also exists, (2) gives $E[X(X-1)] = G''(1)$.
c. Assume s > 0 and let $M_X(t)$ denote the m.g.f. of X:
$$G_X(s) = E(s^X) = E(e^{X\ln s}) = M_X(\ln s), \quad s > 0$$
d. From Example A on p. 144 of the text, if $X \sim \text{Poisson}(\lambda)$, its m.g.f. is $M_X(t) = \exp\!\left(\lambda(e^t - 1)\right)$.
By the result of (c),
$$G_X(s) = M_X(\ln s) = \exp\!\left(\lambda(s-1)\right) \quad \text{for all } s > 0$$
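A small numeric check of (b) and (d) for the Poisson case (an added sketch; λ = 2, the truncation point, and the finite-difference step are arbitrary choices):

```python
import math

lam = 2.0                                          # assumed Poisson mean
pk, p = [], math.exp(-lam)
for k in range(100):                               # truncate the series at k = 100
    pk.append(p)
    p *= lam / (k + 1)

def G(s):                                          # truncated pgf G(s) = sum s^k p_k
    return sum(pr * s**k for k, pr in enumerate(pk))

h = 1e-4
print(G(0.5), math.exp(lam * (0.5 - 1)))               # pgf matches exp(λ(s-1))
print((G(1 + h) - G(1 - h)) / (2 * h), lam)            # G'(1) ≈ E(X) = λ
print((G(1 + h) - 2*G(1) + G(1 - h)) / h**2, lam**2)   # G''(1) ≈ E[X(X-1)] = λ²
```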

P161 #93
Find expressions for the approximate mean and variance of Y = g(X) for (a) g(x) = $\sqrt{x}$, (b) g(x) = log x, and (c) g(x) = $\sin^{-1}x$.
sol
Throughout, $E(X) = \mu$ and $\operatorname{Var}(X) = \sigma^2$.
a. $g(x) = \sqrt{x}$:
$$E(g(X)) \approx g(\mu) = \sqrt{\mu}, \qquad \operatorname{Var}(g(X)) \approx [g'(\mu)]^2\operatorname{Var}(X) = \left(\frac{1}{2\sqrt{\mu}}\right)^2\sigma^2 = \frac{\sigma^2}{4\mu}$$
b. $g(x) = \log x$:
$$E(g(X)) \approx \log\mu, \qquad \operatorname{Var}(g(X)) \approx \left(\frac{1}{\mu}\right)^2\sigma^2 = \frac{\sigma^2}{\mu^2}$$
c. $g(x) = \sin^{-1}x$:
$$E(g(X)) \approx \sin^{-1}\mu, \qquad \operatorname{Var}(g(X)) \approx \left(\frac{1}{\sqrt{1-\mu^2}}\right)^2\sigma^2 = \frac{\sigma^2}{1-\mu^2}$$

P161 #96
Two sides, x₀ and y₀, of a right triangle are independently measured as X and Y, where E(X) = x₀ and E(Y) = y₀ and Var(X) = Var(Y) = σ². The angle between the two sides is then determined as
$$\Theta = \tan^{-1}\!\left(\frac{Y}{X}\right)$$
Find the approximate mean and variance of $\Theta$.
sol
From p. 152 of the text, for Z = g(X, Y):
$$EZ \approx g(\mu) + \frac{1}{2}\sigma_X^2\,\frac{\partial^2 g(\mu)}{\partial x^2} + \frac{1}{2}\sigma_Y^2\,\frac{\partial^2 g(\mu)}{\partial y^2} + \sigma_{X,Y}\,\frac{\partial^2 g(\mu)}{\partial x\,\partial y}$$
$$\operatorname{Var}Z \approx \sigma_X^2\left(\frac{\partial g(\mu)}{\partial x}\right)^2 + \sigma_Y^2\left(\frac{\partial g(\mu)}{\partial y}\right)^2 + 2\sigma_{X,Y}\left(\frac{\partial g(\mu)}{\partial x}\right)\left(\frac{\partial g(\mu)}{\partial y}\right)$$
where μ denotes the point $(\mu_X, \mu_Y)$.

Here X, Y are independent ($\sigma_{X,Y} = 0$), $EX = x_0$, $EY = y_0$, $\operatorname{Var}X = \operatorname{Var}Y = \sigma^2$. Let $\theta(x, y) = \tan^{-1}(y/x)$:
$$\frac{\partial\theta}{\partial x} = \frac{-y}{x^2+y^2}, \qquad \frac{\partial\theta}{\partial y} = \frac{x}{x^2+y^2}$$
$$\frac{\partial^2\theta}{\partial x^2} = \frac{2xy}{(x^2+y^2)^2}, \qquad \frac{\partial^2\theta}{\partial y^2} = \frac{-2xy}{(x^2+y^2)^2}, \qquad \frac{\partial^2\theta}{\partial x\,\partial y} = \frac{y^2-x^2}{(x^2+y^2)^2}$$

$$\therefore\ E\Theta \approx \tan^{-1}\!\left(\frac{y_0}{x_0}\right) + \frac{x_0y_0\sigma^2}{(x_0^2+y_0^2)^2} - \frac{x_0y_0\sigma^2}{(x_0^2+y_0^2)^2} = \tan^{-1}\!\left(\frac{y_0}{x_0}\right)$$
$$\operatorname{Var}\Theta \approx \frac{y_0^2\sigma^2}{(x_0^2+y_0^2)^2} + \frac{x_0^2\sigma^2}{(x_0^2+y_0^2)^2} = \frac{\sigma^2}{x_0^2+y_0^2}$$
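A Monte Carlo check of the propagation-of-error approximation (an added sketch assuming NumPy; the side lengths 3, 4 and the small noise σ = 0.1 are arbitrary choices, and the approximation is good because σ is small relative to the sides):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, y0, sigma = 3.0, 4.0, 0.1                      # assumed true sides and noise sd
x = rng.normal(x0, sigma, size=1_000_000)
y = rng.normal(y0, sigma, size=1_000_000)
theta = np.arctan2(y, x)

print(theta.mean(), np.arctan2(y0, x0))            # ≈ atan(4/3) ≈ 0.9273
print(theta.var(), sigma**2 / (x0**2 + y0**2))     # ≈ 0.01/25 = 4e-4
```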
