
1 Revision of random variables with some small extensions

In this chapter I shall remind you of some of the results about random variables that you learnt last year in MTH4108 and MTH4106.

1.1 Joint distribution of two discrete random variables

We will consider an example to remind you of the notation and ideas. Suppose we throw two fair tetrahedral dice. Let X be the score on the first die and Y the maximum of the two scores. The sample space is {(1, 1), (1, 2), ..., (4, 3), (4, 4)}. The joint probability function (or probability mass function) of X and Y, written f_{X,Y}(x, y), can be found by finding the values of X and Y for each point in the sample space. So, for example, the point (2, 1) corresponds to X = 2, Y = 2 and the point (3, 4) corresponds to X = 3, Y = 4. The full joint probability function is given by:

              y = 1   y = 2   y = 3   y = 4 | f_X(x)
    x = 1      1/16    1/16    1/16    1/16 |  4/16
    x = 2        0     2/16    1/16    1/16 |  4/16
    x = 3        0       0     3/16    1/16 |  4/16
    x = 4        0       0       0     4/16 |  4/16
    ----------------------------------------+-------
    f_Y(y)     1/16    3/16    5/16    7/16 |   1

Also shown in the table are the marginal probability function of X, found by

    f_X(x) = \sum_y f_{X,Y}(x, y),

and the marginal probability function of Y,

    f_Y(y) = \sum_x f_{X,Y}(x, y).

We can see why they are called marginal: they appear in the margins of the table. The conditional probability function of, for example, X given that Y = 3 can be found by

    f(x | Y = 3) = \frac{f_{X,Y}(x, 3)}{f_Y(3)},

so

    x              1     2     3     4
    f(x | Y = 3)  1/5   1/5   3/5    0
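Since everything here is just counting over 16 equally likely outcomes, the table and the conditional probability function can be reproduced by enumeration. The following is a minimal sketch, not part of the original notes, using only the Python standard library:

    from itertools import product
    from fractions import Fraction
    from collections import Counter

    # Enumerate the 16 equally likely outcomes of two fair tetrahedral dice;
    # record (X, Y) = (first die, maximum of the two).
    joint = Counter()
    for d1, d2 in product(range(1, 5), repeat=2):
        joint[(d1, max(d1, d2))] += Fraction(1, 16)

    # Marginals, exactly as they appear in the margins of the table.
    f_x, f_y = Counter(), Counter()
    for (x, y), p in joint.items():
        f_x[x] += p
        f_y[y] += p

    # Conditional pmf of X given Y = 3: f(x | Y = 3) = f_{X,Y}(x, 3) / f_Y(3).
    cond = {x: joint[(x, 3)] / f_y[3] for x in range(1, 5)}
    print(cond)  # Fractions 1/5, 1/5, 3/5, 0, matching the table above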
1.2 Continuous random variables
Recall that if X is any random variable then we define the cumulative distribution function (cdf) of X by

    F_X(x) = P(X \le x),   x \in \mathbb{R}.

If F_X is a continuous function we say X is a continuous random variable. In that case F_X is differentiable almost everywhere and we define the probability density function (pdf) of X by

    f_X(x) = \frac{dF_X}{dx},

so if we know F_X we can find f_X by differentiation. Similarly, if we know f_X we can find F_X by integration:

    F_X(x) = \int_{-\infty}^{x} f_X(t) \, dt.

We usually drop the X subscript on f and F. We can find the probability of an interval (a, b], where a < b, by

    P(a < X \le b) = F(b) - F(a) = \int_a^b f(x) \, dx.

Note that f(x) \ge 0 and

    \int_{-\infty}^{\infty} f(x) \, dx = 1.

Note also that the cdf is, by its definition as a probability, an increasing function with lower limit 0 and upper limit 1.
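As a concrete illustration, both directions of the cdf-pdf relationship can be checked numerically. This sketch is not part of the original notes and assumes NumPy is available; it uses the standard exponential distribution, for which F(x) = 1 - e^{-x} and f(x) = e^{-x}:

    import numpy as np

    x = np.linspace(0, 10, 10001)
    F = 1 - np.exp(-x)   # cdf of the standard exponential distribution
    f = np.exp(-x)       # its pdf

    # Differentiating F recovers f (the error shrinks as the grid is refined) ...
    print(np.max(np.abs(np.gradient(F, x) - f)))

    # ... and integrating f recovers probabilities of intervals,
    # e.g. P(1 < X <= 2) = F(2) - F(1).
    mask = (x >= 1) & (x <= 2)
    print(np.trapz(f[mask], x[mask]), (1 - np.exp(-2)) - (1 - np.exp(-1)))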
1.3 Two or more continuous random variables
The ideas for two continuous random variables are similar to those for discrete random variables, but we replace summation of the probability function by integration of the probability density function. So for two continuous random variables X and Y the joint pdf is written f_{X,Y}(x, y), and if A is a region of \mathbb{R}^2 then

    P((X, Y) \in A) = \iint_A f_{X,Y}(x, y) \, dx \, dy.

We have that f(x, y) \ge 0 and

    \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y) \, dx \, dy = 1.

The marginal pdfs can be found by integrating out the other variable, so

    f_X(x) = \int_{-\infty}^{\infty} f(x, y) \, dy   and   f_Y(y) = \int_{-\infty}^{\infty} f(x, y) \, dx.

The conditional pdf of X | Y = y is found using

    f(x | Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}.

Example 1.1. Suppose V and W have a joint pdf given by

    f(v, w) = Kvw,   0 < v < 1, 0 < w < 1.

Find

1. K,
2. the joint cumulative distribution function of V and W,
3. P(V < 0.5, W < 0.75),
4. the marginal pdf of V.

1. We use the fact that the integral of the pdf is one:

    1 = \int_0^1 \int_0^1 Kvw \, dv \, dw
      = K \int_0^1 \left[ \frac{v^2}{2} \right]_0^1 w \, dw
      = K \int_0^1 \frac{w}{2} \, dw
      = K \left[ \frac{w^2}{4} \right]_0^1
      = \frac{K}{4},

and so K = 4.

2.

    F(v, w) = P(V \le v, W \le w)
            = \int_0^w \int_0^v 4tu \, dt \, du
            = v^2 w^2,   0 < v < 1, 0 < w < 1.

3. F(0.5, 0.75) = 0.5^2 \times 0.75^2 = 9/64.

4.

    f_V(v) = \int_0^1 4vw \, dw = 2v,   0 < v < 1.

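These answers can be sanity-checked numerically. The sketch below is not part of the original notes and assumes NumPy. Since F_V(v) = v^2 on (0, 1), the inverse-cdf method samples V as \sqrt{U} with U uniform on (0, 1), and likewise for W; the joint density 4vw factorises as 2v \cdot 2w, so V and W may be sampled independently.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Inverse-cdf sampling: F_V(v) = v^2 on (0, 1), so V = sqrt(U).
    v = np.sqrt(rng.random(n))
    w = np.sqrt(rng.random(n))

    # P(V < 0.5, W < 0.75) should be F(0.5, 0.75) = 9/64 = 0.140625.
    print(np.mean((v < 0.5) & (w < 0.75)))

    # Check K: numerically integrate vw over the unit square; K = 1/integral.
    g = np.linspace(0, 1, 1001)
    vv, ww = np.meshgrid(g, g)
    print(1 / np.trapz(np.trapz(vv * ww, g, axis=1), g))  # ~4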
Example 1.2. X and Y have a joint pdf given by

    f(x, y) = C(1 - y),   0 \le x \le y \le 1,

and zero otherwise. This example is similar to the previous one, but we have to be careful about the limits.

1. Find C.
2. Find the marginal pdfs of X and Y.
3. Find the conditional pdfs of X | Y = y and Y | X = x.

1.

    1 = \int_0^1 \int_0^y C(1 - y) \, dx \, dy
      = \int_0^1 C(1 - y) [x]_0^y \, dy
      = \int_0^1 C(y - y^2) \, dy
      = C \left[ \frac{y^2}{2} - \frac{y^3}{3} \right]_0^1
      = \frac{C}{6},

and so C = 6.

2. The marginal pdf of X is given by

    f_X(x) = \int_x^1 6(1 - y) \, dy
           = 6 \left[ y - \frac{y^2}{2} \right]_x^1
           = 6 \left( \frac{1}{2} - x + \frac{x^2}{2} \right)
           = 3(1 - x)^2,   0 \le x \le 1.

The marginal pdf of Y is given by

    f_Y(y) = \int_0^y 6(1 - y) \, dx = 6(1 - y) [x]_0^y = 6(y - y^2),   0 \le y \le 1.

3. The conditional pdf of X | Y = y is given by

    f(x | y) = \frac{6(1 - y)}{6(1 - y)y} = \frac{1}{y},   0 \le x \le y,

so it is uniform on the interval (0, y). The conditional pdf of Y | X = x is given by

    f(y | x) = \frac{6(1 - y)}{3(1 - x)^2} = \frac{2(1 - y)}{(1 - x)^2},   x \le y \le 1.
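The constant and the marginals can again be checked numerically. This sketch is not in the original notes and assumes NumPy; it reduces the double integrals over the triangle to single integrals in y, as in the working above.

    import numpy as np

    y = np.linspace(0, 1, 2001)

    # 1/C = integral over 0 <= x <= y <= 1 of (1 - y) dx dy
    #     = integral from 0 to 1 of y(1 - y) dy.
    print(1 / np.trapz(y * (1 - y), y))  # ~6

    # Marginal of X at x = 0.3: integral from x to 1 of 6(1 - y) dy,
    # which should equal 3(1 - x)^2 = 1.47.
    x = 0.3
    yy = np.linspace(x, 1, 2001)
    print(np.trapz(6 * (1 - yy), yy), 3 * (1 - x) ** 2)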

You have met a number of results about expectation, means and variances. For example,

    E[X] = \int_{-\infty}^{\infty} x f_X(x) \, dx = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f_{X,Y}(x, y) \, dy \, dx,   (1)

since the marginal pdf is obtained from the joint pdf by integrating out y.

In general we have

    E[g(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f_{X,Y}(x, y) \, dy \, dx,

    Var[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2,

    Cov[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X] E[Y].

We say that X and Y are independent if f_{X,Y}(x, y) = f_X(x) f_Y(y). If X and Y are independent then Cov[X, Y] = 0, since

    E[XY] = \iint xy f_{X,Y}(x, y) \, dx \, dy
          = \iint xy f_X(x) f_Y(y) \, dx \, dy
          = \int x f_X(x) \, dx \int y f_Y(y) \, dy
          = E[X] E[Y].

The converse is not true in general: there are pairs of random variables with zero covariance which are not independent.

Example 1.3. For the random variables in Example 1.1, find E[V], E[W] and E[VW]. Show that V and W are independent.
    E[V] = \int_0^1 v \cdot 2v \, dv = \left[ \frac{2v^3}{3} \right]_0^1 = \frac{2}{3}.

By symmetry E[W] = 2/3 also.

    E[VW] = \int_0^1 \int_0^1 vw \cdot 4vw \, dv \, dw
          = \int_0^1 4w^2 \left[ \frac{v^3}{3} \right]_0^1 dw
          = \int_0^1 \frac{4w^2}{3} \, dw
          = \left[ \frac{4w^3}{9} \right]_0^1
          = \frac{4}{9}.

Thus Cov[V, W] = 4/9 - (2/3)(2/3) = 0. Moreover

    f(v, w) = 4vw = 2v \cdot 2w = f_V(v) f_W(w),

and hence V and W are independent.
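These values are easy to confirm by Monte Carlo, reusing the inverse-cdf sampler from the check after Example 1.1. A sketch, not part of the original notes, assuming NumPy:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    v = np.sqrt(rng.random(n))  # F_V(v) = v^2, so V = sqrt(U)
    w = np.sqrt(rng.random(n))

    print(v.mean(), w.mean())   # both ~2/3
    print((v * w).mean())       # ~4/9
    print(np.cov(v, w)[0, 1])   # ~0, consistent with independence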
Example 1.4. Find E[XY] for the random variables in Example 1.2.

    E[XY] = \int_0^1 \int_0^y xy \cdot 6(1 - y) \, dx \, dy
          = \int_0^1 6y(1 - y) \left[ \frac{x^2}{2} \right]_0^y dy
          = \int_0^1 3y^3 (1 - y) \, dy
          = 3 \left[ \frac{y^4}{4} - \frac{y^5}{5} \right]_0^1
          = 3 \left( \frac{1}{4} - \frac{1}{5} \right)
          = \frac{3}{20}.

You have also met the correlation \rho. It is defined by

    \rho = \frac{Cov[X, Y]}{\sqrt{Var[X] \, Var[Y]}}.
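For the random variables of Example 1.2, both E[XY] and \rho can be estimated by simulation. The sampler below is our own construction, not from the notes: f_Y(y) = 6y(1 - y) is exactly the Beta(2, 2) density, and given Y = y, X is uniform on (0, y), as shown above. Assuming NumPy, and noting that our own calculation gives \rho = 1/\sqrt{3} \approx 0.577 for this example:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000

    # f_Y(y) = 6y(1 - y) is the Beta(2, 2) density; X | Y = y ~ Uniform(0, y).
    y = rng.beta(2, 2, n)
    x = rng.random(n) * y

    print((x * y).mean())           # ~3/20 = 0.15, matching Example 1.4
    print(np.corrcoef(x, y)[0, 1])  # ~0.577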

You saw a proof that -1 \le \rho \le 1. You have also met conditional means and variances.

Theorem 1.1. E_X[X] = E_Y[E_{X|Y}[X | Y]].

Proof. We have

    E[X | Y = y] = \int x f(x | y) \, dx,

therefore

    E[E[X | Y]] = \int \left( \int x f(x | y) \, dx \right) f(y) \, dy
                = \iint x f(x | y) f(y) \, dx \, dy
                = \int x \left( \int f(x, y) \, dy \right) dx
                = \int x f(x) \, dx
                = E[X].

Corollary 1.1. E_X[g(X)] = E_Y[E_{X|Y}[g(X) | Y]], since we can replace x by g(x) in the previous proof.

We define the conditional variance by

    Var[X | Y] = E[X^2 | Y] - (E[X | Y])^2.

The following theorem was stated in MTH4106 but not proved.

Theorem 1.2. Var[X] = E[Var[X | Y]] + Var[E[X | Y]].

Proof.

    E[Var[X | Y]] = E[E[X^2 | Y]] - E[(E[X | Y])^2]                 (by definition of conditional variance)
                  = E[X^2] - E[(E[X | Y])^2]                        (by the corollary)
                  = E[X^2] - E[X]^2 + E[X]^2 - E[(E[X | Y])^2]      (add and subtract E[X]^2)
                  = Var[X] + E[X]^2 - E[(E[X | Y])^2]               (by definition of variance)
                  = Var[X] - \{E[(E[X | Y])^2] - (E[E[X | Y]])^2\}  (since E[E[X | Y]] = E[X], by Theorem 1.1)
                  = Var[X] - Var[E[X | Y]]                          (by definition of variance),

and rearranging gives the result.
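Theorem 1.2 can be verified exactly on the discrete dice example of Section 1.1, where X is the first die and Y is the maximum. A brute-force sketch, not part of the original notes, using only the Python standard library:

    from itertools import product
    from fractions import Fraction

    # Joint pmf of (X, Y) = (first die, max of two fair tetrahedral dice).
    pmf = {}
    for d1, d2 in product(range(1, 5), repeat=2):
        key = (d1, max(d1, d2))
        pmf[key] = pmf.get(key, Fraction(0)) + Fraction(1, 16)

    # Var[X] directly.
    e_x = sum(p * x for (x, y), p in pmf.items())
    e_x2 = sum(p * x**2 for (x, y), p in pmf.items())
    var_x = e_x2 - e_x**2

    # Conditional means and variances of X given each value of Y.
    ys = {y for (_, y) in pmf}
    f_y = {y: sum(p for (x, yy), p in pmf.items() if yy == y) for y in ys}
    e_given = {y: sum(p * x for (x, yy), p in pmf.items() if yy == y) / f_y[y] for y in ys}
    e2_given = {y: sum(p * x**2 for (x, yy), p in pmf.items() if yy == y) / f_y[y] for y in ys}
    var_given = {y: e2_given[y] - e_given[y] ** 2 for y in ys}

    # E[Var[X | Y]] + Var[E[X | Y]] should equal Var[X] exactly.
    e_var = sum(f_y[y] * var_given[y] for y in ys)
    mean_e = sum(f_y[y] * e_given[y] for y in ys)
    var_e = sum(f_y[y] * e_given[y] ** 2 for y in ys) - mean_e**2
    print(var_x == e_var + var_e)  # True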

