
Lesson 3. Random variables (Statistics I)


______________________________________________________________________

LESSON 3. RANDOM VARIABLES

3.1. ONE-DIMENSIONAL RANDOM VARIABLES

3.1.1. Definition

The outcomes in the sample space generated from a random experiment can be
qualitative or quantitative:

Example of qualitative outcomes: tossing a coin (heads or tails).


Example of quantitative outcomes: throwing a die (1, 2, 3, 4, 5, 6).

Quantifying the qualitative outcomes of a sample space can be useful: by means of
numerical measures, we may study the random behaviour.

The random variable concept provides us with a means for relating any outcome
to a quantitative measure. The random variable is a function that assigns a numerical
value to each elementary event in the sample space.

A random variable is a function X determined by the mapping

X : Ω → ℝ

that associates a real number with each elementary event.

Two possibilities:

a) When the experiment yields numerical outcomes, the possible values for the
variable can be the same as those of the experiment.
b) When the experiment yields qualitative outcomes, it will be necessary to
assign real values, that is, arbitrarily chosen values.

The random variable induces a probability distribution on the real numbers.

Property of the random variable

A real function of one or more random variables is also a random variable.

______________________________________________________________________
Degree in Economics / Estefanía Mourelle Espasandín 2016/2017

The random variable (associated with a random experiment) is said to be defined, or a
probability distribution model is said to be built, when the possible values for the
variable and their respective probabilities have been specified.

CUMULATIVE PROBABILITY DISTRIBUTION

Let X : Ω → ℝ be a random variable. The cumulative probability distribution

of X is a function F that assigns, to each real number x, the probability that the
variable takes on values less than or equal to that number. That is,

F(x) = P(X ≤ x) = P[X ∈ (−∞, x]], ∀x ∈ ℝ

In order to know the behaviour of a random experiment, it is essential to know
how the probability is spread among the possible values of the variable X from that
experiment.

Properties of cumulative probability distributions

1. 0 ≤ F(x) ≤ 1, ∀x ∈ ℝ

2. lim_{x→+∞} F(x) = 1

3. lim_{x→−∞} F(x) = 0

4. F is a monotonic non-decreasing function [x1 < x2 ⇒ F(x1) ≤ F(x2)]

5. P(x1 < X ≤ x2) = F(x2) − F(x1)

Particular case: P(X > x) = 1 − F(x)
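As a minimal sketch (a hypothetical example, not from the notes), these properties can be checked for the cumulative distribution of a fair six-sided die:

```python
# Cdf of a fair die: each outcome 1..6 has probability 1/6.
def F(x):
    """F(x) = P(X <= x); a step function that is 0 below 1 and 1 from 6 on."""
    if x < 1:
        return 0.0
    return min(int(x), 6) / 6

p_2_to_4 = F(4) - F(2)   # property 5: P(2 < X <= 4) = P({3, 4}) = 2/6
p_gt_4 = 1 - F(4)        # particular case: P(X > 4) = P({5, 6}) = 2/6
```
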


3.1.2. Discrete random variables: probability distribution function and cumulative probability distribution

A random variable is a discrete random variable if it can take on only a
countable (finite or countably infinite) number of values, among which the whole
probability is spread.

Probability distribution function (or probability function)

Let X be a discrete random variable taking on values xi (i = 1,..., n) with probabilities
pi = P(X=xi). The function that assigns to each value xi its probability pi is called the
probability distribution function, probability function or probability mass function
of the variable X:

P(X=xi) = pi        Probability function

Properties of the probability functions

1. 0 ≤ P(X=xi) ≤ 1, ∀xi

2. Σ_{i=1}^{∞} P(X=xi) = Σ_{i=1}^{∞} pi = 1

3. P(xk ≤ X ≤ xk+n) = Σ_{i=k}^{k+n} pi = pk + ... + pk+n

In the same manner, the distribution of a variable can be characterized with the
cumulative probability distribution, F(xi):

F(xi) = P(X ≤ xi) = Σ_{xj ≤ xi} P(X = xj)        Cumulative probability distribution


This expression reflects the relationship between the probability function and the
cumulative probability distribution.

The cumulative probability distribution shows jump discontinuities at the points xi
(where there is probability mass), of magnitude pi at each such point, and it is constant
in the intervals between jump points. Therefore, F(x) is a step function.

Conclusion: a discrete random variable is defined when the set of possible values it
can take on and their respective probabilities are determined [through F(xi) or
P(X=xi)].
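The definition above can be sketched with a small hypothetical pmf (the values 0, 1, 2 and their probabilities are made up for illustration):

```python
# A discrete r.v. is defined by its values and their probabilities.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}          # P(X = xi) = pi

total = sum(pmf.values())               # property 2: the pi must sum to one

def cdf(x):
    """F(x) = P(X <= x): sum the pmf over all xj <= x; a step function."""
    return sum(p for xj, p in pmf.items() if xj <= x)

jump_at_1 = cdf(1) - cdf(0.999)         # jump of magnitude p1 = 0.5 at x = 1
```
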

3.1.3. Continuous random variables: density functions and cumulative probability distribution

A random variable is a continuous random variable if it can take on an
uncountably infinite number of values; that is, it can take any value in an interval.

Continuous random variables do not have probability mass at single points:

P(X=x) = 0, ∀x ∈ ℝ,

so that in this framework we talk about probability in an infinitesimal framework,

P(x < X ≤ x + Δx), Δx → 0.

Probability density function

The probability density function of the random variable, f(x), is the derivative
of the cumulative probability distribution:

F′(x) = lim_{Δx→0} [F(x + Δx) − F(x)] / Δx = f(x)

Remember:

Discrete r.v. → probability function; continuous r.v. → density function



Therefore, the cumulative probability distribution is obtained in the following way:

F(a) = P(X ≤ a) = Σ P(x < X ≤ x + dx) = ∫_{−∞}^{a} f(x) dx

(summing over all infinitesimal intervals to the left of a)

The density function and the cumulative probability distribution are two different
manners of getting the probability of an event.

Properties of the density functions

1. Conditions a function f(x) must fulfil in order to be a density function of a
continuous random variable:

i. f(x) ≥ 0, ∀x ∈ ℝ

ii. The total area under the function f(x) is equal to one: ∫_{−∞}^{∞} f(x) dx = 1.

2. P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a ≤ X ≤ b) = F(b) − F(a) = ∫_{a}^{b} f(x) dx.

Thus, if we have a continuous variable, the probability associated with an interval will
be the same regardless of whether the interval is open, closed or mixed. However, this
statement is not true for a discrete variable, where the points X=a and X=b may have a
non-null associated probability.

Conclusion: a continuous random variable is defined when the interval of values


characterizing it and its cumulative probability distribution or its density function are
determined.
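A minimal sketch of this, using the hypothetical density f(x) = 2x on [0, 1] (0 elsewhere), whose probabilities are areas under f:

```python
# Density and probabilities as areas; the integral is approximated numerically.
def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=10_000):
    """Midpoint Riemann sum approximating the area under g between a and b."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total_area = integrate(f, 0.0, 1.0)    # property ii: should be ~1
p_open = integrate(f, 0.2, 0.5)        # P(0.2 < X < 0.5) = 0.5**2 - 0.2**2 = 0.21
```
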


3.1.4. Properties of one-dimensional random variables

In Descriptive Statistics we defined several measures for describing different
characteristics of the population where the data come from. The formulas and
parameters explained in Descriptive Statistics can be adapted to mathematical calculus
and used in Probability Calculus; only a change of perspective is necessary.
Despite the mentioned similarities, calculations in Descriptive Statistics refer to
sample values (if we study a certain population, each potential sample might have
different features, normally quite similar, but not the same), whereas in Probability
Calculus we deal with populations and their values. It is also true that population
features are quite frequently unknown, so that we have to work with estimations that
come from samples.
Focusing on that difference between sample and population, we see that between both
types of parameters there is a relationship very similar to the one between relative
frequency and probability under the frequency approach: the greater the size of the
sample used for the calculations, the closer the sample values are to the limit
represented by the population parameter.

Moments: expected value and variance

Moments with respect to the origin

The moments of a r.v. X are the expected values of certain functions of X.

Let us consider g(X) = X^r. The moment with respect to the origin of order r, α_r,
of the random variable X is

α_r = E(X^r)

Its expression, according to the type of random variable, is given by:


α_r = E(X^r) = Σ_{xi} xi^r P(X=xi)            (X discrete r.v.)

α_r = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx         (X continuous r.v.)

Property

If the moment with respect to the origin of order r (α_r) exists, then all the
moments with respect to the origin of order s (α_s), with s < r, exist.

Moments with respect to the mean

Let us consider g(X) = (X − μ)^r, where μ = E(X). The moment with respect to the
mean of order r, μ_r, of the random variable X is defined as

μ_r = E[(X − μ)^r]

Its expression, according to the type of random variable, will be:

μ_r = E[(X − μ)^r] = Σ_{xi} (xi − μ)^r P(X=xi)          (X discrete r.v.)

μ_r = E[(X − μ)^r] = ∫_{−∞}^{∞} (x − μ)^r f(x) dx       (X continuous r.v.)

Property

If the moment with respect to the mean of order r (μ_r) exists, then all the
moments with respect to the mean of order s (μ_s), with s < r, exist.


This property comes from the following theorem:

Moments with respect to the mean can be calculated, if they exist, as a function of
moments with respect to the origin.

Moments with respect to the mean ↔ moments with respect to the origin:

μ2 = E[(X − μ)^2] = α2 − α1^2

μ3 = E[(X − μ)^3] = α3 − 3·α1·α2 + 2·α1^3

μ4 = E[(X − μ)^4] = α4 − 4·α1·α3 + 6·α1^2·α2 − 3·α1^4
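These relations can be checked numerically; a sketch for a fair die (illustrative example; `alpha(r)` is the raw moment E(X^r), `mu(r)` the central moment computed directly):

```python
# Check that central moments follow from raw moments for a fair die.
xs = range(1, 7)
p = 1 / 6

def alpha(r):
    """Raw moment: E(X**r)."""
    return sum(x**r * p for x in xs)

def mu(r):
    """Central moment E[(X - mean)**r], computed directly from the pmf."""
    m = alpha(1)
    return sum((x - m)**r * p for x in xs)

a1, a2, a3, a4 = alpha(1), alpha(2), alpha(3), alpha(4)
mu2 = a2 - a1**2
mu3 = a3 - 3*a1*a2 + 2*a1**3
mu4 = a4 - 4*a1*a3 + 6*a1**2*a2 - 3*a1**4
```
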

EXPECTED VALUE

The expected value or mathematical expectation of g(X), a function of the
random variable X, is defined as

E[g(X)] = Σ_{xi} g(xi) P(X=xi)              (X discrete r.v.)

E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx           (X continuous r.v.)

Expected value of X

If g(X) = X, the expected value or mathematical expectation of X would be

E(X) = Σ_{xi} xi P(X=xi)                    (X discrete r.v.)

E(X) = ∫_{−∞}^{∞} x f(x) dx                 (X continuous r.v.)


These expressions correspond to the definition of the mean of a random variable X,
so that in this case we may talk indistinctly about the expected value, the
mathematical expectation or the mean of the variable X.

Properties of the expected value

1st) The mathematical expectation of a constant equals the constant itself

E(c) = c,  c constant

2nd) A change of origin in the r.v. X affects its expected value

E(X + c) = E(X) + c,  c constant

3rd) A change of scale in the r.v. X affects its expected value

E(c·X) = c·E(X),  c constant

From properties 2 and 3, it is concluded that the mathematical expectation of a linear
combination of the random variable X is given by:

E(a + b·X) = a + b·E(X)

4th) The mathematical expectation of the deviations of the values with respect to the
mean of the random variable equals zero

E(X − μ) = 0
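The four properties can be verified on a small hypothetical pmf (the values 10, 20, 30 are made up for illustration):

```python
# Linearity of the expectation, checked on a discrete r.v.
pmf = {10: 0.3, 20: 0.5, 30: 0.2}

def E(g):
    """E[g(X)] for a discrete r.v.: the sum of g(xi) * pi."""
    return sum(g(x) * p for x, p in pmf.items())

mean = E(lambda x: x)                 # E(X) = 3 + 10 + 6 = 19
shifted = E(lambda x: x + 4)          # property 2: E(X + c) = E(X) + c
scaled = E(lambda x: 3 * x)           # property 3: E(cX) = c E(X)
linear = E(lambda x: 5 + 2 * x)       # E(a + bX) = a + b E(X)
centered = E(lambda x: x - mean)      # property 4: E(X - mean) = 0
```
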

Measures of position

These measures show the location or the values around which the variable is set. They
can be grouped into:


i) Central position measures

a) Mean, expected value or mathematical expectation (μ)

μ = α1 = E(X)

b) Median (Me)

Value of the random variable X that simultaneously satisfies

P(X ≤ Me) = P(X ≥ Me) = 0.5

c) Mode (Mo)

Value of the r.v. X that satisfies:

P(X=Mo) = max_{xi} P(X=xi),  discrete case

f(Mo) = max_{x} f(x),  continuous case

ii) Non-central position measures: quantiles

These measures are based on the median.

Quantiles are values from the distribution that divide it into equal parts. The most
common types of quantiles are:

- Quartiles: these are three values that divide the distribution into four equiprobable parts.

- Deciles: these are nine values that divide the distribution into ten equiprobable parts.

- Percentiles: these are ninety-nine values that divide the distribution into a hundred
equiprobable parts.


Measures of dispersion

These measures show the degree of variability of the values from a random
variable with respect to the mean value considered as the representative one for the
distribution. The main measures are the variance and the standard deviation.

VARIANCE

Let us consider g(X) = (X − μ)^2. The variance is defined as the moment with respect
to the mean of order 2 and it is denoted by Var(X), V(X) or σ^2:

Var(X) = σ^2 = μ2 = E[(X − μ)^2] = Σ_{xi} (xi − μ)^2 P(X=xi)          (X discrete r.v.)

Var(X) = σ^2 = μ2 = E[(X − μ)^2] = ∫_{−∞}^{∞} (x − μ)^2 f(x) dx       (X continuous r.v.)

Properties of the variance

1st) The variance cannot be negative

2nd) A change of origin in the r.v. X does not affect its variance

Var(X + c) = Var(X),  c constant

3rd) A change of scale in the r.v. X affects its variance

Var(c·X) = c^2 · Var(X),  c constant

From properties 2 and 3, the variance of a linear combination of the r.v. X can be
determined as follows:

Var(a + b·X) = b^2 · Var(X)

4th) An abbreviated way of calculation for the variance of X is given by the expression

Var(X) = σ^2 = μ2 = α2 − α1^2 = E(X^2) − μ^2

5th) The variance of the sum of independent random variables equals the sum of the
variances.

General case:

Var(X + Y) = Var(X) + Var(Y) + 2·Cov(X,Y)
Var(X − Y) = Var(X) + Var(Y) − 2·Cov(X,Y)

If the variables are independent, Cov(X,Y) = 0, so that

Var(X + Y) = Var(X) + Var(Y)
Var(X − Y) = Var(X) + Var(Y)
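The change-of-origin, change-of-scale and abbreviated-formula properties can be checked on a hypothetical pmf:

```python
# Variance properties, verified for a small discrete r.v.
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

def E(g):
    return sum(g(x) * p for x, p in pmf.items())

def var(g):
    """Variance of the transformed variable g(X), from its own mean."""
    m = E(g)
    return E(lambda x: (g(x) - m) ** 2)

v = var(lambda x: x)                              # Var(X) = 0.5
v_shift = var(lambda x: x + 7)                    # change of origin: unchanged
v_scale = var(lambda x: 3 * x)                    # change of scale: 9 * Var(X)
v_short = E(lambda x: x**2) - E(lambda x: x)**2   # abbreviated: E(X^2) - mean^2
```
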

STANDARD DEVIATION

It is the square root of the variance, taken with positive sign, and it is denoted by σ:

σ = +√(σ^2) = +√(E[(X − μ)^2])

Properties of the standard deviation

1st) The standard deviation cannot be negative (by definition)

2nd) A change of origin in the r.v. X does not affect its standard deviation

σ(X + c) = σ(X),  c constant

3rd) A change of scale in the r.v. X affects its standard deviation

σ(c·X) = |c| · σ(X),  c constant

4th) σ = +√(α2 − α1^2) = +√(E(X^2) − μ^2)

STANDARDIZATION OF A RANDOM VARIABLE

Standardization is a procedure employed to homogenize variables with different means
and different variances, so as to make comparisons between them.

From a r.v. X ~ D(μ, σ^2), we obtain the standardized random variable Z by
applying the following linear transformation:

Z = (X − μ) / σ

Z is a dimensionless r.v. that has zero mean and unit variance.
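A minimal sketch of standardization on a hypothetical pmf, checking that Z has zero mean and unit variance:

```python
# Standardization: Z = (X - mean) / sd.
pmf = {2: 0.2, 5: 0.3, 9: 0.5}          # illustrative distribution

def E(g):
    return sum(g(x) * p for x, p in pmf.items())

mean = E(lambda x: x)                            # 0.4 + 1.5 + 4.5 = 6.4
sd = E(lambda x: (x - mean) ** 2) ** 0.5

z_mean = E(lambda x: (x - mean) / sd)            # zero mean
z_var = E(lambda x: ((x - mean) / sd) ** 2)      # unit variance
```
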

3.1.5. Markov's theorem. Tchebychev's inequality

Let X be a random variable and g(X) a function of that variable such that g(x) ≥ 0,
∀x. Then, for any constant c > 0 it holds that:

P[g(X) ≥ c] ≤ E[g(X)] / c

In the same manner, P[g(X) < c] ≥ 1 − E[g(X)] / c.

Tchebychev's inequality

It is a particular case of Markov's theorem: taking g(X) = (X − μ)^2 and c = k^2·σ^2,
the following expression is obtained:

P(|X − μ| ≥ k·σ) ≤ 1/k^2

In the same manner, we have: P(|X − μ| < k·σ) ≥ 1 − 1/k^2.

Tchebychev's inequality allows us to set upper and lower bounds on the probability of
intervals of the type [μ ± kσ] (or of their complements) even though the probability
distribution of X is not known.

By means of Tchebychev's inequality, we can approximately bound the probability of
an interval knowing only μ and σ.
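The bound can be checked against the exact tail probability of a fair die (illustrative example; k = 1.2 is an arbitrary choice):

```python
# Tchebychev: P(|X - mean| >= k*sd) <= 1/k**2, verified for a fair die.
pmf = {x: 1 / 6 for x in range(1, 7)}

mean = sum(x * p for x, p in pmf.items())                       # 3.5
sd = sum((x - mean) ** 2 * p for x, p in pmf.items()) ** 0.5    # ~1.708

k = 1.2
tail = sum(p for x, p in pmf.items() if abs(x - mean) >= k * sd)
bound = 1 / k**2    # the exact tail probability never exceeds this bound
```
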


3.2. TWO-DIMENSIONAL RANDOM VARIABLES

3.2.1. Introduction: relationship between variables

In a two-dimensional framework we have two one-dimensional random variables and
we may be interested in studying their potential relationships (dependence); for doing
this, we need to define a two-dimensional random variable (X,Y).

The two-dimensional random variable on a sample space is defined as a function (X,Y),

(X,Y) : Ω → ℝ^2

that associates each elementary event with a pair of real numbers.

3.2.2. Cumulative probability distributions

The joint cumulative probability distribution F(x,y) for the random variable
(X,Y) is an application ℝ^2 → [0,1] defined as follows:

F(x,y) = P(X ≤ x; Y ≤ y), ∀(x,y) ∈ ℝ^2

Properties

1. 0 ≤ F(x,y) ≤ 1, ∀(x,y) ∈ ℝ^2

2. F(−∞, y) = P(X < −∞; Y ≤ y) = 0 and F(x, −∞) = P(X ≤ x; Y < −∞) = 0, ∀x, y

3. F(+∞, +∞) = P(X < +∞; Y < +∞) = 1

4. F is a monotonic non-decreasing function:

x1 < x2 ⇒ F(x1,y) ≤ F(x2,y), ∀y;   y1 < y2 ⇒ F(x,y1) ≤ F(x,y2), ∀x


Marginal cumulative probability distribution

Given a certain two-dimensional random variable (X,Y), the marginal cumulative
probability distribution for X is given by

P(X ≤ x; Y < +∞) = F(x, +∞) = F1(x)

and the marginal cumulative probability distribution for Y is

P(X < +∞; Y ≤ y) = F(+∞, y) = F2(y)

F1(x) and F2(y) are one-dimensional cumulative probability distributions that indicate
the behaviour of each variable regardless of the behaviour of the other variable.

Conditional cumulative probability distribution

Given a certain two-dimensional random variable (X,Y), the cumulative probability
distribution for X conditional on Y=y is given by

F(x|y) = P(X ≤ x | Y = y)

(Analogous for Y).

This function reflects the behaviour of one of the variables when the other is
subject to certain condition.

3.2.3. Discrete two-dimensional random variables: probability functions

A two-dimensional random variable is discrete when its components are
discrete one-dimensional random variables, that is, when the whole probability is
spread among a finite or countably infinite number of points with positive probability.

Joint probability function: definition


The joint probability function for the random variable (X,Y) has the following
expression:

P(X=xi; Y=yj) = pij

Properties

1. 0 ≤ pij ≤ 1, ∀i, j

2. Σ_{i=1}^{∞} Σ_{j=1}^{∞} pij = 1

The joint cumulative probability distribution for a discrete random variable (X,Y)
is given by:

F(xi,yj) = P(X ≤ xi; Y ≤ yj) = Σ_{xr ≤ xi} Σ_{ys ≤ yj} P(X = xr; Y = ys)

Conclusion: a discrete two-dimensional random variable is completely defined when


the set of potential values that it can take and their respective probabilities are
determined (by means of the joint cumulative probability distribution or the joint
probability function).
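A minimal sketch of such a definition, with a hypothetical table of joint probabilities p_ij:

```python
# A discrete (X, Y) defined by a table of joint probabilities.
joint = {(0, 0): 0.1, (0, 1): 0.2,
         (1, 0): 0.3, (1, 1): 0.4}

total = sum(joint.values())          # property 2: the p_ij sum to one

def F(x, y):
    """Joint cdf: sum p_ij over all points with xi <= x and yj <= y."""
    return sum(p for (xi, yj), p in joint.items() if xi <= x and yj <= y)
```
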

Marginal probability function

Let (X,Y) be a discrete random variable. The marginal probability function for X is

pi· = P(X=xi) = Σ_{yj} P(X=xi, Y=yj)

The marginal probability function for Y is

p·j = P(Y=yj) = Σ_{xi} P(X=xi, Y=yj).

We obtain the marginal cumulative probability distributions for (X,Y):

F1(xi) = Σ_{xr ≤ xi} Σ_{yj} P(X=xr; Y=yj);   F2(yj) = Σ_{xi} Σ_{ys ≤ yj} P(X=xi; Y=ys)


Conditional probability function

Let (X,Y) be a discrete random variable: the conditional probability function for X,
given the value yj with P(Y=yj) > 0, is

P(xi | yj) = P(X=xi | Y=yj) = P(X=xi, Y=yj) / P(Y=yj)

The conditional probability function for Y, given the value xi with P(X=xi) > 0, is

P(yj | xi) = P(Y=yj | X=xi) = P(X=xi, Y=yj) / P(X=xi)

The expressions for the conditional cumulative probability distributions for (X,Y) are:

F(xi | yj) = P(X ≤ xi | Y=yj) = [Σ_{xr ≤ xi} P(X=xr, Y=yj)] / P(Y=yj)

F(yj | xi) = P(Y ≤ yj | X=xi) = [Σ_{ys ≤ yj} P(X=xi, Y=ys)] / P(X=xi)

3.2.4. Continuous two-dimensional random variable: density functions

A two-dimensional random variable is continuous if the one-dimensional
variables that compose it are also continuous, that is, if it takes an uncountably
infinite number of values.

There is no point-mass probability in this context,

P(X=x, Y=y) = 0, ∀(x,y) ∈ ℝ^2

so that we talk about probability in an infinitesimal rectangle,

P(x < X ≤ x + Δx, y < Y ≤ y + Δy), Δx → 0, Δy → 0.



Joint density function: definition

The joint density function, f(x,y), for a random variable is the derivative with
respect to x and y of the joint cumulative probability distribution:

lim_{Δx→0, Δy→0} P(x < X ≤ x+Δx, y < Y ≤ y+Δy) / (Δx·Δy) = ∂^2 F(x,y) / ∂x ∂y = f(x,y)

The joint cumulative probability distribution is given by

F(x,y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u,v) dv du

Properties of the joint density function

1. Conditions a function f(x,y) must verify so as to be a joint density function of a
continuous random variable:

i. f(x,y) ≥ 0, ∀(x,y) ∈ ℝ^2

ii. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x,y) dx dy = 1

2. P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_{a}^{b} ∫_{c}^{d} f(x,y) dy dx

Conclusion: a two-dimensional random variable is completely defined when we
determine the interval of values that characterizes it, as well as its joint
cumulative probability distribution or its joint density function.

Marginal density function

Given a continuous variable (X,Y), the marginal density function for X is


f1(x) = ∫_{−∞}^{∞} f(x,y) dy

and the marginal density function for Y is

f2(y) = ∫_{−∞}^{∞} f(x,y) dx.

From these functions we calculate the corresponding marginal cumulative
probability distributions, the one for X being

F1(x) = P(X ≤ x, Y < +∞) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f(u,v) dv du = ∫_{−∞}^{x} f1(u) du,

and the one for Y,

F2(y) = P(X < +∞, Y ≤ y) = ∫_{−∞}^{∞} ∫_{−∞}^{y} f(u,v) dv du = ∫_{−∞}^{y} f2(v) dv.

Conditional density function

The expressions for the conditional density functions are:

For X:  f(x|y) = f(x,y) / f2(y)      (condition: y < Y ≤ y+Δy)

For Y:  f(y|x) = f(x,y) / f1(x)      (condition: x < X ≤ x+Δx)

The conditional cumulative probability distributions are given by:

F(x|y) = lim_{Δy→0} P(X ≤ x | y < Y ≤ y+Δy) = ∫_{−∞}^{x} f(u|y) du

F(y|x) = lim_{Δx→0} P(Y ≤ y | x < X ≤ x+Δx) = ∫_{−∞}^{y} f(v|x) dv


3.2.5. Independence of random variables

Sometimes, knowing in advance the behaviour of a random variable X does not


influence the behaviour of the other one, Y; in this case, X and Y are said to be
independent random variables.

Two random variables, X and Y, are independent if and only if:

- Discrete case: P(X=xi, Y=yj) = P(X=xi) · P(Y=yj) = pi· · p·j, ∀xi, yj

- Continuous case: f(x,y) = f1(x) · f2(y)

In an alternative way, we can prove the independence property by means of the
conditional probability, so that there is independence if and only if:

- Discrete case: P(X=xi | Y=yj) = P(X=xi) and P(Y=yj | X=xi) = P(Y=yj), ∀xi, yj

- Continuous case: f(x|y) = f1(x), f(y|x) = f2(y).
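The discrete criterion can be sketched directly: the joint pmf must factor into the marginals at every point (the joint values below are hypothetical and built to factorize):

```python
# Independence check: P(X=xi, Y=yj) == P(X=xi) * P(Y=yj) at every point.
joint = {(0, 0): 0.12, (0, 1): 0.28,
         (1, 0): 0.18, (1, 1): 0.42}

px, py = {}, {}
for (x, y), p in joint.items():      # marginals: sum over the other variable
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

independent = all(abs(p - px[x] * py[y]) < 1e-12
                  for (x, y), p in joint.items())
```
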

3.2.6. Characteristics of the two-dimensional random variables

JOINT TWO-DIMENSIONAL MOMENTS. COVARIANCE

The mathematical expectation or expected value for g(X,Y), a function of the
two-dimensional random variable (X,Y), is defined as

E[g(X,Y)] = Σ_{xi} Σ_{yj} g(xi,yj) P(X=xi, Y=yj)            ((X,Y) discrete r.v.)

E[g(X,Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x,y) f(x,y) dx dy       ((X,Y) continuous r.v.)


Particular cases of interest

1st) Two-dimensional moments with respect to the origin

The two-dimensional moment with respect to the origin of order (r,s),
α_rs, for the r.v. (X,Y) is obtained when g(X,Y) = X^r · Y^s is considered:

α_rs = E(X^r · Y^s) = Σ_{xi} Σ_{yj} xi^r · yj^s · P(X=xi, Y=yj)          ((X,Y) discrete r.v.)

α_rs = E(X^r · Y^s) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^r y^s f(x,y) dx dy        ((X,Y) continuous r.v.)

Property

If r=0 or s=0, the moments become marginal moments, that is, they can be
calculated from the corresponding marginal distributions of the variable.

2nd) Two-dimensional moments with respect to the mean

If g(X,Y) = [X − E(X)]^r · [Y − E(Y)]^s is considered, the two-dimensional moment with
respect to the mean of order (r,s), μ_rs, for the random variable (X,Y) is defined as:

μ_rs = E[(X − μX)^r (Y − μY)^s] = Σ_{xi} Σ_{yj} (xi − μX)^r (yj − μY)^s P(X=xi, Y=yj)          ((X,Y) discrete r.v.)

μ_rs = E[(X − μX)^r (Y − μY)^s] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μX)^r (y − μY)^s f(x,y) dx dy     ((X,Y) continuous r.v.)


Property

If r=0 or s=0, the moments become marginal moments, that is, they can be
calculated from the corresponding marginal distributions of the variable.

COVARIANCE (σXY)

Covariance is the two-dimensional moment with respect to the mean of
order (1,1), μ11:

Cov(X,Y) = σXY = μ11 = Σ_{xi} Σ_{yj} (xi − μX)(yj − μY) P(X=xi, Y=yj)          ((X,Y) discrete r.v.)

Cov(X,Y) = σXY = μ11 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μX)(y − μY) f(x,y) dx dy     ((X,Y) continuous r.v.)

The covariance measures the association between the values of X and Y.

Properties of the covariance

1st) A change of origin in the r.v. (X,Y) does not affect its covariance

Cov(X + a, Y + b) = Cov(X,Y),  a, b constant

2nd) A change of scale in the r.v. (X,Y) affects its covariance

Cov(a·X, b·Y) = a·b·Cov(X,Y),  a, b constant

Properties 1 and 2 determine the covariance for a linear combination of the variables X and Y:

Cov(a·X + c, b·Y + d) = a·b·Cov(X,Y),  a, b, c, d constant
3rd) Abbreviated formula for calculating the covariance:

Cov(X,Y) = σXY = μ11 = α11 − α10·α01 = E(X·Y) − E(X)·E(Y)

Interpretation of the sign of cov (X,Y)

The sign of the covariance of two rr.vv. reflects the direction of the linear relationship
that might exist between them:

Positive sign: positive relationship between X and Y

Zero: no linear relationship between X and Y


XY E X X Y Y

Negative sign: negative relationship between X and Y
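A minimal sketch on a hypothetical joint pmf, computing the covariance both directly and with the abbreviated formula Cov(X,Y) = E(XY) − E(X)E(Y):

```python
# Covariance from a joint pmf; the values are chosen so X and Y move together.
joint = {(0, 0): 0.3, (0, 1): 0.1,
         (1, 0): 0.1, (1, 1): 0.5}

def E(g):
    return sum(g(x, y) * p for (x, y), p in joint.items())

mx, my = E(lambda x, y: x), E(lambda x, y: y)
cov_direct = E(lambda x, y: (x - mx) * (y - my))
cov_short = E(lambda x, y: x * y) - mx * my    # abbreviated formula
```
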

GENERAL PROPERTIES FOR THE EXPECTATION AND THE VARIANCE

Let h1(X) and h2(Y) be two functions of the random variables X and Y, respectively.

1st) E [h1(X) + h2(Y)] = E [h1(X)] + E [h2(Y)]

E [h1(X) - h2(Y)] = E [h1(X)] - E [h2(Y)]

Particular case:
E (X + Y) = E(X) + E(Y)

E (X - Y) = E(X) - E(Y)

2nd) If X and Y are independent:

E [h1(X) . h2(Y)] = E [h1(X)] . E [h2(Y)]

Particular case:
E(X · Y) = E(X) · E(Y) ,  h1(X)=X, h2(Y)=Y

3rd) We had already seen the variance for the sum and for the difference of two
random variables,
Var (X + Y) = Var (X) + Var (Y) + 2 . Cov (X,Y)

Var (X - Y) = Var (X) + Var (Y) - 2 . Cov (X,Y)



4th) If X and Y are statistically independent random variables, it is verified that
Cov(X,Y) = 0.

5th) If X and Y are statistically independent random variables, we have that:

Var(X + Y) = Var(X) + Var(Y)

Var(X − Y) = Var(X) + Var(Y)

Important: Independent random variables ⇒ null covariance

Null covariance ⇏ independent random variables
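The classic counterexample behind this warning can be sketched directly: X uniform on {−1, 0, 1} and Y = X² have zero covariance but are clearly dependent:

```python
# Zero covariance without independence: Y is a deterministic function of X.
joint = {(x, x**2): 1 / 3 for x in (-1, 0, 1)}   # P(X=x, Y=x**2) = 1/3

def E(g):
    return sum(g(x, y) * p for (x, y), p in joint.items())

cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)

# Dependence: P(X=1, Y=1) = 1/3 while P(X=1) * P(Y=1) = (1/3) * (2/3) = 2/9.
factorizes = abs(joint[(1, 1)] - (1 / 3) * (2 / 3)) < 1e-12
```
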

CONDITIONAL TWO-DIMENSIONAL MOMENTS

Given the random variable (X,Y), the mathematical expectation for the r.v. X
conditional on a certain value of Y (Y=y) is defined as:

E(X | Y=y) = Σ_{xi} xi · P(X=xi | Y=y)          ((X,Y) discrete r.v.)

E(X | Y=y) = ∫_{−∞}^{∞} x · f(x|y) dx           ((X,Y) continuous r.v.)

The conditional expectation indicates the expected value of one variable given the
value of the other variable.

Property

If the variables X and Y are statistically independent it is satisfied that:

E(X | Y=y) = E(X)     and     E(Y | X=x) = E(Y)
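A minimal sketch of the discrete conditional expectation, from a hypothetical joint pmf:

```python
# Conditional expectation E(X | Y=y) from a joint pmf.
joint = {(0, 0): 0.1, (1, 0): 0.3,
         (0, 1): 0.4, (1, 1): 0.2}

def cond_exp_X(y):
    """E(X | Y=y): weight each xi by the conditional pmf P(X=xi | Y=y)."""
    p_y = sum(p for (x, yy), p in joint.items() if yy == y)   # marginal P(Y=y)
    return sum(x * p / p_y for (x, yy), p in joint.items() if yy == y)

e_given_0 = cond_exp_X(0)   # 0.3 / 0.4 = 0.75: here E(X | Y=y) changes with y,
e_given_1 = cond_exp_X(1)   # 0.2 / 0.6 = 1/3,  so X and Y are not independent
```
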

