
Probability and Statistics

B Madhav Reddy

madhav.b@srmap.edu.in

B Madhav Reddy Probability and Statistics 1 / 17


Definition
If X is a discrete random variable having a p.m.f. p(x), then the
expectation, or the expected value, of X, denoted by E[X], is defined by

E[X] = ∑_{x : p(x) > 0} x p(x)

Fact 7. If X is a discrete random variable that takes on one of the values
x_i, i ≥ 1, with respective probabilities p(x_i), then, for any real-valued
function g,

E[g(X)] = ∑_i g(x_i) p(x_i)

Fact 8. If a and b are constants, then

E [aX + b] = aE [X ] + b
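The definition of expectation and Facts 7 and 8 can be checked numerically. Below is a minimal sketch in Python, using exact fractions and a fair die as a hypothetical example distribution (the die is my choice of illustration, not part of the facts themselves):

```python
from fractions import Fraction as F

# Hypothetical example: p.m.f. of a fair die
pmf = {x: F(1, 6) for x in range(1, 7)}

def expect(pmf, g=lambda x: x):
    # Fact 7: E[g(X)] = sum of g(x) p(x) over all x with p(x) > 0
    return sum(g(x) * p for x, p in pmf.items() if p > 0)

EX = expect(pmf)  # E[X] = 7/2 for a fair die
# Fact 8: E[aX + b] = a E[X] + b, checked here for a = 2, b = 3
assert expect(pmf, lambda x: 2 * x + 3) == 2 * EX + 3
```

Using `Fraction` instead of floats keeps every probability exact, so the identities hold with `==` rather than only approximately.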



● The probability concept of expectation is analogous to the physical
concept of the center of gravity of a distribution of mass.

Definition
Given a random variable X, its expected value, E[X], is also referred to as
the mean or the first moment of X.
The quantity E[X^n], n ≥ 1, is called the nth moment of X.

Given a random variable X and its distribution function F, our aim is
to summarize the essential properties of F by some measures
One such measure is the expectation or mean E[X]
But E[X] is only the weighted average of the possible values of X and
hence does not tell us anything about the spread or variation of the
values of X



Example: Consider the following three random variables

W = 0 with probability 1

Y = −1 with probability 1/2
    +1 with probability 1/2

Z = −100 with probability 1/2
    +100 with probability 1/2

Calculate E[W], E[Y] and E[Z]
We get E[W] = E[Y] = E[Z] = 0, but the values of Z are more spread
out from 0 than those of Y, whose values in turn are more spread out
from 0 than the values of W.
Having measured the mean, we will now try to measure the spread of a random
variable X from its mean E[X] = µ (say), i.e., |X − µ|
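A quick sketch in Python makes the point concrete: all three variables have mean 0, yet their average distances from 0 differ wildly (representing each variable as a value-to-probability map is my own convention here):

```python
from fractions import Fraction as F

# The three random variables from the example, as value -> probability maps
W = {0: F(1)}
Y = {-1: F(1, 2), +1: F(1, 2)}
Z = {-100: F(1, 2), +100: F(1, 2)}

def expect(pmf, g=lambda x: x):
    return sum(g(x) * p for x, p in pmf.items())

means = [expect(rv) for rv in (W, Y, Z)]               # all three are 0
spreads = [expect(rv, abs) for rv in (W, Y, Z)]        # E[|X - 0|]: 0, 1, 100
```

The means are identical, but the spreads E[|X − µ|] are 0, 1 and 100, which is exactly the information the mean alone fails to capture.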



Instead of measuring ∣X − µ∣ for each value of the random variable X ,
we will look at the average or expectation of this spread from the
mean
That is, we will look at E [∣X − µ∣]
Mathematically, ∣ ⋅ ∣ is not so convenient to handle, so we will look at
a similar quantity (X − µ)2
Thus, in order to measure the spread of the values of random variable
X from its mean, we will calculate E [(X − µ)2 ]

Definition
If X is a random variable with mean µ, then the variance of X , denoted
by Var(X ), is defined by

Var(X ) = E [(X − µ)2 ]



Fact 9. Var(X) = E[X^2] − (E[X])^2
Proof.

Var(X) = E[(X − µ)^2]
       = ∑_x (x − µ)^2 p(x)
       = ∑_x (x^2 − 2µx + µ^2) p(x)
       = ∑_x x^2 p(x) − 2µ ∑_x x p(x) + µ^2 ∑_x p(x)
       = E[X^2] − 2µ^2 + µ^2
       = E[X^2] − µ^2

⟹ Var(X) = E[X^2] − (E[X])^2
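Fact 9 is easy to verify numerically. The sketch below checks that the definition of variance and the shortcut formula agree on a hypothetical p.m.f. of my own choosing:

```python
from fractions import Fraction as F

# Hypothetical p.m.f., chosen only to check Fact 9
pmf = {0: F(1, 4), 1: F(1, 2), 3: F(1, 4)}

def expect(pmf, g=lambda x: x):
    return sum(g(x) * p for x, p in pmf.items())

mu = expect(pmf)
var_def = expect(pmf, lambda x: (x - mu) ** 2)           # E[(X - mu)^2]
var_shortcut = expect(pmf, lambda x: x ** 2) - mu ** 2   # E[X^2] - (E[X])^2
assert var_def == var_shortcut
```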



Example: Calculate Var(X) if X represents the outcome when a fair die is
rolled.
We already saw that E[X] = 7/2
Now, E[X^2] = ?

E[X^2] = 1^2(1/6) + 2^2(1/6) + 3^2(1/6) + 4^2(1/6) + 5^2(1/6) + 6^2(1/6) = 91/6

Thus, Var(X) = 91/6 − (7/2)^2 = 35/12
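The arithmetic for the die can be reproduced exactly in a few lines of Python:

```python
from fractions import Fraction as F

# Fair die: each outcome 1..6 has probability 1/6
pmf = {x: F(1, 6) for x in range(1, 7)}

EX  = sum(x * p for x, p in pmf.items())       # E[X]   = 7/2
EX2 = sum(x ** 2 * p for x, p in pmf.items())  # E[X^2] = 91/6
var = EX2 - EX ** 2                            # Var(X) = 35/12
```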



Problem: A box contains 5 red and 5 blue marbles. Two marbles are
drawn randomly. If they are the same color, then you win $1.10; if they are
different colors, then you win −$1.00 (that is, you lose $1.00). Calculate
(a) the expected value of the amount you win;
(b) the variance of the amount you win.
Solution: Let X be the amount of winnings.
X can take values 1.10 and −1.00
The probability of X = 1.10 is the probability of drawing 2 marbles of the
same color, which is C(2,1)·C(5,2)/C(10,2) = 4/9

(a) Thus, P{X = −1} = 1 − 4/9 = 5/9

Therefore, E[X] = (1.10)(4/9) − (1)(5/9) = −1/15

So you are expected to lose money!



(b) We will use the formula Var(X) = E[X^2] − (E[X])^2
Now, E[X^2] = (1.10)^2 (4/9) + (1)^2 (5/9)

Hence, Var(X) = (1.10)^2 (4/9) + (1)^2 (5/9) − (1/15)^2 ≈ 1.089
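Working the marble problem with exact fractions avoids any rounding; a sketch, writing the winnings 1.10 as the fraction 11/10:

```python
from fractions import Fraction as F
from math import comb

# P(same color) = choose a color, then 2 marbles of it, out of C(10,2) pairs
p_same = F(2 * comb(5, 2), comb(10, 2))        # = 4/9

# Winnings: +1.10 with prob 4/9, -1.00 with prob 5/9
pmf = {F(11, 10): p_same, F(-1): 1 - p_same}

EX  = sum(x * p for x, p in pmf.items())       # E[X] = -1/15
EX2 = sum(x ** 2 * p for x, p in pmf.items())
var = EX2 - EX ** 2                            # exactly 49/45, about 1.089
```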



Fact 10. For any constants a and b,

Var(aX + b) = a2 Var(X )

Proof. Let µ = E [X ] and by Fact 8, E [aX + b] = aµ + b. Therefore,

Var(aX + b) = E [(aX + b − aµ − b)2 ]


= E [a2 (X − µ)2 ]
= a2 E [(X − µ)2 ]
= a2 Var(X )
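Fact 10 can also be sanity-checked numerically; the sketch below uses a hypothetical two-point random variable with Var(X) = 1 and transforms its p.m.f. directly:

```python
from fractions import Fraction as F

# Hypothetical X with values -1, +1, each with probability 1/2, so Var(X) = 1
pmf = {-1: F(1, 2), 1: F(1, 2)}

def var(pmf):
    mu = sum(x * p for x, p in pmf.items())
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

a, b = 3, 7
shifted = {a * x + b: p for x, p in pmf.items()}   # p.m.f. of aX + b
assert var(shifted) == a ** 2 * var(pmf)           # Fact 10: Var(aX+b) = a^2 Var(X)
```

Note the shift b drops out entirely, matching the intuition that translating a distribution does not change its spread.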

Definition
The square root of Var(X) is called the standard deviation of X,
and we denote it by SD(X). That is,

SD(X) = √Var(X)



Expectation/mean ⇐⇒ center of gravity
Variance ⇐⇒ moment of inertia

Problem: Let X be a discrete random variable. If E [X ] = 1 and


Var(X ) = 5, find
(a) E [(2 + X )2 ]
(b) Var(4 + 3X )
Solution: (a)

E[(2 + X)^2] = E[4 + 4X + X^2]
             = 4 + 4E[X] + E[X^2]

Now, E[X^2] = Var(X) + (E[X])^2 = 5 + 1^2 = 6

Thus, E[(2 + X)^2] = 4 + (4 × 1) + 6 = 14
(b) Var(4 + 3X ) = 32 Var(X ) = 45



Suppose that a trial, or an experiment, whose outcome can be
classified as either a success or a failure, is performed
Let X be the random variable which takes the value 1 when the outcome
is a success and 0 when the outcome is a failure
The probability mass function of X is given by
p(0) = P{X = 0} = 1 − p
p(1) = P{X = 1} = p
where p, 0 ≤ p ≤ 1, is the probability that the trial is a success.

Definition
A random variable X is said to be a Bernoulli random variable if it takes
only the two values 0 and 1 and its probability mass function is given by

p(0) = P{X = 0} = 1 − p
p(1) = P{X = 1} = p

for some p ∈ (0, 1)
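From this p.m.f. the mean and variance of a Bernoulli random variable follow directly; a short check, with the value of p chosen arbitrarily for illustration:

```python
from fractions import Fraction as F

p = F(3, 10)                # hypothetical success probability
bern = {0: 1 - p, 1: p}     # Bernoulli p.m.f.

mean = sum(x * q for x, q in bern.items())                  # = p
var  = sum((x - mean) ** 2 * q for x, q in bern.items())    # = p(1 - p)
assert mean == p and var == p * (1 - p)
```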


If we term heads as success, then tossing a coin which lands on heads
with probability p is a standard example of a Bernoulli random variable

Definition
Suppose now that n independent trials, each of which results in a success
with probability p or in a failure with probability 1 − p, are to be performed.
If X denotes the number of successes that occur in the n trials, then X is
said to be a binomial random variable with parameters (n, p).

Thus, a Bernoulli random variable is just a binomial random variable
with parameters (1, p)



If X is a binomial random variable with parameters (n, p), then the
p.m.f of X is given by

p(k) = C(n, k) p^k (1 − p)^{n−k},   k = 0, 1, 2, . . . , n

By the binomial theorem,

∑_{k=0}^{n} p(k) = ∑_{k=0}^{n} C(n, k) p^k (1 − p)^{n−k} = [p + (1 − p)]^n = 1
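That the binomial p.m.f. sums to 1 can be confirmed exactly with integer arithmetic; the parameters below are an arbitrary choice for illustration:

```python
from math import comb
from fractions import Fraction as F

n, p = 6, F(2, 5)   # hypothetical parameters
# Binomial p.m.f.: p(k) = C(n, k) p^k (1 - p)^(n - k)
pmf = {k: comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)}
assert sum(pmf.values()) == 1   # binomial theorem: [p + (1 - p)]^n = 1
```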



Example: Five fair coins are flipped. If the outcomes are assumed
independent, find the probability mass function of the number of heads
obtained.
Solution: Let X be the number of heads (successes) that appear.
Then X is a binomial random variable with parameters (n = 5, p = 1/2).
Thus, the p.m.f of X is given by

P{X = k} = p(k) = C(5, k) p^k (1 − p)^{5−k},   k = 0, 1, 2, 3, 4, 5

Thus, we get the values of P{X = 0}, P{X = 1}, P{X = 2}, P{X = 3},
P{X = 4}, and P{X = 5} by simply substituting the corresponding values of k.
(Do not leave it like this in the exam!)
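The six substitutions can be carried out in one line (a sketch of the computation the slide asks you to finish by hand):

```python
from math import comb
from fractions import Fraction as F

p = F(1, 2)   # fair coin
# P{X = k} = C(5, k) p^k (1 - p)^(5 - k) for k = 0, ..., 5
pmf = {k: comb(5, k) * p ** k * (1 - p) ** (5 - k) for k in range(6)}
# Gives 1/32, 5/32, 10/32, 10/32, 5/32, 1/32
```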



Properties of Binomial Random Variables
Let X ∼ Bin(n, p), i.e., X is a binomial random variable.

E[X^k] = ∑_{i=0}^{n} i^k C(n, i) p^i (1 − p)^{n−i}
       = ∑_{i=1}^{n} i^k C(n, i) p^i (1 − p)^{n−i}

Exercise: i C(n, i) = n C(n−1, i−1) for i ≥ 1

E[X^k] = np ∑_{i=1}^{n} i^{k−1} C(n−1, i−1) p^{i−1} (1 − p)^{n−i}
       = np ∑_{j=0}^{n−1} (j + 1)^{k−1} C(n−1, j) p^j (1 − p)^{n−1−j}   (by setting j = i − 1)
       = np E[(Y + 1)^{k−1}]

where Y ∼ Bin(n−1, p)
Thus, E[X^k] = np E[(Y + 1)^{k−1}] for k ≥ 1, where Y ∼ Bin(n−1, p)
In particular, when k = 1, E[X] = np E[1] = np
Further, when k = 2, E[X^2] = np E[Y + 1], where Y ∼ Bin(n−1, p)
⟹ E[X^2] = np[E[Y] + 1] = np[(n − 1)p + 1]

Thus, Var(X) = E[X^2] − (E[X])^2
             = np[(n − 1)p + 1] − (np)^2
             = np[(n − 1)p + 1 − np]
             = np(1 − p)

For a binomial random variable X with parameters (n, p), we have

E[X] = np
Var(X) = np(1 − p)
E[X^k] = np E[(Y + 1)^{k−1}], where Y ∼ Bin(n−1, p)
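All three formulas can be verified against the p.m.f. directly; a sketch with arbitrarily chosen parameters:

```python
from math import comb
from fractions import Fraction as F

n, p = 7, F(1, 3)   # hypothetical parameters
pmf = {k: comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)}

EX  = sum(k * q for k, q in pmf.items())
EX2 = sum(k ** 2 * q for k, q in pmf.items())
assert EX == n * p                          # E[X] = np
assert EX2 == n * p * ((n - 1) * p + 1)     # E[X^2] = np[(n-1)p + 1]
assert EX2 - EX ** 2 == n * p * (1 - p)     # Var(X) = np(1 - p)
```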
