Random Variable
! Definition: a numerical characterization of the outcome of a random event
! Examples:
1) Number on a rolled die
2) Temperature at a specified time of day
3) Stock market index at close
4) Height of a wheel going over a rocky road
Random Variable
! Non-examples (outcomes that are not numerical):
1) Heads or tails on a coin flip
2) Red or black ball drawn from an urn
Discrete RVs: die roll, stock prices
Continuous RVs: temperature, wheel height
The PDF gives the probability that $X$ falls in a small interval:

$$P(x_o < X \le x_o + \Delta) = \int_{x_o}^{x_o + \Delta} p_X(x)\,dx$$
Gaussian PDF:

$$p_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-m)^2/2\sigma^2}$$

$\sigma$ = standard deviation of RV $X$ (Note: $\sigma > 0$)
$\sigma^2$ = variance of RV $X$

For the zero-mean case ($m = 0$):

$$p_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2}$$
[Figure: two Gaussian PDFs $p_X(x)$ centered at $x = m$. Small $\sigma$: small variability (small uncertainty). Large $\sigma$: large variability (large uncertainty).]
When the Central Limit Theorem applies (the quantity of interest is a sum of many small, independent random effects), the result is approximately Gaussian; this is why noise is so often modeled as Gaussian noise.
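As a quick numerical sketch of the CLT statement above (an illustration, not part of the original slides): summing many independent Uniform(0,1) variables and standardizing the sum produces something very close to a standard Gaussian.

```python
# CLT sketch: standardized sums of iid Uniform(0,1) variables are
# approximately Gaussian. Parameter choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_trials = 50, 100_000

# Each row: one realization of a sum of 50 iid Uniform(0,1) variables.
sums = rng.random((n_trials, n_terms)).sum(axis=1)

# Standardize using the exact mean and variance of the sum:
# E{sum} = 50 * 1/2, var{sum} = 50 * 1/12.
z = (sums - n_terms * 0.5) / np.sqrt(n_terms / 12.0)

# Empirical P(Z <= 1) should be close to the Gaussian value, about 0.8413.
p_emp = np.mean(z <= 1.0)
```

The empirical mean, standard deviation, and tail probability of `z` all match the standard Gaussian closely even though the underlying variables are uniform.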
Joint PDF of two RVs: $p_{XY}(x, y)$
Conditional PDF ("slice and normalize"):

$$p_{Y|X}(y|x) = \begin{cases} \dfrac{p_{XY}(x,y)}{p_X(x)}, & p_X(x) \neq 0 \\ 0, & \text{otherwise} \end{cases} \qquad (x \text{ is held fixed})$$

$$p_{X|Y}(x|y) = \begin{cases} \dfrac{p_{XY}(x,y)}{p_Y(y)}, & p_Y(y) \neq 0 \\ 0, & \text{otherwise} \end{cases} \qquad (y \text{ is held fixed})$$

Conditioning slices the joint PDF at the fixed value and normalizes the slice to unit area.

[Figure: graph of a conditional PDF, from B. P. Lathi's book, Modern Digital & Analog Communication Systems.]
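The "slice and normalize" idea is easiest to see in the discrete case. A minimal sketch, with a made-up 2x2 joint PMF table:

```python
# "Slice and normalize" on a discrete joint PMF (illustrative values).
import numpy as np

p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.40]])   # rows index x values, columns index y values

p_x = p_xy.sum(axis=1)            # marginal p_X(x): sum out y

# Condition on X = x0: take the x0-th row (a slice of the joint table)
# and divide by p_X(x0) so the slice sums to 1.
x0 = 0
p_y_given_x0 = p_xy[x0] / p_x[x0]
```

Here the slice `[0.10, 0.20]` has total mass 0.30, so normalizing gives the conditional PMF `[1/3, 2/3]`.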
Independent RVs
Independence should be thought of as saying that neither RV impacts the other statistically; thus, the values that one will likely take should be irrelevant to the value that the other has taken.
In other words: conditioning doesn't change the PDF!

$$p_{Y|X=x}(y|x) = \frac{p_{XY}(x,y)}{p_X(x)} = p_Y(y)$$

$$p_{X|Y=y}(x|y) = \frac{p_{XY}(x,y)}{p_Y(y)} = p_X(x)$$
[Figure: contours of $p_{XY}(x,y)$ in the $(x, y)$ plane for three cases: independent (zero mean), independent (non-zero mean), and dependent.]
An Independent RV Result
RVs $X$ & $Y$ are independent if:

$$p_{XY}(x,y) = p_X(x)\, p_Y(y)$$

Here's why: when the joint PDF factors,

$$p_{Y|X=x}(y|x) = \frac{p_{XY}(x,y)}{p_X(x)} = \frac{p_X(x)\, p_Y(y)}{p_X(x)} = p_Y(y)$$
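The factorization test above can be checked numerically for discrete RVs. A sketch with made-up PMFs: one joint built as an outer product (independent by construction) and one that is not.

```python
# Factorization test for independence: p_XY(x,y) == p_X(x) * p_Y(y)?
import numpy as np

p_x = np.array([0.2, 0.8])
p_y = np.array([0.5, 0.3, 0.2])
p_xy = np.outer(p_x, p_y)         # joint = product of marginals

# Recover marginals from the joint and test the factorization.
independent = np.allclose(
    p_xy, np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0)))

# A dependent counterexample: both marginals are [0.5, 0.5], so the
# product of marginals is 0.25 everywhere, which does not match.
dep_joint = np.array([[0.4, 0.1],
                      [0.1, 0.4]])
factorizes = np.allclose(
    dep_joint, np.outer(dep_joint.sum(axis=1), dep_joint.sum(axis=0)))
```

`independent` comes out `True` and `factorizes` comes out `False`, matching the definition.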
Characterizing RVs
! Data Analysis View: the average of $N$ scores is

$$\text{Avg} = \frac{1}{N} \sum_{i=1}^{N} x_i$$

Grouping equal scores, with $N_i$ = # of scores of value $V_i$ and $N = \sum_{i=1}^{n} N_i$ (total # of scores):

$$\text{Avg} = \sum_{i=1}^{n} V_i \frac{N_i}{N}$$

This is called the Data Analysis View, but it motivates the Data Modeling View: as $N$ grows, the relative frequency $N_i/N$ behaves like $P(X = V_i)$ — the bridge from Statistics to Probability.
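The grouped form of the average can be verified on any data set. A minimal sketch with made-up die-like scores:

```python
# Data Analysis View: sum_i V_i * (N_i / N) equals the ordinary mean.
import numpy as np

scores = np.array([1, 2, 2, 3, 3, 3, 6])   # illustrative data

values, counts = np.unique(scores, return_counts=True)  # V_i, N_i
N = counts.sum()                                        # total # of scores

avg_grouped = (values * counts).sum() / N   # sum V_i * (N_i / N)
avg_direct = scores.mean()                  # (1/N) * sum x_i
```

Both computations give the same number (20/7 here); the grouped form is what turns into $\sum_i V_i\, P(X = V_i)$ when relative frequencies are replaced by probabilities.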
Data Modeling
Discrete RV (Probability Function):

$$E\{X\} = \sum_{i=1}^{n} x_i P_X(x_i)$$

Continuous RV (PDF):

$$E\{X\} = \int_{-\infty}^{\infty} x\, p_X(x)\, dx$$

Notation: $E\{X\} = \bar{X}$
Probability Theory
Given a PDF model, describe how the data will likely behave:

$$E\{X\} = \int_{-\infty}^{\infty} x\, p_X(x)\, dx$$

Here $x$ is a dummy variable of integration — there is no DATA here! The PDF models how the data will likely behave. The Law of Large Numbers connects the model back to data:

$$\text{Avg} = \frac{1}{N} \sum_{i=1}^{N} x_i \;\longrightarrow\; E\{X\}$$
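A quick Law of Large Numbers sketch (illustrative; the Exponential model and sample size are assumptions): data drawn from a PDF model has a sample average that approaches the model's $E\{X\}$.

```python
# LLN sketch: the sample average approaches the model's E{X}.
# Model: Exponential with scale 1, so E{X} = 1.
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=1.0, size=200_000)

avg = data.mean()   # (1/N) * sum x_i, which should be near E{X} = 1
```

With 200,000 samples the average lands within about 0.01 of the theoretical mean; the PDF predicted this behavior before any data existed.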
Variance of RV
There are similar Data vs. Theory views here, but let's go right to the theory:

$$\sigma_X^2 = E\{(X - \bar{X})^2\} = \int_{-\infty}^{\infty} (x - \bar{X})^2\, p_X(x)\, dx$$

For a zero-mean RV this reduces to $\sigma_X^2 = E\{X^2\} = \int x^2\, p_X(x)\, dx$.
Covariance: Data View
Sample covariance of paired data $(x_i, y_i)$:

$$C_{xy} = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})$$

[Figure: three scatter plots of $y$ vs. $x$ illustrating the three cases below.]
! Positive correlation ("best friends"), e.g., GPA & starting salary
! Zero correlation, i.e., uncorrelated ("complete strangers"), e.g., height & $ in pocket
! Negative correlation ("worst enemies"), e.g., student loans & parents' salary
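The three cases above can be reproduced with synthetic data (variable names and the noise model are illustrative choices, not from the slides):

```python
# Sample covariance C_xy = (1/N) * sum (x_i - xbar)(y_i - ybar)
# for positively correlated, uncorrelated, and negatively
# correlated synthetic data.
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
x = rng.standard_normal(N)

y_pos = x + 0.5 * rng.standard_normal(N)     # "best friends"
y_zero = rng.standard_normal(N)              # "complete strangers"
y_neg = -x + 0.5 * rng.standard_normal(N)    # "worst enemies"

def sample_cov(a, b):
    """(1/N) * sum of products of deviations from the means."""
    return np.mean((a - a.mean()) * (b - b.mean()))

c_pos = sample_cov(x, y_pos)     # near +1 (cov(x, x + noise) = var(x))
c_zero = sample_cov(x, y_zero)   # near 0
c_neg = sample_cov(x, y_neg)     # near -1
```

The sign of the sample covariance matches the scatter-plot intuition: tilted up, round cloud, tilted down.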
Covariance of RVs $X$ and $Y$:

$$\sigma_{XY} = E\{(X - \bar{X})(Y - \bar{Y})\} = \int\!\!\int (x - \bar{X})(y - \bar{Y})\, p_{XY}(x,y)\, dx\, dy$$

Expanding gives $\sigma_{XY} = E\{XY\} - \bar{X}\bar{Y}$. Note that $\sigma_{XX} = \sigma_X^2$ and $\sigma_{YY} = \sigma_Y^2$, and that $\sigma_{XY} = 0$ when $X$ and $Y$ are uncorrelated.
If $\sigma_{XY} = E\{(X - \bar{X})(Y - \bar{Y})\} = 0$, then we say that $X$ and $Y$ are uncorrelated. Equivalently,

$$E\{XY\} = \bar{X}\bar{Y}$$

where $E\{XY\}$ is called the correlation of $X$ & $Y$.

Independence implies $X$ & $Y$ are uncorrelated: the PDFs separate, so the means separate:

$$E\{XY\} = \int\!\!\int xy\, p_{XY}(x,y)\, dx\, dy = \int x\, p_X(x)\, dx \int y\, p_Y(y)\, dy = E\{X\}E\{Y\}$$

(The converse does not hold in general: uncorrelated does not imply independent.)
Covariance: $\sigma_{XY} = E\{(X - \bar{X})(Y - \bar{Y})\}$
Correlation: $E\{XY\}$
Correlation Coefficient:

$$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}, \qquad -1 \le \rho_{XY} \le 1$$
Correlation Matrix:

$$\mathbf{R}_x = E\{\mathbf{x}\mathbf{x}^T\} = \begin{bmatrix} E\{X_1 X_1\} & E\{X_1 X_2\} & \cdots & E\{X_1 X_N\} \\ E\{X_2 X_1\} & E\{X_2 X_2\} & \cdots & E\{X_2 X_N\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{X_N X_1\} & E\{X_N X_2\} & \cdots & E\{X_N X_N\} \end{bmatrix}$$

Covariance Matrix:

$$\mathbf{C}_x = E\{(\mathbf{x} - \bar{\mathbf{x}})(\mathbf{x} - \bar{\mathbf{x}})^T\}$$
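The covariance matrix definition translates directly into code: center the sample vectors, then average the outer products. A sketch with an assumed 2-D Gaussian model:

```python
# C_x = E{(x - xbar)(x - xbar)^T}, estimated by averaging outer
# products of centered sample vectors (illustrative 2-D Gaussian data).
import numpy as np

rng = np.random.default_rng(3)
true_cov = np.array([[2.0, 0.6],
                     [0.6, 1.0]])
samples = rng.multivariate_normal(mean=[0.0, 1.0], cov=true_cov,
                                  size=200_000)   # each row is one x^T

centered = samples - samples.mean(axis=0)         # subtract xbar

# (1/N) * sum of outer products, written as one matrix product.
C = centered.T @ centered / len(samples)
```

The estimate `C` is symmetric by construction and recovers the generating covariance to within sampling error; the diagonal entries are the variances $\sigma_{X_i}^2$ and the off-diagonals are the covariances $\sigma_{X_i X_j}$.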
Useful Expectation Properties

$$E\{aX\} = aE\{X\}$$

$$E\{f(X)\} = \int_{-\infty}^{\infty} f(x)\, p_X(x)\, dx$$

$$\operatorname{var}\{aX\} = a^2 \sigma_X^2$$

$$\operatorname{var}\{X + Y\} = \begin{cases} \sigma_X^2 + \sigma_Y^2 + 2\sigma_{XY}, & \text{in general} \\ \sigma_X^2 + \sigma_Y^2, & \text{if } X \,\&\, Y \text{ are uncorrelated} \end{cases}$$

Derivation of the variance-of-sum rule, with $\tilde{X} = X - \bar{X}$ and $\tilde{Y} = Y - \bar{Y}$:

$$\operatorname{var}\{X + Y\} = E\{(\tilde{X} + \tilde{Y})^2\} = E\{\tilde{X}^2\} + E\{\tilde{Y}^2\} + 2E\{\tilde{X}\tilde{Y}\} = \sigma_X^2 + \sigma_Y^2 + 2\sigma_{XY}$$
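The variance-of-sum identity can be checked numerically on correlated data (a sketch with an assumed linear dependence between the variables):

```python
# Check var{X + Y} = sigma_X^2 + sigma_Y^2 + 2*sigma_XY on sample moments.
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
x = rng.standard_normal(N)
y = 0.8 * x + rng.standard_normal(N)   # positively correlated with x

var_sum = np.var(x + y)                # left-hand side

# Right-hand side from the same sample moments (all with 1/N weighting).
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rhs = np.var(x) + np.var(y) + 2 * cov_xy
```

For sample moments the identity holds exactly (up to floating-point roundoff), and because `cov_xy > 0` here, `var_sum` exceeds the uncorrelated value `np.var(x) + np.var(y)`.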