
Bias, Variance, and MSE of Estimators

Guy Lebanon September 4, 2010


We assume that we have iid (independent identically distributed) samples $X^{(1)}, \ldots, X^{(n)}$ that follow some unknown distribution. The task of statistics is to estimate properties of the unknown distribution. In this note we focus on estimating a parameter of the distribution, such as the mean or variance. In some cases the parameter completely characterizes the distribution, so estimating it provides an estimate of the whole distribution. In this note we assume that the parameter $\theta$ is a real vector $\theta \in \mathbb{R}^d$. To estimate it, we use an estimator, which is a function of our observations $\hat\theta(x^{(1)}, \ldots, x^{(n)})$. We follow standard practice and omit (in notation only) the dependency of the estimator on the samples, i.e. we write $\hat\theta$. However, note that $\hat\theta = \hat\theta(X^{(1)}, \ldots, X^{(n)})$ is a random variable since it is a function of $n$ random variables.

A desirable property of an estimator is that it is correct on average. That is, over repeated samplings of $n$ samples $X^{(1)}, \ldots, X^{(n)}$, the estimator $\hat\theta(X^{(1)}, \ldots, X^{(n)})$ will have, on average, the correct value. Such estimators are called unbiased.

Definition 1. The bias of $\hat\theta$ is[1] $\mathrm{Bias}(\hat\theta) = E(\hat\theta) - \theta$. If it is 0, the estimator is said to be unbiased.

There are, however, more important performance characterizations for an estimator than just being unbiased. The mean squared error is perhaps the most important of them. It captures the error that the estimator makes. However, since the estimator is a random variable, we need to average over its distribution, thus capturing the average performance over many repeated samplings of $X^{(1)}, \ldots, X^{(n)}$.

Definition 2. The mean squared error (MSE) of an estimator $\hat\theta$ is $E(\|\hat\theta - \theta\|^2) = E\big(\sum_{j=1}^d (\hat\theta_j - \theta_j)^2\big)$.

Theorem 1. $E(\|\hat\theta - \theta\|^2) = \mathrm{trace}(\mathrm{Var}(\hat\theta)) + \|\mathrm{Bias}(\hat\theta)\|^2$.

Note that $\mathrm{Var}(\hat\theta)$ is the covariance matrix of $\hat\theta$, and so its trace is $\sum_{j=1}^d \mathrm{Var}(\hat\theta_j)$.

Proof. Since the MSE equals $E\big(\sum_{j=1}^d (\hat\theta_j - \theta_j)^2\big)$, it is sufficient to prove, for a scalar $\theta$, that $E((\hat\theta - \theta)^2) = \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta)$:

$$
\begin{aligned}
E((\hat\theta - \theta)^2) &= E\big(((\hat\theta - E(\hat\theta)) + (E(\hat\theta) - \theta))^2\big) \\
&= E\big\{(\hat\theta - E(\hat\theta))^2 + (E(\hat\theta) - \theta)^2 + 2(\hat\theta - E(\hat\theta))(E(\hat\theta) - \theta)\big\} \\
&= \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta) + 2\,E\big((\hat\theta - E(\hat\theta))(E(\hat\theta) - \theta)\big) \\
&= \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta) + 2\,E\big(\hat\theta E(\hat\theta) - (E(\hat\theta))^2 - \hat\theta\theta + \theta E(\hat\theta)\big) \\
&= \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta) + 2\big((E(\hat\theta))^2 - (E(\hat\theta))^2 - \theta E(\hat\theta) + \theta E(\hat\theta)\big) \\
&= \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta).
\end{aligned}
$$
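The decomposition in Theorem 1 can also be checked numerically. The following sketch is a hypothetical illustration added here (not part of the original derivation); it assumes NumPy and an arbitrarily chosen normal population, and uses the biased variance estimator $\frac{1}{n}\sum_i (X^{(i)} - \bar X)^2$ as $\hat\theta$, comparing the empirical MSE to the empirical variance plus squared bias over many repeated samplings.

```python
import numpy as np

# Hypothetical illustration: check MSE = Var(theta_hat) + Bias(theta_hat)^2
# for the biased variance estimator (dividing by n rather than n - 1).
rng = np.random.default_rng(0)
n, trials = 20, 200_000
true_var = 4.0                           # theta = Var(X) for X ~ N(0, 4)

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
theta_hat = samples.var(axis=1)          # ddof=0, hence a biased estimator

mse = np.mean((theta_hat - true_var) ** 2)
bias = theta_hat.mean() - true_var
var = theta_hat.var()

print(mse, var + bias ** 2)              # the two values should nearly agree
```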

Since the MSE decomposes into a sum of the squared bias and the variance of the estimator, both quantities are important and need to be as small as possible to achieve good estimation performance. It is common to trade off some increase in bias for a larger decrease in the variance, and vice versa.
[1] Note that here and in the sequel all expectations are with respect to $X^{(1)}, \ldots, X^{(n)}$.
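The trade-off mentioned above can be made concrete with a small simulation. The sketch below is a hypothetical example (not from the note): it compares the unbiased sample mean with a shrinkage estimator $c\bar X$, $c<1$, whose added bias is more than offset by its reduced variance for the chosen parameter values, giving a smaller MSE.

```python
import numpy as np

# Hypothetical example of the bias/variance trade-off: shrinking the sample
# mean toward zero adds bias but reduces variance, lowering the overall MSE
# for these (arbitrarily chosen) parameter values.
rng = np.random.default_rng(1)
theta, sigma, n, trials = 1.0, 2.0, 10, 200_000
c = 0.8                                    # shrinkage factor, c < 1

samples = rng.normal(theta, sigma, size=(trials, n))
mean_est = samples.mean(axis=1)            # unbiased, Var = sigma^2 / n = 0.4
shrunk_est = c * mean_est                  # biased, but smaller variance

print(np.mean((mean_est - theta) ** 2))    # approx. 0.40
print(np.mean((shrunk_est - theta) ** 2))  # approx. 0.30 -- a smaller MSE
```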

Two important special cases are the mean $\hat\theta = \bar{X} = \frac{1}{n}\sum_{i=1}^n X^{(i)}$, which estimates the vector $E(X)$, and $\hat\theta = S^2$, where

$$
S_j^2 = \frac{1}{n-1}\sum_{i=1}^n (X_j^{(i)} - \bar{X}_j)^2, \qquad j = 1, \ldots, d,
$$

which estimates the diagonal of the covariance matrix $\mathrm{Var}(X)$. We show below that both are unbiased and therefore their MSE is simply their variance.

Theorem 2. $\bar{X}$ is an unbiased estimator of $E(X)$ and $S^2$ is an unbiased estimator of the diagonal of the covariance matrix $\mathrm{Var}(X)$.

Proof.
$$
E(\bar{X}) = E\Big(n^{-1}\sum_{i=1}^n X^{(i)}\Big) = \sum_{i=1}^n E(X^{(i)})/n = n E(X^{(1)})/n = E(X).
$$

To prove that $S^2$ is unbiased we show that it is unbiased in the one dimensional case, i.e., $X, S^2$ are scalars (if this holds, we can apply the result to each component separately to get unbiasedness of the vector $S^2$). We first need the following result (recall that below $\bar{X}$ is a scalar)
$$
\sum_{i=1}^n (X^{(i)} - \bar{X})^2 = \sum_{i=1}^n (X^{(i)})^2 - 2\bar{X}\sum_{i=1}^n X^{(i)} + n\bar{X}^2 = \sum_{i=1}^n (X^{(i)})^2 - 2n\bar{X}^2 + n\bar{X}^2 = \sum_{i=1}^n (X^{(i)})^2 - n\bar{X}^2
$$

and therefore
$$
E\Big(\sum_{i=1}^n (X^{(i)} - \bar{X})^2\Big) = E\Big(\sum_{i=1}^n (X^{(i)})^2 - n\bar{X}^2\Big) = \sum_{i=1}^n E((X^{(i)})^2) - nE(\bar{X}^2) = nE((X^{(1)})^2) - nE(\bar{X}^2).
$$

Substituting the expectations $E((X^{(1)})^2) = \mathrm{Var}(X) + (E(X))^2 = \mathrm{Var}(X) + (EX)^2$ and $E(\bar{X}^2) = \mathrm{Var}(\bar{X}) + (E\bar{X})^2 = \mathrm{Var}(X)/n + (EX)^2$ in the above equation, we have

$$
E\Big(\sum_{i=1}^n (X^{(i)} - \bar{X})^2\Big) = n\big(\mathrm{Var}(X) + (EX)^2\big) - n\Big(\frac{\mathrm{Var}(X)}{n} + (EX)^2\Big) = (n-1)\mathrm{Var}(X).
$$

Returning now to the multivariate case, this implies $E(S_j^2) = E\big(\sum_{i=1}^n (X_j^{(i)} - \bar{X}_j)^2/(n-1)\big) = \mathrm{Var}(X_j)$ for all $j$ and therefore $E(S^2) = \mathrm{diag}(\mathrm{Var}(X))$.
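Theorem 2 lends itself to the same kind of empirical check. The sketch below is a hypothetical illustration (the multivariate normal population and its parameters are chosen arbitrarily): it averages $\bar X$ and $S^2$ over many repeated samplings and compares the averages to $E(X)$ and $\mathrm{diag}(\mathrm{Var}(X))$.

```python
import numpy as np

# Hypothetical check of Theorem 2: averaging X_bar and S^2 over many repeated
# samplings should recover E(X) and diag(Var(X)).
rng = np.random.default_rng(2)
n, d, trials = 15, 3, 100_000
mean = np.array([1.0, -2.0, 0.5])          # E(X)
cov = np.diag([1.0, 4.0, 9.0])             # Var(X)

samples = rng.multivariate_normal(mean, cov, size=(trials, n))
x_bar = samples.mean(axis=1)               # shape (trials, d)
s2 = samples.var(axis=1, ddof=1)           # divides by n - 1, per component

print(x_bar.mean(axis=0))                  # approx. [ 1.0, -2.0,  0.5]
print(s2.mean(axis=0))                     # approx. [ 1.0,  4.0,  9.0]
```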

Since the bias is zero, the MSE of $\bar{X}$ as an estimator of $EX$ is $\sum_{j=1}^d \mathrm{Var}(\bar{X}_j) = \sum_{j=1}^d \frac{1}{n}\mathrm{Var}(X_j)$. Thus, if $d$ is fixed and $n \to \infty$, $\mathrm{MSE}(\bar{X}) \to 0$. For $S^2$, the MSE is $\mathrm{trace}(\mathrm{Var}(S^2))$, which may be computed with some tedious algebra (it also decreases to 0 as $n \to \infty$).

Another performance measure for estimators is the probability that the estimator's error falls outside some acceptable error range, $P(\|\hat\theta - \theta\| \geq \epsilon)$. However, to evaluate this quantity we need (i) the pdf of $\hat\theta$, which depends on the pdf of $X$ (typically unknown), and (ii) the true value $\theta$ (also typically unknown). If $\hat\theta$ is unbiased we may obtain the following bound
$$
P(\|\hat\theta - \theta\| \geq \epsilon) = P\Big(\sum_{j=1}^d |\hat\theta_j - \theta_j|^2 \geq \epsilon^2\Big) \leq \sum_{j=1}^d P\big(|\hat\theta_j - \theta_j|^2 \geq \epsilon^2/d\big) = \sum_{j=1}^d P\big(|\hat\theta_j - \theta_j| \geq \epsilon/\sqrt{d}\big) \leq \sum_{j=1}^d \frac{d\,\mathrm{Var}(\hat\theta_j)}{\epsilon^2} = \frac{d}{\epsilon^2}\,\mathrm{trace}(\mathrm{Var}(\hat\theta))
$$

where we used Boole's and Chebyshev's inequalities. This again shows (but in a different way than the bias-variance decomposition of the MSE) that the quality of unbiased estimators is determined by $\mathrm{trace}(\mathrm{Var}(\hat\theta))$.
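As a final sanity check, the bound can be compared against a simulated tail probability. In the sketch below (a hypothetical illustration with $\hat\theta = \bar X$ and an arbitrarily chosen normal population), the simulated $P(\|\hat\theta - \theta\| \geq \epsilon)$ stays below $\frac{d}{\epsilon^2}\mathrm{trace}(\mathrm{Var}(\hat\theta))$, though the bound is typically loose.

```python
import numpy as np

# Hypothetical check of the Boole/Chebyshev bound with theta_hat = X_bar.
rng = np.random.default_rng(3)
n, d, trials, eps = 25, 3, 200_000, 1.0
theta = np.zeros(d)                          # true mean
cov = np.diag([1.0, 2.0, 3.0])               # Var(X); Var(X_bar) = cov / n

samples = rng.multivariate_normal(theta, cov, size=(trials, n))
x_bar = samples.mean(axis=1)

tail_prob = np.mean(np.linalg.norm(x_bar - theta, axis=1) >= eps)
bound = d / eps**2 * np.trace(cov / n)       # (d / eps^2) * trace(Var(X_bar))

print(tail_prob, bound)                      # simulated probability <= bound
```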
