Master of Biostatistics
By
Wudneh Ketema
ID No. 005/08
December 6, 2016
Debre Berhan, Ethiopia
CONTENTS

1 INTRODUCTION
    1.1 Background
    2.1 Point estimators
    2.2 Hypothesis testing
3 Bayesian point-estimation
    4.1 Finite-sample properties
    4.2 Large-sample properties
5 CONCLUSION
Wudneh K.
1 INTRODUCTION

1.1 Background
Suppose X1, ..., Xn are iid with PDF/PMF f(x; θ). The point estimation problem seeks to find a quantity θ̂, called an estimator, depending on the values of X1, ..., Xn, which is a good guess, or estimate, of the unknown θ. The choice of θ̂ depends not only on the data, but on the assumed model and the definition of "good". Initially, it seems obvious what should be meant by good: a θ̂ that is close to θ. But as soon as one remembers that θ itself is unknown, the question of what it means even for θ̂ to be close to θ becomes uncertain.

It would be quite unreasonable to believe that one point estimate θ̂ hits the unknown θ on the nose. Indeed, one should expect that θ̂ will miss θ by some positive amount. Therefore, in addition to a point estimate θ̂, it would be helpful to have some estimate of the amount by which θ̂ will miss θ. The necessary information is encoded in the sampling distribution of θ̂, and we can summarize this by, say, reporting the standard error or variance of θ̂, or with a confidence interval.
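As a concrete sketch (in Python, with an illustrative made-up sample), the sample mean serves as θ̂, and its estimated standard error yields an approximate 95% confidence interval; the data values and the normal critical value 1.96 are assumptions for illustration only:

```python
import math
import statistics

# Hypothetical iid sample; any data would do.
x = [4.9, 5.1, 5.0, 4.7, 5.3, 5.2, 4.8, 5.0]
n = len(x)

theta_hat = statistics.mean(x)           # point estimate of the unknown mean
se = statistics.stdev(x) / math.sqrt(n)  # estimated standard error of the estimator

# Approximate 95% confidence interval (normal critical value 1.96)
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
print(theta_hat, se, ci)
```

Reporting the interval alongside θ̂ conveys how far the estimate can plausibly miss θ.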
2.1 Point estimators
In statistics, point estimation involves the use of sample data to calculate a single value (known
as a statistic) which is to serve as a best guess or best estimate of an unknown (fixed or random)
population parameter.
More formally, it is the application of a point estimator to the data.
In general, point estimation should be contrasted with interval estimation: such interval estimates are typically either confidence intervals, in the case of frequentist inference, or credible intervals, in the case of Bayesian inference. Point estimation can rely on criteria and methods such as:
minimum-variance mean-unbiased estimator (MVUE): minimizes the risk (expected loss) of the squared-error loss function
best linear unbiased estimator (BLUE)
minimum mean squared error (MMSE)
median-unbiased estimator: minimizes the risk of the absolute-error loss function
maximum likelihood (ML)
method of moments, generalized method of moments
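The contrast between two of these methods can be sketched for a Uniform(0, θ) model, where the method of moments and maximum likelihood give genuinely different estimators; the true θ and sample size below are illustrative assumptions:

```python
import random
import statistics

random.seed(1)
theta_true = 10.0
x = [random.uniform(0.0, theta_true) for _ in range(200)]

# Method of moments: E[X] = theta / 2, so solve x_bar = theta_hat / 2.
theta_mom = 2.0 * statistics.mean(x)

# Maximum likelihood: the likelihood is maximized at the sample maximum.
theta_mle = max(x)

print(theta_mom, theta_mle)
```

Both are point estimators of the same θ, yet they use the data differently: the MLE can never exceed the truth here, while the moment estimator can err in either direction.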
2.2 Hypothesis testing
Unlike the point estimation problem, which starts with a vague question like "what is θ?", the hypothesis testing problem starts with a specific question like "is θ equal to θ0?", where θ0 is some specified value. The popular t-test problem is one in this general class. In this case, notice that the goal is somewhat different from that of estimating an unknown parameter. The general setup is to construct a decision rule, depending on data, by which one can decide if θ = θ0 or θ ≠ θ0. Oftentimes this rule consists of taking a point estimate θ̂ and comparing it with the hypothesized value θ0: if θ̂ is too far from θ0, conclude θ ≠ θ0; otherwise, conclude θ = θ0.
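A minimal sketch of this decision rule in Python, using the t statistic with an illustrative critical value rather than an exact Student-t quantile (the sample data are hypothetical):

```python
import math
import statistics

def t_decision(x, theta0, crit=2.0):
    """Decide between theta == theta0 and theta != theta0 via a t statistic.
    `crit` is an illustrative critical value, not an exact quantile."""
    n = len(x)
    t = (statistics.mean(x) - theta0) / (statistics.stdev(x) / math.sqrt(n))
    return abs(t) > crit, t  # True means: conclude theta != theta0

x = [5.3, 5.1, 5.4, 5.2, 5.5, 5.3, 5.2, 5.4]
reject, t = t_decision(x, theta0=5.0)
print(reject, t)
```

Here θ̂ (the sample mean) sits far from θ0 = 5.0 relative to its standard error, so the rule concludes θ ≠ θ0.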
3 Bayesian point-estimation
Bayesian inference is typically based on the posterior distribution. Many Bayesian point estimators are statistics of central tendency of the posterior distribution, e.g., its mean, median, or mode. The posterior mean minimizes the (posterior) risk (expected loss) for a squared-error loss function; in Bayesian estimation, the risk is defined in terms of the posterior distribution, as observed by Gauss. The posterior median minimizes the posterior risk for the absolute-value loss function, as observed by Laplace. The maximum a posteriori (MAP) estimate finds a maximum of the posterior distribution; for a uniform prior probability, the MAP estimator coincides with the maximum-likelihood estimator. The MAP estimator has good asymptotic properties, even for many difficult problems on which the maximum-likelihood estimator has difficulties. For regular problems, where the maximum-likelihood estimator is consistent, the maximum-likelihood estimator ultimately agrees with the MAP estimator. Bayesian estimators are admissible, by Wald's theorem.
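For a Binomial proportion with a Beta prior, the posterior is available in closed form, so these point estimates can be sketched directly; the prior and data below are illustrative assumptions:

```python
# Posterior point estimates for a Binomial proportion with a Beta(a, b) prior.
# The posterior is Beta(a + k, b + n - k); the numbers below are illustrative.
a, b = 1.0, 1.0   # uniform prior
n, k = 20, 14     # 14 successes in 20 trials

a_post, b_post = a + k, b + n - k

post_mean = a_post / (a_post + b_post)           # minimizes squared-error risk
post_map = (a_post - 1) / (a_post + b_post - 2)  # posterior mode (MAP)

# With a uniform prior the MAP estimate equals the MLE k/n.
print(post_mean, post_map)
```

Note how the posterior mean (15/22 ≈ 0.682) is pulled slightly toward the prior, while the MAP estimate under the uniform prior reproduces the MLE k/n = 0.7.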
4.1 Finite-sample properties

The sample mean μ̂ = (1/n) Σ Xi is an estimator of μ, and

σ̂n² = (1/n) Σ (Xi − X̄n)²

is an estimator of σ². In finite samples the latter is biased:

E(σ̂n²) = ((n − 1)/n) σ².
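This finite-sample bias can be checked by simulation; the sketch below assumes normal data with true variance σ² = 4 and a small sample size n = 5, so the average of the 1/n variance estimator should land near ((n − 1)/n)σ² = 3.2 rather than 4:

```python
import random
import statistics

random.seed(0)
n, sigma2 = 5, 4.0   # small sample, true variance 4
reps = 20000

def biased_var(x):
    m = statistics.fmean(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)  # divides by n, not n - 1

avg = statistics.fmean(
    biased_var([random.gauss(0.0, 2.0) for _ in range(n)]) for _ in range(reps)
)
print(avg, (n - 1) / n * sigma2)  # both close to 3.2
```

Dividing by n − 1 instead of n removes this bias, which is why the sample variance is usually defined that way.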
4.2 Large-sample properties

It can be difficult to compute MSE, risk functions, etc. for some estimators, especially when the estimator does not resemble a sample average. Large-sample properties exploit the LLN and the CLT.

We say that Wn is consistent for a parameter θ iff the random sequence Wn converges (in some stochastic sense) to θ. Strong consistency obtains when Wn → θ almost surely; weak consistency obtains when Wn →p θ (convergence in probability).

Consistency: an estimator is consistent if the estimate it constructs is guaranteed to converge to the true parameter value as the quantity of data grows. Consistency is nearly always a desirable property for a statistical estimator.
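A small simulation sketch of consistency, watching the error of the sample mean Wn shrink as n grows; the normal model and the sample sizes are illustrative assumptions:

```python
import random
import statistics

random.seed(42)
theta = 3.0  # true mean of the sampled distribution

# Error |W_n - theta| of the sample mean for growing sample sizes.
errors = {}
for n in (10, 1000, 100000):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    errors[n] = abs(statistics.fmean(sample) - theta)
print(errors)
```

By the LLN the error is of order 1/√n, so each hundredfold increase in n shrinks the typical error by about a factor of ten.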
Variance (and efficiency). It is useful to quantify this notion of reliability using a natural statistical metric: the variance of the estimator, Var(θ̂). All else being equal, an estimator with smaller variance is preferable to one with greater variance. This idea, combined with a bit more simple algebra, quantitatively explains the intuition that more data are better. For the relative-frequency estimator θ̂ = m/n:

Var(θ̂) = Var(m/n) = (1/n²) Var(m).

Since m is binomially distributed, and the variance of the binomial distribution is nθ(1 − θ), we have

Var(θ̂) = θ(1 − θ)/n.

So variance is inversely proportional to the sample size n, which means that relative-frequency estimation is more reliable when used with larger samples.
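The formula can be checked exactly by summing over the binomial PMF; the particular θ and n below are illustrative:

```python
from math import comb

# Exact variance of the relative-frequency estimator m/n, m ~ Binomial(n, theta).
theta, n = 0.3, 25

pmf = [comb(n, m) * theta**m * (1 - theta) ** (n - m) for m in range(n + 1)]
mean = sum(p * m / n for m, p in enumerate(pmf))
var = sum(p * (m / n - mean) ** 2 for m, p in enumerate(pmf))

print(var, theta * (1 - theta) / n)  # the two agree: theta(1 - theta)/n
```

The exact computation also confirms that E(m/n) = θ, i.e., the relative-frequency estimator is unbiased.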
5 CONCLUSION
We conclude from the above explanation that there are two kinds of inference problems: the point estimation problem and the hypothesis testing problem. A point estimator is used to produce a single-value estimate of an unknown parameter. Hypothesis testing is a statistical procedure used to make a decision about a specified parameter value. Point estimators have various desirable properties: unbiasedness, consistency, efficiency, and sufficiency; maximum likelihood is a standard method for constructing them.