
Confidence intervals and hypothesis testing for AP Stats

A. One population
I. Ideal case: normal population, known σ_X
Sample data: We have independent normal variables X_1, ..., X_n sampled from the same normal population with mean µ_X (unknown) and standard deviation σ_X (known).

Task 1: Derive a two-tail confidence interval with significance level α for the sample mean
Remember: The number of tails and significance level depend on the problem considered
Step 1. The sample mean will be standardized, therefore we start with the confidence interval for the standard normal variable z:

P(−z_{α/2} < z < z_{α/2}) = 1 − α.    (1)

Remember: to avoid errors, always graph the areas.
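
As a quick numerical illustration (not part of the original derivation), the critical value z_{α/2} can be computed in Python; SciPy is assumed to be available and α = 0.05 is just an example:

from scipy.stats import norm

alpha = 0.05                        # illustrative significance level
z_crit = norm.ppf(1 - alpha / 2)    # upper alpha/2 critical value of the standard normal
print(z_crit)                       # roughly 1.96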


Step 2. Standardize the sample mean.
Theorem. If X_1, ..., X_n are independent normal, then the sample mean X̄ is normal.

By this theorem the standardized variable z = (X̄ − µ_X̄)/σ_X̄ is standard normal (it is normal as a linear transformation of a normal variable, and it has mean zero and variance 1). We also know that

µ_X̄ = E X̄ = E X = µ_X,    σ_X̄ = √V(X̄) = √(V(X)/n) = σ_X/√n.

Therefore z becomes

z = (X̄ − µ_X)/(σ_X/√n).    (2)
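
If it helps, the theorem and formula (2) can be checked by simulation; the following sketch uses arbitrary illustrative values for µ_X, σ_X and n:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 25                                  # illustrative population mean, sd, sample size
xbar = rng.normal(mu, sigma, size=(10000, n)).mean(axis=1)   # 10000 simulated sample means
z = (xbar - mu) / (sigma / np.sqrt(n))                       # standardize as in (2)
print(z.mean(), z.std())                                     # should be close to 0 and 1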

Step 3. Obtain a confidence interval for the difference X̄ − µ_X.


Plug (2) in (1):

P(−z_{α/2} < (X̄ − µ_X)/(σ_X/√n) < z_{α/2}) = 1 − α.

Multiplying the inequality by σ_X/√n > 0 preserves the inequality signs:

P(−z_{α/2} · σ_X/√n < X̄ − µ_X < z_{α/2} · σ_X/√n) = 1 − α.    (3)

Definition. The margin of error is defined by ME = z_{α/2} · σ_X/√n.

Hence, (3) becomes

P(−ME < X̄ − µ_X < ME) = 1 − α.    (3’)
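
For instance (purely illustrative numbers), with σ_X = 15, n = 36 and α = 0.05 we have z_{α/2} ≈ 1.96, so ME = 1.96 · 15/√36 = 1.96 · 2.5 = 4.9.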

Step 4. Obtain and interpret confidence intervals for the population and sample means.
Question 1 (inference = a conclusion about the population based on sample information). What can be said about µ_X if X̄ and σ_X are known?
Answer 1. From (3’) we have a confidence interval for the population mean:

P(X̄ − ME < µ_X < X̄ + ME) = 1 − α.    (4)
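
A minimal Python sketch of the interval in (4); the sample mean, σ_X, n and α below are made-up values used only for illustration:

from math import sqrt
from scipy.stats import norm

xbar, sigma, n, alpha = 102.3, 15.0, 36, 0.05    # illustrative values
me = norm.ppf(1 - alpha / 2) * sigma / sqrt(n)   # margin of error
print(xbar - me, xbar + me)                      # 95% confidence interval for the population mean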

Question 2 (prediction). If µ_X and σ_X are known, where is X̄ likely to be?


Answer 2. From (3’) we get

P(µ_X − ME < X̄ < µ_X + ME) = 1 − α.    (5)

1st interpretation of (5): We are 100(1 − α)% sure that

µ_X − ME < X̄ < µ_X + ME.    (6)

2nd interpretation of (5): the probability that the distance between X̄ and µ_X is less than ME is high: P(|X̄ − µ_X| < ME) = 1 − α.

3rd interpretation of (5) (using the complement rule): it is not likely that the distance between X̄ and µ_X exceeds ME:

P(|X̄ − µ_X| ≥ ME) = α.    (7)

Task 2: Use the confidence interval for hypothesis testing


Step 1. Formulate the null and alternative hypotheses and choose the level of significance.
We want to test H_0: µ_X = µ_X^0 against H_a: µ_X ≠ µ_X^0 at the level of significance α. For such hypotheses, we should use a two-tail test and the probability of each tail is α/2.
Consequences of sampling variability. X̄ is random. Even though E X̄ = µ_X^0 under the null, the realized value of X̄ may not be equal to µ_X^0 because of sampling variability. It may be close to µ_X^0 in the sense that

|X̄ − µ_X^0| < ME    (8)

or far from µ_X^0 in the sense that

|X̄ − µ_X^0| ≥ ME.    (9)

Step 2. Discuss the cases consistent with H_0 and H_a and formulate the decision rule.
(A) Suppose the realized statistic satisfies (8). By (5) the event (8) has high probability. Since this is a likely event under the null, we don’t have sufficient evidence against the null and cannot reject it.
(B) From (7) we see that the event (9) is not likely to occur under the null. Hence, in case (9) we should reject the null. Our decision may be wrong (the confidence interval is derived under the null, and rejecting a true null is a Type I error), and (7) gives the probability of this error:

P(Type I error) = α.    (10)

Remember: the statistical theory is good only for evaluating the probability of a Type I error.
Decision rule. In case (8) we fail to reject H_0. In case (9) we reject H_0, and we know (10).

Decision rule (alternative formulation). The statistic z = (X̄ − µ_X^0)/(σ_X/√n) is called the z-score. In case |z| < z_{α/2} we fail to reject the null. In case |z| ≥ z_{α/2} we reject the null.
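
A minimal sketch of this decision rule in Python; the null value µ_X^0, the sample results and α are hypothetical:

from math import sqrt
from scipy.stats import norm

mu0, xbar, sigma, n, alpha = 100.0, 102.3, 15.0, 36, 0.05   # illustrative values
z = (xbar - mu0) / (sigma / sqrt(n))                        # z-score
z_crit = norm.ppf(1 - alpha / 2)
print("reject H0" if abs(z) >= z_crit else "fail to reject H0")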
Ex. 8.13 (confidence interval)
Ex. 10.11 (hypothesis testing)

Task 3: Use p-values instead of other statistics


Suppose (9) is true with some α. The value of X̄ may be so far from µ_X^0 that (9) would be true with a lower α_1 < α. Then we would be able to reject the null at this lower α_1, and by (10) the probability of a Type I error would be lower (with the same data). This prompts us to look for the least possible level of significance satisfying (9).
Definition. The p-value is the lowest level of significance at which it is still possible to reject the null.
Alternatively, the p-value is the least α at which (9) is true. Mathematically, take the realized X̄ and define p by P(|X̄ − µ_X^0| < ME) = 1 − p.

Interpretation of p-value. At every α ≥ p the null is rejected. At every α < p we fail to reject
the null.
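
In the z-score formulation, the two-tail p-value can be computed directly; the z value below is the one from the hypothetical example above:

from scipy.stats import norm

z = 0.92                               # hypothetical realized z-score
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-tail p-value
print(p_value)                         # about 0.36, so we fail to reject at alpha = 0.05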
Remember: for each statistic, you can define its own p-value following the same logic.

II. Less than ideal case: normal population, unknown σ_X
Sample data: Independent normal variables X_1, ..., X_n sampled from the same normal population with mean µ_X (unknown) and standard deviation σ_X (also unknown).
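
Here σ_X is replaced by the sample standard deviation s, and the t distribution with n − 1 degrees of freedom replaces the standard normal. A minimal sketch with a made-up small sample:

import numpy as np
from scipy.stats import t

x = np.array([9.8, 10.4, 10.1, 9.7, 10.6, 10.0])                  # illustrative sample
n, alpha = len(x), 0.05
me = t.ppf(1 - alpha / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)  # t-based margin of error
print(x.mean() - me, x.mean() + me)                               # 95% t-based CI for the mean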

Ex. Give the definition and state the properties of t distribution.
Ex. 8.24 (confidence interval)
Ex. 10.16 (hypothesis testing, one-tail test, p-value)

III. Even worse case: non-normal population, unknown σ_X, large sample
Sample data: Independent variables X_1, ..., X_n sampled from the same non-normal population with mean µ_X (unknown) and standard deviation σ_X (also unknown).
Remember: In Case I use the z statistic, in Case II use the t statistic (in these two cases the sample size doesn’t matter); in Case III assume a large sample size, apply the CLT and use the z statistic.
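
A minimal sketch of the large-sample (CLT-based) interval, where the sample standard deviation replaces σ_X and the z critical value is used; the non-normal data are generated only for illustration:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=200)                # illustrative non-normal sample, large n
me = norm.ppf(0.975) * x.std(ddof=1) / np.sqrt(len(x))  # approximate margin of error
print(x.mean() - me, x.mean() + me)                     # approximate 95% CI for the population mean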

IV. Special case of III: Bernoulli population, unknown p, large sample
Sample data: Independent variables X_1, ..., X_n sampled from the same Bernoulli population with the population proportion p unknown.
Remember: because n is large, the z-score is used.
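
A minimal sketch of the large-sample proportion interval and test; the counts and the null value p_0 are hypothetical:

from math import sqrt
from scipy.stats import norm

successes, n, p0, alpha = 56, 100, 0.5, 0.05                  # illustrative counts and null proportion
p_hat = successes / n
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)                    # z-score for H0: p = p0
me = norm.ppf(1 - alpha / 2) * sqrt(p_hat * (1 - p_hat) / n)  # margin of error for the CI
print(p_hat - me, p_hat + me)                                 # approximate 95% CI for p
print("reject H0" if abs(z) >= norm.ppf(1 - alpha / 2) else "fail to reject H0")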
Ex. 8.24 (confidence interval)
Ex. 10.30 (hypothesis testing, one-tail test)

B. Two populations: comparison of means


V. Matched pairs: data come in pairs, unknown σ_D
Data come in pairs (X_1, Y_1), ..., (X_n, Y_n); the pairs are independent and normal, so the t statistic for the differences D_i = X_i − Y_i can be used.
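
A minimal sketch of the matched-pairs t test on the differences D_i; the paired data are invented for illustration (scipy.stats.ttest_rel performs the same computation as a one-sample t test on the differences):

import numpy as np
from scipy.stats import ttest_rel

x = np.array([12.1, 11.4, 13.0, 12.7, 11.9])   # illustrative measurements, first member of each pair
y = np.array([11.8, 11.5, 12.2, 12.0, 11.6])   # illustrative measurements, second member of each pair
res = ttest_rel(x, y)                          # two-tail test of H0: mu_D = 0
print(res.statistic, res.pvalue)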
Ex. 9.4 (confidence interval)
Ex. 11.3 (hypothesis testing, two-tail test)

VI. Two independent samples, normal populations, unknown variances
Two samples of different sizes (X_1, ..., X_{n_X}) and (Y_1, ..., Y_{n_Y}); each sample comes from a normal population, there is independence within and between the samples, and differences D_i = X_i − Y_i don’t make sense because the observations are not paired.
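
A minimal sketch of the two-sample interval for µ_X − µ_Y using the conservative degrees of freedom df = min(n_X, n_Y) − 1; the data are invented for illustration:

import numpy as np
from scipy.stats import t

x = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.2])                   # illustrative sample from population X
y = np.array([4.4, 4.9, 4.6, 4.2, 4.8])                        # illustrative sample from population Y
se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))  # standard error of the difference
df = min(len(x), len(y)) - 1                                   # conservative degrees of freedom
me = t.ppf(0.975, df=df) * se
diff = x.mean() - y.mean()
print(diff - me, diff + me)                                    # 95% CI for mu_X - mu_Y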

Ex. 9.11 (confidence interval; use the conservative formula for the degrees of freedom)
Ex. 11.4 (hypothesis testing, one-tail test)
