Author: Bernat Gabor
Contents

Data categorization
    Quantitative variables
    Categorical variables
Sampling
Observational studies
Experiments
Introduction to Probability
    1 The need for probability
    2 Probability basics
    3 Probability distributions
    4 Long running averages
    5 Sampling distribution
Confidence interval
    1 Confidence intervals with proportions
    2 Sample size for estimating a proportion
    3 Confidence intervals for means
    4 Robustness for confidence intervals
Tests
Matched pairs
Comparing means
Linear Regression
    1 Regression Coefficients, Residuals and Variances
    2 Regression Inference and Limitations
    3 Residual Analysis and Transformations
Formulate
Design
Preliminary analysis
Formal analysis
Data categorization
Quantitative variables
One way of making sense of a quantitative variable is to use the five-number summary.
Given a collection of observations we can calculate the:
Minimum is the lowest observation value.
Maximum is the highest observation value.
Median is the center observation, the halfway point. To find it you'll need to sort
the observations, and then take the observation in the middle position.
First quartile is the observation value one quarter of the way through the sorted observations.
Third quartile is the observation value three quarters of the way through the sorted observations.
A graphical representation of these five values is possible via the boxplot, as shown
in figure 1. On the boxplot the whiskers show the minimum and the maximum
values.
[Figure 1: Boxplot, showing the minimum, 1st quartile, mean, 3rd quartile and maximum, with the interquartile range spanning the box.]
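As a quick illustration (a minimal sketch using base R and a made-up vector of observations, not data from the original notes), the five-number summary and a boxplot can be obtained as follows:

obs <- c(52, 57, 61, 64, 66, 68, 70, 71, 73, 76, 79)   # hypothetical observations
fivenum(obs)              # minimum, 1st quartile, median, 3rd quartile, maximum
summary(obs)              # the same five numbers plus the mean
boxplot(obs, range = 0)   # whiskers drawn at the minimum and maximum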
2.1 Modified boxplots
In a modified boxplot the lower whisker extends only down to the lower fence, and the upper whisker only up to the upper fence. Observations smaller than the lower fence, or larger than the upper fence, are drawn as individual circles on the plot, as shown in figure 2.
[Figure 2: A modified boxplot, with the whiskers ending at the lower and upper fences, the 1st and 3rd quartiles bounding the interquartile range, the mean marked inside the box, and the minimum and maximum drawn as individual points beyond the fences.]
2.2 Mean
$$\text{mean} = \bar{x} = \frac{\text{sum of the data values}}{\text{number of data points}} = \frac{\sum_{i=1}^{n} x_i}{n}$$
2.3 Spread of the data
The range of the data is the difference between the maximum and the minimum.
This is not a robust measurement; the IQR, in contrast, is robust. The deviation of
observation $i$ is $x_i - \bar{x}$. A good summary of the spread of the whole
data is the variance:
$$\text{variance} = s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}$$
Note that we divide by one less than the count of observation points. An
intuitive explanation for this is that the first observation does not tell us anything
about deviation. The standard deviation (noted $s$) is the square root of
this ($s = \sqrt{\text{variance}}$), and shows the dispersion of a set of data from its mean.
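Reusing the hypothetical obs vector from the sketch above, the spread measures are directly available in R (var and sd already divide by n - 1):

max(obs) - min(obs)   # range
IQR(obs)              # interquartile range
var(obs)              # variance, computed with the n - 1 divisor
sd(obs)               # standard deviation, the square root of the variance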
2.4 Distribution
[Figure 3: Histogram of the observations, with the frequency of each bin on the vertical axis.]
The distribution is the pattern of values in the data, showing their frequency
of occurrence relative to each other. The histogram is a good way to show this
graphically; you can see an example in figure 3.
Its key part is the number of bins used, as observations must be separated into
mutually exclusive and exhaustive bins. Cutpoints define where the bins start and
where they end. Each bin has its own frequency, the number of observations in it.
The largest bins define the peaks or modes. If a variable has a single peak we call
it unimodal, bimodal for two peaks, and multimodal for more than two peaks.
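Sticking with the same hypothetical obs vector, a histogram is drawn with hist; the breaks argument controls the cutpoints and therefore the number of bins:

hist(obs)                                 # let R choose the bins
hist(obs, breaks = seq(50, 80, by = 5))   # explicit cutpoints every 5 units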
2.5 Empirical rule
The empirical rule (also known as the three-sigma rule) states that for a normal distribution
68% of the data is within one standard deviation of the mean value, 95% is within
two standard deviations, and 99.7% is within three standard deviations.
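The three percentages can be checked numerically against the standard normal distribution; a minimal sketch:

pnorm(1) - pnorm(-1)   # about 0.683, within one standard deviation
pnorm(2) - pnorm(-2)   # about 0.954, within two standard deviations
pnorm(3) - pnorm(-3)   # about 0.997, within three standard deviations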
Categorical variables
Categorical variables are not represented by numbers, so all of the earlier statistics
no longer make sense. What does make sense is the frequency of the categories,
which is graphically represented either by a bar chart or a pie chart.
Figure 5 shows an example of this. In the case of bar charts we may choose to
normalize the frequency by dividing it by the total number of observations.
[Figure 5: Bar chart of the normalized region frequencies and a pie chart of the same categories (Americas, M.E&N.Afr, S.Asia, SS.Africa, ...).]
1.1 In the R language
In the R language we can draw a separate boxplot per category to make the comparison. To separate the categories we can use the split function, and finally use
non-modified boxplots (range set to 0) to draw them, as seen in figure 6:
lifedata = read.table("LifeExpRegion.txt")              # country, life expectancy, region
colnames(lifedata) = c("Country", "LifeExp", "Region")
attach(lifedata)
lifedata[Region == "EAP", ]                             # rows for a single region
lifesplit = split(lifedata, Region)                     # one data frame per region
lifeEAP = lifedata[Region == "EAP", ]
lifeSSA = lifedata[Region == "SSA", ]
boxplot(lifeEAP[, 2], lifeSSA[, 2], range = 0, border = rainbow(2),
        names = c("EAP", "SSA"), main = "Life Expectancies: Box Plot")
boxplot(LifeExp ~ Region, range = 0, border = rainbow(6),
        main = "Life Expectancies: Box Plot (all 6 regions)")
[Figure 6: Box plots of life expectancy (roughly 50 to 80 years) for the EAP and SSA regions, and for all six regions (Amer, EAP, EuCA, MENA, SAs, SSA).]
For instance, suppose we observe two variables: the gender of persons and their body-weight category. One question we can ask is: is the count of overweight females the same as that of overweight males?
2.1 Distributions
2.2 Categorical values in R
Categorical variables read into R are always sorted alphabetically, and therefore
any statistics about them will be displayed in that order. However, sometimes there
is a better order for these variables. In this case we can use the factor function and
its levels parameter to set a different order for the categories:
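The code that originally followed here did not survive extraction; the sketch below, assuming a character vector BMI with the four categories, shows the idea:

# assumed: BMI is a vector of "underweight"/"normal"/"overweight"/"obese" labels
BMI <- factor(BMI, levels = c("underweight", "normal", "overweight", "obese"))
freqBMI    <- table(BMI)             # absolute frequencies, in the chosen order
relfreqBMI <- freqBMI / length(BMI)  # relative frequencies, printed below
relfreqBMI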
underweight      normal  overweight       obese
     0.1850      0.5625      0.2025      0.0500
We can even combine the absolute and relative frequencies in a single table:
cbind(freqBMI, relfreqBMI)
To get the joint and the conditional distributions for two categorical variables we
need to use the CrossTable function from the gmodels library.
library(gmodels)
joint = CrossTable(BMI, Sex, prop.chisq=FALSE)
   Cell Contents                        # legend for the table below
|-------------------------|
|                       N |
|           N / Row Total |
|           N / Col Total |
|         N / Table Total |
|-------------------------|

Total Observations in Table:  400

             | Sex
         BMI |      Male |    Female | Row Total |
-------------|-----------|-----------|-----------|
 underweight |        46 |        28 |        74 |
             |     0.622 |     0.378 |     0.185 |
             |     0.164 |     0.235 |           |
             |     0.115 |     0.070 |           |
-------------|-----------|-----------|-----------|
      normal |       166 |        59 |       225 |
             |     0.738 |     0.262 |     0.562 |
             |     0.591 |     0.496 |           |
             |     0.415 |     0.147 |           |
-------------|-----------|-----------|-----------|
  overweight |        59 |        22 |        81 |
             |     0.728 |     0.272 |     0.203 |
             |     0.210 |     0.185 |           |
             |     0.147 |     0.055 |           |
-------------|-----------|-----------|-----------|
       obese |        10 |        10 |        20 |
             |     0.500 |     0.500 |     0.050 |
             |     0.036 |     0.084 |           |
             |     0.025 |     0.025 |           |
-------------|-----------|-----------|-----------|
Column Total |       281 |       119 |       400 |
             |     0.703 |     0.297 |           |
-------------|-----------|-----------|-----------|
One approach is to convert one (or both) of the quantitative variables into a categorical
variable and then just use the methods already seen. One way to create good
groups is to use the quartile breakdown. However, this does not use the full
information of the quantitative variables.
Another way is to use a scatter plot (that is, to use the quantitative variable
pairs as points in space). On this we can use regression techniques to fit a line
to the points, effectively finding the relationship between the variables (positive correlation
means a rising line).
A numerical representation of this is the correlation. Let there be two variables
given by the series $x_1, x_2, \dots, x_n$ and $y_1, y_2, \dots, y_n$; the correlation is calculated
as:
$$\text{correlation} = R = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \sum_{i=1}^{n}(y_i - \bar{y})^2}}$$
Correlation values are between $[-1, 1]$, where $1$ is a perfect positive linear relationship and $-1$ a
perfect negative one. If the value is positive, when one variable increases the other tends to
follow. However, note that this only captures the linear aspects of the relationship.
3.1 In the R language
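The R code for this section did not survive extraction either; a minimal sketch, assuming two numeric vectors x and y holding the paired observations, would be:

cor(x, y)          # the correlation coefficient R
plot(x, y)         # scatter plot of the pairs
abline(lm(y ~ x))  # add the least-squares line of best fit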
[Figure: Scatter plot of the two quantitative variables, with the vertical axis running roughly from 50 to 80 and the horizontal axis from 20000 to 80000.]
Sampling
The goal of statistics is to make rational decisions or conclusions based on the incomplete information we have in our data. This process is known as statistical
inference. The question is: if we see something in our data (like a relationship
between two variables), is it due to chance or to a real relationship? If it is not due to
chance, then what broader conclusions can we make: can we generalize them to a larger
group, do they support a theoretical model? In this process the data collection
has a major significance.
We collect data from the real world; however, our scientific and statistical models
are part of a theoretical world.
population: the group we are interested in making conclusions about.
census: a collection of data on the entire population. This would be the best,
however it is impractical due to time and cost; or it is outright
impossible if by observing an item we would destroy it. Therefore, in
practice we sample the population and infer conclusions from the sample
group.
statistic: a value calculated from our observed data. It estimates a feature of
the theoretical world.
parameter: a feature of the theoretical world. Statistics are used to estimate
its value. In order to get a good estimate our sampling needs to be
representative.
randomisation: the key to selecting representative samples. This ensures that we
do not over- or under-sample any part of the population.
Methods for random sampling:
Simple Random Sampling (SRS): each possible sample of size n from the population is equally likely to be the sample that is chosen.
A practical example of this is drawing balls from a hat.
Stratified sampling: divide the population into non-overlapping subgroups called
strata and choose an SRS within each subgroup. Provinces and states are
practical instances of strata. This performs better when we want to
compare strata, or it can allow us to better see traits that are characteristic of only some of the strata (which would otherwise be hidden in
the whole sample space).
Observational studies
Experiments
Experiments are the gold standard: they allow for making causal conclusions. Again we have the response variable (also known as the dependent variable, since it depends on other variables).
3. Replication: not repeating the experiment, but applying each treatment to more than one experimental unit. This allows us to measure the variability in the measurement of the response (and in turn also ensures that
treatment groups are more comparable with respect to extraneous factors, since these now have the
opportunity to differ between groups).
Experiments also have a control group. This is used to make comparisons with
a treatment of interest, and it either does not receive a treatment (what if the study
itself causes the change to occur?) or receives the current standard treatment. It is
also referred to as the comparison group.
In conclusion we can say that randomised controlled experiments are needed to
establish causal conclusions. Another technique to reduce the potential for bias is
blinding:
1. the experimental units are blinded, so they do not know which treatment
they have received;
2. the researcher is blinded if s/he does not know which treatment was given.
Experiments can be single-blinded (only one type of blinding was used) or
double-blind (if both types of blinding were used). You can also make use of the placebo
effect: people often show change when participating in an experiment whether
or not they receive a treatment. A placebo is
something that is identical to the treatment received by the treatment groups,
except that it contains no active ingredients; it is given to the control group.
Introduction to Probability
1 The need for probability
Up to this point we've seen and focused on how to handle the data available to us.
Now it's time to see what the data in the real world corresponds to in the theoretical
world. The data of the real world, that we usually end up having, can be viewed
as just a sampling of the theoretical world (which has an infinite number of data
points). Even if there really aren't any more data points in the real world (so we
have collected all the existing data points) we can still pretend that there are, and
construct a theoretical model around it.
We usually try to draw inferences about our theoretical world by using the data
that we have, which represents the real world. Of course, the theoretical world
may differ from the real world, and we'll be interested in studying the relationship
between the two worlds. For instance, let us consider a coin toss. In the theoretical
world we expect to get 5 heads out of ten tosses, yet if we were to conduct a little
experiment we might end up getting 6 heads out of ten tosses.
In some cases (like tossing a bottle cap) we may not even have a theoretical
model, so the question arises: what can we conclude in this case?
2 Probability basics
Outcomes are the possible data that our experiment may result in. The set of all
possible outcomes is the sample space, noted $S$.
Event is any subset of the sample space.
Probability: each event has its own probability of turning out true, and for an event $A$:
$$0 \le P(A) \le 1$$
For example, the probability of getting some outcome is 1, as we always end up
having some result. The probability of the complement event (meaning that
event $A$ is not true) is:
$$P(\bar{A}) = 1 - P(A)$$
3 Probability distributions
For a coin or beer-cap flipping we use the binomial distribution, noted $B(2, \frac{1}{2})$.
If the number of trials of the binomial is one, we refer to it as the Bernoulli distribution:
$\text{Bernoulli}(\frac{1}{2}) = B(1, \frac{1}{2})$. The rolling of a die follows a discrete uniform distribution.
Mean is the expected value, what it equals on average:
$$\text{mean} = \mu = E(X) = \sum_x x \, P(x)$$
For example, for a variable taking the values 0, 1 and 2 with probabilities $\frac14$, $\frac12$ and $\frac14$:
$$0 \cdot \frac{1}{4} + 1 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} = 1$$
Variance in the theoretical world measures the spread of the values from their
mean value. The formulas are:
$$\text{variance} = \sigma^2 = \sum_x (x - \mu)^2 P(x), \qquad SD = \sigma = \sqrt{\text{variance}}$$
For instance, for a variable $Y$ with mean $E(Y) = 0$ and variance $\frac14$, the standard deviation is $\sqrt{\frac14} = \frac12$.
The mean is linear, so the mean of any linear combination of two random variables may be
expressed with their means:
$$E(aX + bY) = a\,E(X) + b\,E(Y)$$
For the variance:
$$Var(aX) = a^2\,Var(X), \qquad Var(aX + b) = a^2\,Var(X), \qquad SD(aX) = |a|\,SD(X)$$
If $X$ and $Y$ are independent:
$$Var(X + Y) = Var(X) + Var(Y)$$
Discrete random variables have a finite number of possible outcomes, and thus
may be enumerated in the form of a list. Continuous random variables can take any
value inside an interval. An instance of this is the uniform variable, for instance
on the interval from zero to one, meaning it is equally likely to be any number
inside this interval.
So for example $P(0 \le X \le 1) = 1$, and $P(0 \le X \le \frac13) = \frac13$; generally speaking
$P(a \le X \le b) = b - a$ if $0 \le a \le b \le 1$. This does mean that if $a = b$ the probability
is zero, so it is easier to think of continuous probability as the area under the graph
of the density function. It is important that for any density function the total area
under the entire graph equals 1.
The density of a uniform distribution has a rectangular shape, however other density functions
exist too; for example the Exponential(1) distribution has the form:
$$f(x) = \begin{cases} e^{-x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$$
The standard normal (Gaussian) distribution (bell curve):
$$f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$$
New bell curves may be constructed by shifting this one by $\mu$ (making it the
new center point) and stretching it by a factor of $\sigma$; the result is noted $\text{Normal}(\mu, \sigma^2)$.
If we have a random variable $X$ from this newly constructed normal distribution
we may transform it into a standard normal distribution by:
$$Z = \frac{X - \mu}{\sigma}, \qquad \text{where } Z \sim \text{Normal}(0, 1).$$
For expected values and standard deviations we now use integrals instead of
sums. For example, the expected value and variance of the uniform distribution between 0 and 1
are:
$$E(X) = \int_0^1 x \, dx = \frac{1}{2}, \qquad Var(X) = \int_0^1 \left(x - \frac{1}{2}\right)^2 dx = \frac{1}{12}$$
For the Exponential(1) distribution:
$$E(X) = \int_0^{\infty} x\, e^{-x}\, dx = 1, \qquad Var(X) = \int_0^{\infty} (x - 1)^2 e^{-x}\, dx = 1$$
For the standard normal distribution:
$$E(X) = \int_{-\infty}^{\infty} x\, \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\, dx = 0, \qquad Var(X) = \int_{-\infty}^{\infty} (x - 0)^2 \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\, dx = 1$$
4 Long running averages
What happens if you repeat an experiment lots of times and look at the average
value you get? For example, if you start flipping a coin many times and look
at the fraction of times you get heads, you expect that the more you do it, the closer
this comes to one half. If you were to draw the probabilities of the averages on a
graph, you'd observe that the shape starts to resemble the density function of the
normal distribution. The same is the case for dice rolling when looking at the average
of the rolled numbers.
The Law of Large Numbers states that if an experiment is repeated over and
over, then the average result will converge to the experiment's expected value.
The Central Limit Theorem states that if an experiment is repeated over and
over, then the probabilities for the average result will converge to a Normal
distribution.
Suppose that an experiment is repeated over and over, with outcomes $X_1, X_2, \dots$,
and suppose each mean is $E(X_i) = m$ and each variance is $Var(X_i) = v$. Now
let $\bar{X} = \frac{X_1 + X_2 + \dots + X_n}{n}$ be the average outcome. In this case we can say that:
$$E(\bar{X}) = m, \qquad Var(\bar{X}) = \frac{v}{n}, \qquad \bar{X} \approx \text{Normal}\left(m, \frac{v}{n}\right) \text{ for large } n.$$
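Both statements are easy to see in a small simulation (a sketch, not part of the original notes), using repeated coin flips:

set.seed(1)
# Law of Large Numbers: the running fraction of heads approaches 1/2
flips <- rbinom(10000, size = 1, prob = 0.5)
plot(cumsum(flips) / seq_along(flips), type = "l")
abline(h = 0.5, lty = 2)
# Central Limit Theorem: averages of 100 flips look approximately normal
means <- replicate(5000, mean(rbinom(100, size = 1, prob = 0.5)))
hist(means, breaks = 30)   # roughly bell shaped, centred on 0.5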
5 Sampling distribution
From the real world we collect samples, which can be considered as part of the
theoretical world's population. We have scientific and statistical models in the
theoretical world, which have parameters or features that we do not know. We use
the science of statistics to estimate them.
Let $p$ be our parameter of interest; for example, let it denote the probability of heads
in a coin-flip sequence. From the real world (data) we
get some result. With this we can estimate the parameter as:
$$\hat{p} = \frac{\text{number of heads}}{\text{number of flips}}.$$
The observed value of an estimator varies from sample of data to sample of
data. This variability is called the sampling variability. The probability
distribution of the possible values of an estimator is its sampling distribution.
A statistic used to estimate a parameter is unbiased if the expected value of its
sampling distribution is equal to the value of the parameter being estimated (the
sampling probability is the same as the theoretical probability). How close the
sampling distribution gets to the parameter depends on the variance of the
sampling distribution. This decreases with the number of samples, so the more
data we have the closer we get to the theoretical world.
Following the central limit theorem, for large $n$ the sampling distribution for
a Bernoulli event (it happens or not, with probability $p$) is $N\!\left(p, \frac{p(1-p)}{n}\right)$ (the sampling
distribution of $\hat{p}$).
If we take a normal theoretical model and sample from it, we can estimate the expected value of the theoretical
model as the average of the sample we have, and the standard deviation of the
sample average decreases with the number of data points ($\frac{\sigma}{\sqrt{n}}$) compared to the standard
deviation of a single observation. The sampling distribution of $\bar{X}$ is $N\!\left(\mu, \frac{\sigma^2}{n}\right)$.
For calculating the variance, dividing by $n - 1$ is important to make the estimator unbiased, while dividing by $n$ would result in a biased estimator.
Confidence interval
We observe the real world in order to understand the theoretical world; for this
we'll use the scientific and statistical models devised in the theoretical world and
use data from the real world to estimate the parameters of the model. We
do this in the hope that whatever conclusions we can make from our models will also hold
in the real world.
Now let us imagine the experiment of tossing a fair coin ten times. This experiment has a binomial distribution with probability $\frac12$; however, when amassing data
from the real world we will not always get this proportion, due to the sampling
having its own distribution: for example, extreme events (like getting ten heads)
are very unlikely, yet getting half or close to half of them heads is likely to
happen. The question arises: where do we draw the line, what are the values we are
likely to get most of the time?
Sometimes we will not have a model for our events, as in the case of flipping
a beer cap. If we perform a single experiment and get one particular side $m$ times out of $n$
flips we can ask: was this a likely outcome? $m$ may be a sample from some sampling
distribution (with its own parameters). For instance, for the beer cap let $n = 1000$
and $m = 576$. Our statistical model is binomial with 1000 trials, however we
have no idea what the probability parameter of the model is.
An estimate is $\hat{p} = \frac{m}{n} = \frac{576}{1000} = 0.576$. This may be a good estimate, however the truth
may be some other number, so we ask: what else could $p$ be? That is, what is
an interval that, based on our existing experiments, the probability of getting one
side of the beer cap could plausibly lie in? We refer to this as the confidence interval.
The following methods suppose that our data was collected in the form of simple
random sampling, and that our variables are independent; this is something that statistical inference requires, otherwise we may need a more complicated model.
So in conclusion we can say that the goal of statistical inference is to draw
conclusions about a population parameter based on the data in a sample (and
statistics calculated from the data). A goal of statistical inference is to identify a
range of plausible values for a population parameter. Inferential
procedures can be used on data that are collected from a random sample from a
population.
Let us follow the example of the bottle cap. We are searching for the real $p$
given the estimate $\hat{p}$. The expected value of $\hat{p}$ is $E(\hat{p}) = p$ and the variance is
$Var(\hat{p}) = \frac{p(1-p)}{n} = \frac{p(1-p)}{1000}$. Now, according to the central limit theorem, because $\hat{p}$ is
the sum of lots and lots of small flips, it approximately follows a normal
distribution; that is, $\hat{p} \approx \text{Normal}\left(p, \frac{p(1-p)}{1000}\right)$. By performing a reorganization:
$$\frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} \approx \text{Normal}(0, 1)$$
Now, for a normal distribution (and according to the empirical rule) the area
between $[-1.96, 1.96]$ covers 95% of the total area, so most likely our sample is from this
interval. As a mathematical formula:
$$P\left(\left|\frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}}\right| > 1.96\right) = 0.05 = 5\%$$
By writing up the reverse, and expanding, we can conclude that, with $z_{\alpha/2} = 1.96$:
$$P\left(\hat{p} - z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}} \;\le\; p \;\le\; \hat{p} + z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}}\right) = 95\%$$
Here $z_{\alpha/2}\sqrt{p(1-p)/n}$ is the margin of error, and the two endpoints are the lower and the upper limit.
This means that we are 95% confident that the value of $p$ is between the lower and
the upper limit. The true value of $p$ is not random; however, $\hat{p}$ is, as we took a
random sample. Now the problem with this formula is that while we do know $\hat{p}$, $p$
is unknown. One solution is to set $p = \hat{p}$; another is to set $p = \frac12$, because that is the
worst case, giving the widest interval we can get.
For a given confidence interval $[a, b]$ the margin of error may be calculated as:
$$a = \hat{p} - \text{margin of error}, \qquad b = \hat{p} + \text{margin of error}, \qquad \text{margin of error} = \frac{b - a}{2}$$
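For the beer-cap numbers above ($\hat{p} = 576/1000$), the 95% interval can be computed directly; a minimal sketch:

p_hat <- 576 / 1000
n     <- 1000
me    <- 1.96 * sqrt(p_hat * (1 - p_hat) / n)   # margin of error, about 0.031
c(p_hat - me, p_hat + me)                       # roughly [0.545, 0.607]
prop.test(576, 1000)$conf.int                   # similar interval, with a continuity correction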
By modifying the area that we take under the normal distribution we can
get different confidence intervals for different probabilities. If you specify a
bigger confidence level, like 99%, you'll get a wider confidence interval. It's up
to you what trade-off you are willing to accept.
Now assume you want to achieve a probability $\alpha$ of being wrong. In this
case, taking the graph of the standard normal distribution, you want to find $z_{\alpha/2}$
such that the area remaining in each tail is only $\frac{\alpha}{2}$. The area between
$-z_{\alpha/2}$ and $z_{\alpha/2}$ is then $1 - \alpha$, which is what we are looking for, as we are now missing $\alpha$ of
the full area.
2 Sample size for estimating a proportion
When estimating a proportion the size of the sample has a major impact. Let's
see how we can determine the sample size we need to estimate a given proportion. We
do not want to go overboard with this, as we may just waste resources or induce
unneeded effort to collect it. We assume that data is collected with simple
random sampling from a population.
The margin of error is:
$$\text{margin of error} = z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}}$$
For instance, if our confidence level is 95%, then $\frac{\alpha}{2} = \frac{1 - 0.95}{2} = 0.025$, and say we
want a margin of error of 0.03. The question is what $n$ should be. For 95%
our normal quantile is 1.96. So:
$$0.03 = 1.96\sqrt{\frac{p(1-p)}{n}}$$
But we do not know what $p$ will be. To bypass this we plan for the worst-case
scenario: the expression $p(1-p)$ has its maximum at $p = \frac12$, which also gives our
maximal margin of error for a given $n$. Now we resolve the equation:
$$n = \left(\frac{1.96 \cdot \frac12}{0.03}\right)^2 \approx 1067$$
Confidence intervals are about our confidence in the procedure to give us correct
results: 95% confidence intervals should contain the population parameter $p$ 95% of
the time. If simple random samples are repeatedly taken from the population, due
to the randomness of the sampling process the observed proportion $\hat{p}$ will change
from sample to sample, and so will the confidence interval constructed around $\hat{p}$.
Of these confidence intervals, 95% will include the true population proportion $p$.
3 Confidence intervals for means
In this case the data is not categorical but instead a continuous variable.
The expected value is $\mu$, estimated by $\bar{X}$; the
variance of $\bar{X}$ is equal to $\frac{\sigma^2}{n}$. Again, if the data is made up of the sum of a bunch of
small measurements, we can assume according to the central limit theorem that
$\bar{X} \approx \text{Normal}\left(\mu, \frac{\sigma^2}{n}\right)$. Again we reduce this to the standard normal distribution to
get:
$$\frac{\bar{X} - \mu}{\sqrt{\frac{\sigma^2}{n}}} \approx \text{Normal}(0, 1)$$
The problem now is that we do not know the value of $\sigma$. What would
be a good estimate for this? One solution is to use the sample standard deviation of
$X$ (calculated by dividing by $n - 1$), noted $s$. However, while $E(s^2) = \sigma^2$,
substituting this into the formula above does not give a normal distribution; instead
we'll have a $t$ distribution with $n - 1$ degrees of freedom:
$$\frac{\bar{X} - \mu}{\sqrt{\frac{s^2}{n}}} \sim t_{n-1}$$
The $t$ distribution is similar to the normal one, though not quite the same. Increasing the degrees of freedom reduces the difference between the two. With this
the $z_{\alpha/2}$ quantile changes as well, so you'll need to use a table to get the correct number for a
given number of degrees of freedom (which is $n - 1$, where $n$ is the sample size). The margin of
error may be calculated as:
$$\text{margin of error} = t_{\alpha/2,\,n-1}\sqrt{\frac{s^2}{n}}$$
We can use this to estimate the true mean from a sample mean. That is, with
a given confidence we can say that our true mean is somewhere inside the calculated confidence interval, which is built from the sample mean and the calculated
margin of error.
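In R the whole computation is handled by t.test; a sketch with a made-up numeric sample:

x <- c(7.1, 6.4, 8.2, 5.9, 7.8, 6.6, 7.3, 8.0)   # hypothetical measurements
t.test(x, conf.level = 0.95)$conf.int             # confidence interval for the true mean
# the same interval by hand:
mean(x) + c(-1, 1) * qt(0.975, df = length(x) - 1) * sd(x) / sqrt(length(x))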
4 Robustness for confidence intervals
To use these methods many conditions must be satisfied, and an important part
of statistical inference is to determine whether these hold. Here is what we need to
make sure is true:
1. the $n$ observations are independent;
2. $n$ needs to be large enough, so that the central limit theorem may kick in
and $\hat{p}$ have a normal distribution.
For extreme theoretical $p$ values larger sample sizes are required to achieve
the same confidence. For instance, in the case of a coin flip, if $p$ is $\frac12$ a hundred
samples may be enough for the central limit theorem to kick in and achieve a 95%
confidence level. However, if $p = 0.01$ we may need 1000 samples for the same confidence.
An explanation for this is that the normal distribution is a continuous model, and
we are using it to estimate discrete values.
In order for the confidence interval procedure for the true proportion to provide
reliable results, the total number of subjects surveyed should be large. If the true
population proportion is close to 0.5 a sample of 100 may be adequate. However,
if the true population proportion is closer to 0 or 1 a larger sample is required.
Increasing the sample size will not eliminate non-response bias. In the presence
of non-response bias, the confidence interval may not cover the true population
parameter at the specified level of confidence, regardless of the sample size. An
assumption for the construction of confidence intervals is that respondents are
selected independently.
In the case of means the conditions are:
1. the $n$ observations are independent;
2. $n$ needs to be large enough so that $\bar{X}$ is approximately normally distributed.
The $t$ distribution works extremely well even with a low number of samples
($n = 10$) if the theoretical model is a normal or a skewed normal one. For this to
fail we need some really extreme distribution, for example one that sits most of the time at one
end but has some chance of an outlier value. However, even in these cases, just
increasing the sample size (to around 40, say) may already result in
roughly the claimed coverage (around 90%).
Nevertheless, we also need to consider whether estimating the mean is a meaningful
thing to do. Remember that the mean is not a robust measurement, because it is
affected by outliers, something that is true for its distribution too.
A method for constructing a confidence interval is robust if the resulting confidence intervals include the theoretical parameter approximately the percentage
of time claimed by the confidence level, even when the necessary conditions for the
confidence interval aren't satisfied.
Tests
1.1 The hypotheses
The first step is to state a hypothesis about the world, for example: is the flipping of a beer cap like a coin flip? Afterwards you need to state an
alternative to the hypothesis.
A real-world analogy is a court case. We start out from the assumption that
everyone is innocent until proven otherwise. In statistics this corresponds to the
null hypothesis ($H_0$, pronounced H-naught). This states that nothing is happening: there is no effect, no difference.
1.2 The evidence
In court, to prove our claim we'll need evidence. In statistics the evidence is
provided by the data we have. In order to make it useful we have to summarize
our data into a test statistic, which is a numeric representation of our data. This is
always formulated under the assumption that the null hypothesis holds. In the case of
the plastic surgery example we summarize the data assuming that the perceived age difference
is zero, in line with our null hypothesis.
1.3 Deliberation
Once the evidence is presented the judge/jury deliberates whether, beyond a reasonable
doubt, the null hypothesis holds or not. In statistics the tool of deliberation is
the p-value. This transforms a test statistic onto a probability scale: a number
between zero and one that quantifies how strong the evidence is against the null
hypothesis.
At this point we ask: if $H_0$ holds, how likely would it be to observe a test
statistic of this magnitude or larger just by chance? The numerical answer to this
question is the p-value. The smaller this is, the stronger the evidence against the
null hypothesis. However, it is not a measurement that tells you how likely it is
that $H_0$ is true.
$H_0$ either is or is not true; it is not a random variable, there is no randomness
involved. The p-value just tells you how unlikely the test statistic is if $H_0$ were true.
1.4 The verdict
The fourth and final step is the verdict. Evidence that is not strong enough corresponds to
a high p-value. With this we conclude that the data is consistent with the null
hypothesis. However, we have not thereby proved that it is true, although we cannot
reject it either. A small p-value constitutes sufficient evidence against $H_0$, and we
may reject it in favour of $H_A$. In this case the result is statistically significant.
One question remains to be answered: how small is small? There are no clear
rules on this, however it is good practice to treat $p < 0.001$ as very strong evidence, $0.001 <
p < 0.01$ as strong, $0.01 < p < 0.05$ as moderate, and $0.05 < p < 0.1$ as weak. A
larger p-value holds no strength against $H_0$. The cut-off value you'll want to use
is context dependent, with smaller values if a wrong rejection would lead to serious inconveniences.
If there is not enough evidence to reject the null hypothesis, the conclusion of
the statistical test is that the data are consistent with the null hypothesis. The
null hypothesis cannot be proven true. For example, imagine an experiment where
a coin is flipped 100 times and we observe 52 heads. The null hypothesis may be
that the coin is fair and the probability of heads on any flip is 0.5. The alternative
hypothesis would be that the probability of heads on any one flip is not 0.5.
The observed data show 52 heads out of 100 flips and that is not convincing
evidence to reject the null hypothesis. These data are consistent with the null
hypothesis p = 0.5 but would also be consistent with a null hypothesis that the
probability of heads is 0.5000001 or 0.501 or 0.52 or many other values.
We'll use the following situation to visualize this: there has been a poll with $n = 1{,}046$
people, of which 42% said that they support the mayor. So the true support for
the mayor is unknown ($p$), however the estimate is $\hat{p} = 0.42$. What we want to
know is whether the true support is less than 50%:
$$H_0: p = 0.5 \qquad H_A: p < 0.5$$
Question: can we reject $H_0$?
We want to compute the p-value. We start out from our theoretical model,
that:
$$\text{test statistic} = \frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} \approx \text{Normal}(0, 1)$$
$$P(\hat{p} - p < -0.08) = P\left(\frac{\hat{p} - p}{\sqrt{\frac{\frac12\cdot\frac12}{1046}}} < \frac{-0.08}{\sqrt{\frac{\frac12\cdot\frac12}{1046}}}\right) \approx P\big(\text{Normal}(0, 1) \le -5.17\big) \approx \frac{1}{9{,}000{,}000}$$
This is an extremely small value, which means that our evidence against the
null hypothesis is strong; therefore we may reject it, and conclude that the mayor's
support is less than 50%. If we were to put a lower number into the equation
we could check just how small the mayor's support is. For example, in the case of 44%
we get a number of about 0.0968, which means that under that null hypothesis we have
roughly a 10% chance to sample this, and that is not strong evidence against $H_0$.
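Both p-values can be reproduced with pnorm (a sketch; the second value varies slightly depending on which proportion is used in the standard error):

p_hat <- 0.42
n     <- 1046
pnorm((p_hat - 0.50) / sqrt(0.50 * 0.50 / n))   # about 1e-07: reject H0: p = 0.5
pnorm((p_hat - 0.44) / sqrt(0.44 * 0.56 / n))   # roughly 0.10: weak evidence against p = 0.44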
If we have a two-sided hypothesis then instead of a one-tail probability we use
$P(|Z| \ge |\text{test statistic}|) = P(Z \ge |\text{test statistic}|) + P(Z \le -|\text{test statistic}|)$,
which in the case of symmetric distributions translates to $2\,P(Z \ge |\text{test statistic}|)$.
As a test case for this we'll use the plastic surgery data, where we are interested
in how much younger people are perceived to be after the surgery. The question is:
do all patients look younger, or have we just been lucky? We had $n = 60$
persons in the survey, where after the surgery people looked younger on average
by $\bar{X} = 7.177$ years, with a standard deviation of 2.948 years.
Formulating the problem we can say that in the theoretical world we have the true
mean (noted $\mu$), with an estimated mean of $\hat{\mu} = \bar{X} = 7.177$. Our null hypothesis
is that people do not look younger, $H_0: \mu = 0$; our alternative hypothesis is
$H_A: \mu > 0$. Now, we saw previously that:
$$\frac{\bar{X} - \mu}{\sqrt{\frac{s^2}{n}}} \sim t_{n-1}$$
Under the null hypothesis ($\mu = 0$):
$$P(\bar{X} \ge 7.177) = P\left(\frac{\bar{X} - \mu}{\sqrt{\frac{s^2}{n}}} \ge \frac{7.177 - 0}{\sqrt{\frac{s^2}{n}}}\right)$$
Now we have something whose distribution we know; it is:
$$P\left(t_{59} \ge \frac{7.177}{\sqrt{\frac{(2.948)^2}{60}}}\right) = P(t_{59} \ge 18.86) \approx \frac{1}{2 \cdot 10^{26}}$$
This is really small, and thus we can reject the null hypothesis.
We start out from the null and the alternative hypotheses. We then calculate the
p-value, representing the probability of observing a value of the test statistic at least this extreme
given that $H_0$ is true. Small p-values are evidence against $H_0$.
significance level of a test gives a cut-off value for how small is small for a p-value.
We note it with $\alpha$. This gives a definition for the reasonable-doubt
part of the method. It shows how the testing would perform in repeated
sampling: if we collect test statistics again and again, sometimes we'll get
weird data. So for a significance level of 1%, in 1 case out of 100 we will draw the
wrong conclusion (reject $H_0$ when it holds). Setting it too small, however, would
mean that you essentially never reject $H_0$.
power of the test is the probability of making a correct decision (by rejecting the
null hypothesis) when the null hypothesis is false. Higher-power tests are
better at detecting a false $H_0$. The power is influenced as follows:
power increases the further our alternative hypothesis is from the null
hypothesis (however, in practice this is rarely in our control);
if $\alpha$ is increased then less evidence is required from the data in order to
reject the null hypothesis; therefore, for increased $\alpha$, even if the null
hypothesis is the correct model for the data, it is more likely that the
null hypothesis will be rejected;
less variability increases power;
increasing the sample size also helps.
To determine the sample size needed for a study whose goal is to
get a significant result from a test, set $\alpha$ and the desired power, and decide on
the alternative (the effect size) you want to be able to detect.
type I error is incorrectly rejecting the null hypothesis, because we happened to get data that
seemed unlikely under it. For a fixed significance level the
probability of this happening is $\alpha$.
type II error is incorrectly failing to reject the null hypothesis; the probability of
this is $\beta = 1 - \text{power}$. This usually happens because we do not have enough
power, typically due to a small sample size.
Generally we would like to decrease both types of errors, which can be done
by increasing the sample size (like having more evidence in a trial). It is context
dependent which of the errors is more important.
Do not misinterpret p-values. A p-value does not tell you how likely it is that $H_0$ is true;
it tells you how likely the observed data would be if $H_0$ held. The p-value is a
measure of significance, therefore report it whenever you make conclusions.
Data collection matters! Testing cannot correct flaws in the design of the data
collection: not randomly choosing samples, lack of control, lack of randomisation
in assigning treatments. All of these lead to bias, confounding, and an inability
to make conclusions.
Always use a two-sided test, unless you are sure a priori that one direction is of no
interest.
Statistical significance is not the same as practical significance. Just because
something is statistically significant, it does not mean that it is also meaningful practically.
For example, let us suppose we want to measure the temperatures of people, and for this
we use a huge sample, compared to the small already existing one.
The increased sample size means that even a small difference from the expected
mean temperature under the null hypothesis may be declared statistically significant. For example, we could demonstrate that a difference of 0.005 degrees is
statistically significant. While this is very precise, there is not much practical
significance, especially because most commercial thermometers do not measure
temperature with such high precision.
A large p-value does not necessarily mean that the null hypothesis is true;
there may not be enough power to reject it. Small p-values may happen due to:
chance,
data collection issues (lack of randomization),
... will contain 97 as a plausible value for $\mu$. All confidence intervals with confidence
level less than 98% will not contain 97 as a plausible value for $\mu$.
Matched pairs
There are situations when we no longer observe a single variable; sometimes there are two different measurements/groups, and we want to tell whether these two differ.
This section will look into methods to do just this.
As a case study let us take the anthropology example, that is, estimating the age
at death of persons by examining their bones. Our observed values are the
errors of the methods relative to the actual ages at death. The DiGangi method tends to
underestimate the age at death, as for more than 75% of measurements the error
is negative; the error is on average $-14$ years.
The Suchey–Brooks method uses a different bone for the estimate, and thus may offer
different values. The mean error for this is $-7$ years, so this too underestimates. However,
these numbers are taken from just a handful ($n = 400$) of samples, so the question
arises: are these two methods actually different?
Matched pairs are useful when we want to compare two conditions or variables,
and we are able to compare them on the same or on two very similar observational or
experimental units (the same subject, or twins/siblings). Examples are age estimates
on the same skeletons with different methods, or testing the same subjects under
two different conditions (pre- and post-course).
However, the matching can also be in time, like when we want to compare two
branches of a business and we compare sales for the same week. A crossover
study is when we have two treatments and we compare their outcomes by giving
each experimental unit first one treatment and then the other one.
This means that we do not need to randomize experimental units, as everyone
will get all treatments. However, the order in which each experimental unit receives the
treatments is randomized, to avoid cases where the effect of a later treatment is
affected by an earlier treatment. Therefore, the variation in the comparison does
not include the variation among observational or experimental units, resulting in
a greater precision in the estimate of the difference between the treatments or
conditions being compared.
Measurements on a matched pair are not independent, which is what yields the better
precision, but the analysis must reflect the pairing. The general approach is
to treat each pair as a separate observation; then they will all be independent,
because each skeleton is independent. A way to do this is to take the difference
between the two methods as a new observed variable.
A practical problem is: what do we do with experimental units for which one
of the measurements is missing or not available? An easy solution is to just ignore
these. Now let the difference be noted as $d$. We state our null hypothesis that there
is no difference between the methods, that is $H_0: \mu_d = 0$ versus $H_A: \mu_d \ne 0$. From
our data we know that $\bar{X}_d = -6.854$, $s_d = 11.056$, and $n = 398$. We can formulate the
test statistic as:
$$\frac{\bar{X}_d - 0}{\frac{s_d}{\sqrt{n}}} = \frac{-6.854 - 0}{\frac{11.056}{\sqrt{398}}} \approx -12.37$$
If the null hypothesis is true, this test statistic should be a value from a $t$
distribution with 397 degrees of freedom. The degree of freedom is quite high, so the $t$
distribution is very close to the standard normal distribution. In the case of a standard normal distribution 99.7% of the data is within three standard deviations, that
is between $[-3, 3]$. So a value greater than 12.37 or smaller than $-12.37$ is very
unlikely. If one were to calculate a p-value it would contain 29 zeros after the
decimal point.
We can conclude that we have extremely strong evidence that the null
hypothesis is not true. So the two methods' results differ in the mean of
the age error estimation.
To construct a confidence interval with 95% confidence, for a data set with
398 observations we need to take a $t$-distribution with 397 degrees of freedom, for
which the central area is 0.95, resulting in the critical values of $-1.966$ and $+1.966$:
$$\bar{X}_d \pm \text{critical value} \cdot \frac{s_d}{\sqrt{n}} = -6.854 \pm 1.966 \cdot \frac{11.056}{\sqrt{398}} \approx [-7.94, -5.76]$$
Therefore, we estimate the difference between the two methods to be at least 5.76
and at most 7.94 years in magnitude. Because the hypothesis test gave strong evidence that the
mean of the difference is not zero, this confidence interval does not contain the
value zero.
It is important to understand that multiple measurements on the same subject
are not independent, since two measurements within one subject are more likely to
be similar than two measurements on different subjects. For example, two math
test scores for the same student will likely be more similar than math test scores
chosen from two random students.
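In R the whole matched-pairs analysis is a single call; a hedged sketch assuming two vectors of age-estimation errors, digangi and suchey, measured on the same skeletons:

d <- digangi - suchey                   # skeleton-by-skeleton differences
t.test(d)                               # one-sample t-test of H0: mean difference = 0
t.test(digangi, suchey, paired = TRUE)  # equivalent paired form, also reports the 95% CI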
In this case again we have two different conditions/variables; however, now they
are no longer the result of observing the same experimental unit, they are independent:
group one: sample size $n_1$, success fraction $\hat{p}_1$,
group two: sample size $n_2$, success fraction $\hat{p}_2$.
A 95% confidence interval for the difference of the proportions is:
$$\hat{p}_1 - \hat{p}_2 \pm 1.96\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$$
From the resulting confidence interval one may conclude whether the difference is
positive or negative (or whether zero is plausible).
In the case of a hypothesis test the same logic applies: let there be $H_0: p_1 = p_2$
versus $H_A: p_1 \ne p_2$, knowing that
$$\frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}} \approx \text{Normal}(0, 1).$$
Now, under $H_0$ the two proportions are equal, so we can pool them as $\hat{p} = \frac{n_1\hat{p}_1 + n_2\hat{p}_2}{n_1 + n_2}$, and
$$\frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}} \approx \text{Normal}(0, 1).$$
This may be used to compute the p-value for the observed difference $\hat{p}_1 - \hat{p}_2$.
Comparing means
To compare the difference between the true means of two independent variables, with
sample sizes $n_1$ and $n_2$, means $\bar{X}_1$, $\bar{X}_2$ and variances $s_1^2$ and $s_2^2$, one can reuse
the following fact:
$$\frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} \approx t_{df}$$
The relation holds approximately, with the degrees of freedom given by the
equation:
$$df = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{\left(\frac{s_1^2}{n_1}\right)^2}{n_1 - 1} + \frac{\left(\frac{s_2^2}{n_2}\right)^2}{n_2 - 1}}$$
The confidence interval is then:
$$\bar{X}_1 - \bar{X}_2 \pm t_{\alpha/2,\,df}\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$$
For a hypothesis test, the null hypothesis is that the difference is zero. We
reuse the above approximation to compute p-values. Under the null hypothesis we cannot pool together the means like we did for the proportions, unless we also
assume that the two series have the same variance. The p-value may be found as:
$$P\left(t_{df} \ge \frac{|\bar{X}_1 - \bar{X}_2|}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}\right)$$
where the degrees of freedom are calculated with the formula above.
If the variances are equal we can pool them, since under $H_0$ we have the same mean
and variance, by using for both $s_1^2$ and $s_2^2$ the pooled variance:
$$s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{(n_1 - 1) + (n_2 - 1)}$$
Under this assumption the calculation of the degrees of freedom simplifies to $df = (n_1 - 1) + (n_2 - 1) = n_1 + n_2 - 2$. Note that the sample variances do not
need to be the same; what needs to be equal is the true variance of the two populations
($\sigma_1^2$ and $\sigma_2^2$).
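Both variants are available through t.test; a sketch with two made-up independent samples:

x1 <- c(12.1, 14.3, 11.8, 13.5, 12.9)   # hypothetical group one
x2 <- c(10.2, 11.7, 12.4,  9.8, 11.1)   # hypothetical group two
t.test(x1, x2)                     # Welch test, Satterthwaite degrees of freedom
t.test(x1, x2, var.equal = TRUE)   # pooled-variance test, df = n1 + n2 - 2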
Linear Regression
1 Regression Coefficients, Residuals and Variances
Linear regression (or lines of best fit) may be used to understand the linear relationship
between data. For two variables the dependent (response) variable is noted
$Y$, while the independent (explanatory, predictor) variable is noted $X$. The
equation of a line looks like $y = b_0 + b_1 x$. We try to come up with values for $b_0$
and $b_1$ which are the best they can be, to describe the data as well as possible.
For this one can assume that at a given point $x_i$ the response may be calculated as
$b_0 + b_1 x_i$. However, the actual value we have is $y_i$, making the difference between
them $y_i - (b_0 + b_1 x_i)$ (the residual). We don't just want these to be small, because
they may turn negative; a nice way of handling this is to make $(y_i - b_0 - b_1 x_i)^2$
as small as possible.
So for a whole dataset we will try to minimize the sum of squares:
$$SS = \sum_{i=1}^{n}(y_i - b_0 - b_1 x_i)^2$$
Setting the partial derivatives to zero,
$$\frac{\partial SS}{\partial b_0} = 0, \qquad \frac{\partial SS}{\partial b_1} = 0,$$
results in the formulas:
$$b_0 = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - b_1\sum_{i=1}^{n} x_i\right), \qquad b_1 = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i\sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}$$
$b_0$ is also called the intercept and $b_1$ the slope. A slope ($b_1$) of $\beta$ means that
when the explanatory variable increases by 1, the response variable increases by
$\beta$. The intercept is what the response variable would be in the case of an
explanatory variable with zero value. It is how much we have to shift the line in
order to get the line of best fit.
Regression is not symmetric: if you swap the explanatory and the response variables, the new slope will not turn out to be the inverse of the original slope.
Substituting back:
$$b_0 = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - b_1\sum_{i=1}^{n} x_i\right) = \bar{y} - b_1\bar{x},$$
so the fitted value at $x_i$ is
$$b_0 + b_1 x_i = (\bar{y} - b_1\bar{x}) + b_1 x_i, \qquad \text{with} \qquad b_1 = \frac{\frac{1}{n}\sum_{i=1}^{n} x_i y_i - \bar{x}\bar{y}}{\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2}.$$
Now we'll reformulate this to:
$$b_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2\sum_{i=1}^{n}(y_i - \bar{y})^2}} \cdot \sqrt{\frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} = R\,\frac{s_y}{s_x},$$
where $R$ is the correlation, $s_x$ is the sample standard deviation of $x$ and $s_y$
the sample standard deviation of $y$.
The residual is the difference between the actual and the predicted value; for
the $i$-th observation it may be calculated as $e_i = y_i - b_0 - b_1 x_i$. The mean $\bar{e}$ is 0: sometimes we
overshoot, sometimes we underestimate, but on average we get it just right.
$$\begin{aligned}
Var(e) &= \frac{1}{n-1}\sum_{i=1}^{n}(e_i - \bar{e})^2 \\
&= s_y^2 + \left(R\,\frac{s_y}{s_x}\right)^2 s_x^2 - 2R\,\frac{s_y}{s_x}\cdot\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}) \\
&= s_y^2(1 - R^2) = Var(y)(1 - R^2)
\end{aligned}$$
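In R the coefficients, the residuals and the inference of the next section all come from lm; a sketch reusing the hypothetical x and y vectors from earlier:

fit <- lm(y ~ x)              # least-squares fit y = b0 + b1 * x
coef(fit)                     # intercept b0 and slope b1
summary(fit)                  # adds SE(b1), the t statistic and the p-value
confint(fit, level = 0.95)    # confidence intervals for the coefficients
mean(resid(fit))              # essentially zero, as argued above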
2 Regression Inference and Limitations
How much certainty is there in the values introduced in the earlier section? Maybe
those values are right for the data world, but for the true values (of the theoretical
world) there may be slight variations. For the theoretical world we note the true
slope as $\beta_1$. How close is the data-world slope ($b_1$) to $\beta_1$? It is true that the
distribution of $\frac{b_1 - \beta_1}{SE(b_1)}$ is $t_{n-2}$, that is, it follows a $t$-distribution with $n - 2$ degrees of
freedom (one degree lost for the slope, and one extra for the intercept).
$SE(b_1)$ is the standard error involved, given by the formula:
$$SE(b_1) = \sqrt{\frac{\sum_{i=1}^{n} e_i^2}{(n-2)\sum_{i=1}^{n}(x_i - \bar{x})^2}}, \qquad \text{where } e_i = y_i - b_0 - b_1 x_i$$
Now we can use this $t$-distribution to carry out, for the slope, the:
confidence interval: a $(1 - \alpha)$ confidence interval for $\beta_1$ is $b_1 \pm t_{\alpha/2,\,n-2}\,SE(b_1)$;
hypothesis test: we start out again from the $t$-distribution written above, using the
null hypothesis $H_0: \beta_1 = 0$ and the alternative $H_A: \beta_1 \ne 0$.
For the p-value we use, under the null hypothesis, the probability of a slope at least as extreme as $b_1$, which can be written as $P\left(|t_{n-2}| \ge \left|\frac{b_1}{SE(b_1)}\right|\right)$. If the p-value is small we
may reject the null hypothesis and conclude that there is evidence of a linear relationship.
As for limitations, we can say that linear regression can only tell us about linear
relationships. It says nothing about other kinds of relationships (like quadratic ones). Moreover,
there are cases when even if we have perfect summary numbers, we still cannot tell what the
numbers mean. Therefore it is important to draw the data and
visualize it. Linear regression is also not robust to the presence of outliers, so
whenever analyzing data it is important to remember that these may skew the results.
The value of the slope b1 should be interpreted relative to its standard error,
SE(b1 ). For example, imagine a regression of weight on height for college students.
We know that there is a strong positive association between these two quantities
so it would not be surprising to be able to reject the null hypothesis that 1 = 0.
This will be true regardless of the units of measurement of either variable.
Note that the estimates of the slope and the standard error will change depending on the units of measurement. If weights are measured in kilograms but
height in millimetres, b1 and SE(b1 ) will be large values. If weights are measured
in kilograms but height in kilometres, b1 and SE(b1 ) will be extremely small values
but there will still be strong evidence of a linear relationship between height and
weight.
3 Residual Analysis and Transformations
The question here is: is the relationship between the two variables well described by a
linear relationship? For this we'll study the residual values (the difference between
where a predicted value should be according to the linear regression and where it
actually is). In the case of a linear relationship the residuals should not show any trend;
they should be equally spread out if one were to plot them.
If the residual plot shows a trend, it means that the linear relationship does not
tell the whole story. For such cases one could apply a transformation to eliminate
the trend. One such transformation is the logarithmic transformation, with base 10 for
example.
Note that the transformation also transforms the slope values,
so the increase/decrease is no longer linear.
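A sketch of how one might check and address this in R, continuing the same hypothetical fit:

plot(x, resid(fit)); abline(h = 0, lty = 2)    # residuals should show no trend
fit_log <- lm(log10(y) ~ x)                    # log-transform the response
plot(x, resid(fit_log)); abline(h = 0, lty = 2)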
Formulate
A clearly stated purpose makes the results of any study more compelling. It is
always the case that performing multiple tests increases the chance of making a
type I error, as would be the case if we were to perform a statistical test repeatedly
for each of multiple outcome variables.
Design
We want to design and implement a study to acquire data that are relevant to
the research questions. To do this we need to be able to generalise conclusions
to a population of interest, avoid any systematic error (bias) in both the measurements and the conclusions, reduce the non-systematic error caused by natural
variability for more precise estimates and more power in our conclusions, and ensure
that the sample is appropriate to have enough power to make conclusions (while
avoiding unnecessary measurements).
The first question is what kind of study to carry out:
anecdotal: anecdotal evidence doesn't lead to conclusions that can be generalized
beyond the current observations.
sample survey (poll): only allows measurements that can be made with questions.
observational study: while it allows us to conduct the observations in the experimental units' natural environment, it does not allow making causal conclusions.
It carries the risk of common-cause and confounding variables.
experiment: if controlled and randomized, it allows us to make full inference. The
principles of experimental design are control (we can hold some
variables constant, use blocking, and choose subjects appropriately with inclusion and exclusion criteria), randomisation (selection of experimental units
and assignment of treatments) and replication (enough subjects to acquire
statistically significant results).
Preliminary analysis
At this step we need to identify problems in the data and also perform some of the preliminary analysis. Look for obvious errors and identify features (outliers, skew,
etcetera) that may affect the choice of analysis.
For categorical variables use bar plots and pie charts; for quantitative variables
use boxplots and histograms. Missing values, if they are missing completely at random, will
not introduce bias into the results. However, if the missingness is due to some causal
effect, then those values are informative and their absence may potentially lead to
biased results.
Formal analysis
Formal and preliminary analysis form an iterative process. What you learn from
the formal analysis may cause you to look at the data in new ways, which will then lead
to new formal analysis.
Although the t-test is very robust, it does not work in every
situation (non-normal distributions), so it is important to take a look at the data
first, prior to applying it. The removal (or absence) of outliers will give more power
to our statistical tests (resulting in lower p-values).
The central limit theorem needs large sample sizes when making comparisons
with proportions that are close to zero. There are topics that were not covered
in this course:
analysis of variance: to compare more than two means;
longitudinal analysis: analysing not just the difference between two values (start plus end point),
but the way those two variables got there;
multiple testing: controlling the chance of making type I errors in multiple tests.