
Hypothesis testing

The framework of hypothesis testing is:


Construct a study hypothesis regarding the entire population.
Construct a null hypothesis, which is the opposite of the study hypothesis.
Study hypothesis: An assumption that is usually based on a biological fact (smoking
increases the risk of lung cancer, DM increases the risk of CKD, etc.).
Null hypothesis: Assumes there is no difference between the two groups.
Because research results are never 100% certain, the null hypothesis states that the
findings are the result of chance or random factors.
Based on your results, if the probability of obtaining the results by chance alone is less
than 5% (p < 0.05), the null hypothesis is rejected. If that probability is 5% or more,
the null hypothesis still stands.
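The decision rule above can be sketched with a small, self-contained example. The scenario (a drug trial where 60 of 100 patients improve, against a null hypothesis of a 50% improvement rate) is hypothetical, and a one-sided exact binomial test is used here purely for illustration:

```python
from math import comb

def binomial_p_value(k, n, p0=0.5):
    """One-sided p-value: the probability of seeing k or more
    successes in n trials if the null hypothesis rate p0 is true."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical trial: 60 of 100 patients improve; the null says the true rate is 50%.
p = binomial_p_value(60, 100)
if p < 0.05:
    print(f"p = {p:.4f} < 0.05: reject the null hypothesis")
else:
    print(f"p = {p:.4f} >= 0.05: cannot reject the null hypothesis")
```

Here p works out to roughly 0.03, so the chance explanation is deemed unlikely enough to reject the null hypothesis.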

If a clinical trial of a new medication fails to find significant results, you cannot say
absolutely that the drug doesn't work, because something may have gone wrong in data
collection, the sample population, confounding, or the interpretation of the results.
It is better to say "I could not reject the null hypothesis" and leave the door open for
another trial that may come up with different results.
Because research results are published in the literature and may end up guiding
practice, the threshold was deliberately made strict. Accordingly, if the probability that
the results are due to chance is 5% or more, the results are not significant, the p-value
is expressed as more than 0.05, and the null hypothesis cannot be rejected.
If that probability is less than 5%, the results are significant, the p-value is expressed
as less than 0.05, and the null hypothesis is rejected.
After completing a study, you are still not 100% certain; there remains a probability of error.
There are two types of errors:

Type I (alpha): You rejected the null hypothesis, but your result may still be wrong,
and in reality there was no statistically significant difference. (Mnemonic: type one,
you are number one, you did it, you rejected the null hypothesis.)

Type II (beta): You could not reject the null hypothesis, but your result may still be
wrong, and in reality there was a statistically significant difference.
Type II errors, in which potentially important positive findings are missed by chance,
are generally considered less harmful than type I errors, which report falsely positive
findings. Type II errors often occur because of inadequate study power.
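The meaning of the alpha level can be made concrete with a simulation. This is a sketch under an assumed scenario: many studies are run in a world where the null hypothesis is actually true (the drug does nothing), so every rejection is a type I error. The helper and all numbers are hypothetical:

```python
import random
from math import comb

random.seed(0)

def p_value(k, n, p0=0.5):
    """One-sided p-value under the null hypothesis (true rate p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Simulate many studies where the null is TRUE: the true improvement rate
# really is 50%, so any "significant" result is a false positive (type I error).
n_studies, n_patients = 10_000, 100
false_positives = 0
for _ in range(n_studies):
    k = sum(random.random() < 0.5 for _ in range(n_patients))
    if p_value(k, n_patients) < 0.05:
        false_positives += 1

rate = false_positives / n_studies
print(f"type I error rate ~ {rate:.3f}")  # close to, and at most, about 0.05
```

The observed false-positive rate hovers near the 5% threshold (slightly below it here, because the binomial distribution is discrete), which is exactly what alpha = 0.05 promises.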

Power: Power is the probability that a particular study will not make a type II error.
Power represents the ability of a statistical test to detect a specified difference or
effect. Power = 1 − β.
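Power can be estimated the same way, by simulating studies in a world where a real effect exists and counting how often the null hypothesis is correctly rejected. The scenario (true improvement rate 60% versus a null of 50%, 100 patients per study) is an assumption chosen for illustration:

```python
import random
from math import comb

random.seed(1)

def p_value(k, n, p0=0.5):
    """One-sided p-value under the null hypothesis (true rate p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical scenario: the TRUE improvement rate is 60%, the null says 50%.
# Power = fraction of simulated studies that correctly reject the null.
n_studies, n_patients, true_rate = 5_000, 100, 0.6
rejections = 0
for _ in range(n_studies):
    k = sum(random.random() < true_rate for _ in range(n_patients))
    if p_value(k, n_patients) < 0.05:
        rejections += 1

power = rejections / n_studies
print(f"estimated power ~ {power:.2f}; beta ~ {1 - power:.2f}")
```

With these numbers the study detects the real effect only some of the time; the misses are type II errors, and power = 1 − β. Increasing the number of patients per study raises the power.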