The violation of the homogeneity of variance assumption becomes particularly problematic when the sample sizes in the groups are different.
In such cases, what should you do if Levene's test comes out significant? SPSS offers two corrected versions of the F ratio, the Brown-Forsythe F and Welch's F, which you can rely upon.
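As a rough illustration outside SPSS, Levene's test is also available in Python via SciPy; passing center='median' gives the Brown-Forsythe variant of the variance test. The sample data here are made up purely for demonstration:

```python
from scipy import stats

# Hypothetical scores for three groups (made-up numbers)
g1 = [5, 3, 1, 4, 2]
g2 = [6, 6, 3, 7, 5]
g3 = [10, 5, 6, 9, 8]

# Classic Levene's test: deviations are centered on each group mean
stat, p = stats.levene(g1, g2, g3, center='mean')
print(f"Levene W = {stat:.3f}, p = {p:.3f}")

# Brown-Forsythe variant: centered on the median, more robust to non-normality
bf_stat, bf_p = stats.levene(g1, g2, g3, center='median')
print(f"Brown-Forsythe W = {bf_stat:.3f}, p = {bf_p:.3f}")
```

A significant result (small p) would suggest the homogeneity of variance assumption is violated.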
-
Normally distributed population distributions: Each sample is drawn from a population that is normally distributed.
When the group sizes are equal, F is fairly robust to violations of the normality assumption. BUT if the group sizes differ, skew and non-normality affect the accuracy of F and alter its power in unpredictable ways.
In such cases, you might employ a transformation to correct for non-normality and inequality of variances. Transforming the Y values to remedy non-normality often corrects heteroscedasticity (unequal variances) as well. Occasionally, both the X and Y variables are transformed.
A last resort would be to apply the Kruskal-Wallis test (a non-parametric alternative).
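Both remedies can be sketched in Python with SciPy. The data below are invented, right-skewed values used only to illustrate the two options (a log transform before a standard ANOVA, or the rank-based Kruskal-Wallis test):

```python
import numpy as np
from scipy import stats

# Made-up, right-skewed responses for three groups
g1 = [1.2, 1.5, 2.0, 8.5, 1.1]
g2 = [2.1, 2.4, 3.0, 9.9, 2.2]
g3 = [3.3, 3.1, 4.2, 12.0, 3.5]

# Option 1: a log transformation often reduces right skew and can
# stabilize variances before running a standard one-way ANOVA
log_groups = [np.log(g) for g in (g1, g2, g3)]
f_stat, f_p = stats.f_oneway(*log_groups)
print(f"ANOVA on log-transformed data: F = {f_stat:.3f}, p = {f_p:.3f}")

# Option 2: skip the transformation and use the non-parametric
# Kruskal-Wallis test, which works on ranks instead of raw values
h_stat, h_p = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {h_p:.3f}")
```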
-
Independent observations: Each observation within a group is assumed to be independent of all other
observations.
Violations of the independence assumption are very serious: if observations across groups are correlated, the Type I error rate is substantially inflated.
What Are Post Hoc Tests?
The F ratio tells us only whether the model fitted to the data accounts for more variation than extraneous factors; it doesn't tell us where the differences between groups lie.
SO a statistically significant F value means >> one or more of the differences between means are statistically significant, BUT which groups differ?
One easy solution that comes to mind is to run pairwise t-tests for all pairs, BUT this would inflate the Type I error: when we have completed running a set of comparisons among our group means, we arrive at a set (often called a family) of conclusions. For example, with three groups the family might consist of the statements μ1 = μ2, μ1 = μ3, and μ2 = μ3.
The probability that this family of conclusions will contain at least one Type I error is called the familywise error rate (FWER).
SO
We need a way to compare means without inflating the probability of making a Type I error:
i.e. comparing every group as if conducting several t-tests, but using a stricter acceptance criterion so that the familywise error rate does not rise above 0.05. This is exactly what post hoc tests do.
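To see how fast the familywise error rate grows: for k independent comparisons at per-test level α, FWER = 1 − (1 − α)^k. The snippet below computes this, and also shows the Bonferroni correction (used here only as the simplest illustrative adjustment; SPSS offers many others) which divides α by k:

```python
# Familywise error rate for k independent comparisons at per-test alpha
alpha = 0.05

for k in (1, 3, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} comparisons: FWER = {fwer:.3f}")
# With only 3 comparisons the FWER is already about 0.143, not 0.05.

# Bonferroni correction: test each comparison at alpha / k,
# which keeps the familywise error rate at or below alpha
k = 3
bonf_alpha = alpha / k
fwer_corrected = 1 - (1 - bonf_alpha) ** k
print(f"Bonferroni per-test alpha = {bonf_alpha:.4f}, FWER = {fwer_corrected:.4f}")
```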
REF: I heavily relied upon Discovering Statistics Using SPSS by Andy Field
A Working Example
(REF: Dr. Esra Mungan's Lecture Notes)
Suppose you have a moderately difficult Sudoku puzzle and you are interested in whether, and if so, how much, the level of background noise will affect your subjects' speed in solving the puzzle. Your data are listed below:
Table - Duration (in min) to solve a moderately difficult Sudoku puzzle as a function of background noise

silent    low noise    strong noise
5         6            10
3         6            5
1         3            6
Solutions to be given
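As a quick way to check your hand computation afterwards, the one-way ANOVA on these data can be run in Python with SciPy (a sketch outside SPSS, not a substitute for working through the solution):

```python
from scipy import stats

# Times (min) to solve the Sudoku puzzle under each noise condition
silent = [5, 3, 1]
low_noise = [6, 6, 3]
strong_noise = [10, 5, 6]

# One-way ANOVA: does mean solving time differ across noise levels?
f_stat, p_val = stats.f_oneway(silent, low_noise, strong_noise)
print(f"F(2, 6) = {f_stat:.3f}, p = {p_val:.3f}")
```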