
Lorelei Howard and Nick Wright

MfD 2008
t-tests, ANOVA and regression
- and their application to the statistical analysis of fMRI data


Overview

Why do we need statistics?
P values
T-tests
ANOVA

Why do we need statistics?
To enable us to test experimental hypotheses
H₀ = null hypothesis
H₁ = experimental hypothesis

In terms of fMRI
Null = no difference in brain activation between these
2 conditions
Exp = there is a difference in brain activation between
these 2 conditions

2 types of statistics
Descriptive Stats
e.g., mean and standard deviation (SD)

Inferential statistics
t-tests, ANOVAs and regression

Issues when making inferences
So how do we know whether the effect
observed in our sample was genuine?

We don't

Instead we use p values to indicate our
level of certainty that our results represent
a genuine effect present in the whole
population
P values
P values = the probability that the observed
result was obtained by chance
i.e. when the null hypothesis is true

The α level is set a priori (usually 0.05)

If p < α then we reject the null hypothesis
and accept the experimental hypothesis
i.e. there is less than a 5% probability that an effect this large would arise by chance alone
If, however, p > α then we fail to reject the
null hypothesis
Two types of errors
Type I error = false positive

An α level of 0.05 means that there is a 5% risk
that a type I error will be encountered

Type II error = false negative
t-tests
Compare two group means
Hypothetical experiment
[Figure: signal plotted over time for each condition]
Q: does viewing pictures of the Simpson and Griffin
families activate the same brain regions?
Condition 1 = Simpson family faces
Condition 2 = Griffin family faces
Calculating T
t = \frac{\bar{x}_1 - \bar{x}_2}{s_{\bar{x}_1 - \bar{x}_2}}, \quad
s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}

[Figure: overlapping score distributions for Group 1 and Group 2]
The difference between the means divided by the pooled
standard error of the mean
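As a minimal sketch, this formula can be computed directly with NumPy (the two response arrays here are made-up example data, not real results):

```python
import numpy as np

# Made-up responses for the two conditions
simpsons = np.array([2.1, 2.5, 1.9, 2.8, 2.3])
griffins = np.array([1.4, 1.7, 1.2, 1.9, 1.5])

# Difference between the means...
mean_diff = simpsons.mean() - griffins.mean()

# ...divided by the standard error of the difference
se = np.sqrt(simpsons.var(ddof=1) / simpsons.size
             + griffins.var(ddof=1) / griffins.size)

t = mean_diff / se
print(t)
```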
How do we apply this to fMRI
data analysis?
[Figure: a voxel's time course (x-axis: time)]
Degrees of freedom
= number of unconstrained data points
Which in this case = number of data points - 1

Can use the t value and df to find the
associated p value
Then compare to the α level
Different types of t-test
2-sample t-tests
Related = two samples related, i.e. same
people in both conditions
Independent = two independent samples, i.e.
different people in the 2 conditions

One sample t tests
compare the mean of one sample to a given
value
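A sketch of the three variants using SciPy's standard t-test functions (the arrays are the same kind of made-up data as above):

```python
import numpy as np
from scipy import stats

a = np.array([2.1, 2.5, 1.9, 2.8, 2.3])   # condition 1 (made-up)
b = np.array([1.4, 1.7, 1.2, 1.9, 1.5])   # condition 2 (made-up)

print(stats.ttest_rel(a, b))        # related: same people in both conditions
print(stats.ttest_ind(a, b))        # independent: different people per condition
print(stats.ttest_1samp(a, 0.0))    # one sample: compare a's mean to a given value
```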
Another approach to group differences
Analysis Of VAriance (ANOVA)
Variances not means
Multiple groups
e.g. Different facial expressions

H₀ = no differences between groups
H₁ = differences between groups
Calculating F
F = the between-group variance divided by
the within-group variance
i.e. F = model variance / error variance

For F to be significant, the between-group
variance should be considerably larger
than the within-group variance
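A sketch using SciPy's one-way ANOVA (the three groups are made-up scores, loosely standing in for three facial expressions):

```python
import numpy as np
from scipy import stats

# Made-up scores for three groups
happy = np.array([4.0, 5.1, 4.6, 5.3])
sad = np.array([6.2, 5.8, 6.5, 6.0])
neutral = np.array([4.9, 5.2, 5.0, 5.5])

# F = between-group variance / within-group variance
F, p = stats.f_oneway(happy, sad, neutral)
print(F, p)
```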
What can be concluded from a
significant ANOVA?

There is a significant difference between
the groups

NOT where this difference lies

Finding exactly where the differences lie
requires further statistical analyses
Different types of ANOVA
One-way ANOVA
One factor with more than 2 levels

Factorial ANOVAs
More than 1 factor

Mixed design ANOVAs
Some factors independent, others related

Conclusions
T-tests assess if two group means differ
significantly
Can compare two samples or one sample
to a given value
ANOVAs compare more than two groups
or more complicated scenarios
They use variances instead of means
Further reading
Howell, Statistical Methods for Psychology

Howitt and Cramer, An Introduction to Statistics in Psychology

Huettel, Functional Magnetic Resonance Imaging (especially chapter 12)
Acknowledgements
MfD slides 2005–2007
PART 2
Correlation
Regression
Relevance to GLM and SPM
Correlation
Strength and direction of the relationship
between variables
Scattergrams
[Figure: three scattergrams of y against x, showing positive correlation, negative correlation and no correlation]
Describe correlation: covariance
A statistic representing the degree to which 2
variables vary together

Covariance formula:

\mathrm{cov}(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n}

cf. variance formula:

s_x^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n}

but
the absolute value of cov(x,y) is also a function of the
standard deviations of x and y.
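A minimal NumPy sketch of the covariance formula (made-up data; note that np.cov divides by n − 1 unless told otherwise):

```python
import numpy as np

# Made-up example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1])

# cov(x, y) = sum of (x_i - x̄)(y_i - ȳ), divided by n
cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / len(x)

# NumPy's built-in agrees when told to divide by n (ddof=0)
print(np.isclose(cov_xy, np.cov(x, y, ddof=0)[0, 1]))
```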
Describe correlation: Pearson correlation coefficient (r)
Equation:

r_{xy} = \frac{\mathrm{cov}(x, y)}{s_x s_y} \quad (s = \text{standard deviation of the sample})

r = -1 (max. negative correlation); r = 0 (no constant
relationship); r = 1 (max. positive correlation)

Limitations:
Sensitive to extreme values, e.g.

r is an estimate from the sample, but does it
represent the population parameter?
Relationship not a prediction.
[Scatterplot: a single extreme value can dominate an otherwise weak relationship and inflate r]
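A sketch computing r from the covariance and standard deviations, checked against NumPy's built-in (same made-up data as before):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1])

cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / len(x)
r = cov_xy / (x.std() * y.std())   # r = cov(x, y) / (s_x * s_y)

# Matches NumPy's built-in correlation coefficient
print(np.isclose(r, np.corrcoef(x, y)[0, 1]))
```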
Summary
Correlation
Regression
Relevance to SPM
Regression
Regression: Prediction of one variable
from knowledge of one or more other
variables.
Regression v. correlation: Regression
allows you to predict one variable from the
other (not just say if there is an
association).
Linear regression aims to fit a straight line
to the data such that, for any value of x, it
gives the best prediction of y.
Best fit line, minimising sum
of squared errors
Describing the line as in GCSE maths: y = mx + c
Here: \hat{y} = bx + a
  \hat{y}: predicted value of y
  b: slope of regression line
  a: intercept
Residual error (\varepsilon): difference between obtained and predicted values of
y, i.e. \varepsilon = y - \hat{y}
Best fit line (values of b and a) is the one that minimises the sum of squared
errors: SS_{error} = \sum (y - \hat{y})^2
[Figure: scatterplot with fitted line \hat{y} = bx + a; the vertical distances from the observed points y_i to the line are the residuals \varepsilon]
How to minimise SS_error
Minimise \sum (y - \hat{y})^2, which is \sum (y - (bx + a))^2
Plotting SS_{error} for each possible
regression line gives a parabola.
The minimum SS_{error} is at the
bottom of the curve, where the
gradient is zero, and this can be
found with calculus.
Take partial derivatives of \sum (y - bx - a)^2
with respect to b and a, set them to
zero and solve as
simultaneous equations, giving:

Values of a and b:

b = \frac{r \, s_y}{s_x}, \qquad a = \bar{y} - b\bar{x}

[Figure: parabola of SS_{error} against candidate regression lines; the minimum, where the gradient = 0, gives the best fit]
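A sketch of these two equations in NumPy, checked against a general least-squares fit (made-up data again):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1])

r = np.corrcoef(x, y)[0, 1]
b = r * y.std() / x.std()      # slope:     b = r * s_y / s_x
a = y.mean() - b * x.mean()    # intercept: a = ȳ - b * x̄

# The same line as NumPy's general least-squares polynomial fit
print(np.allclose([b, a], np.polyfit(x, y, 1)))
```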
How good is the model?
We can calculate the regression line for any data, but how well does it fit
the data?

Total variance = predicted variance + error variance:

s_y^2 = s_{\hat{y}}^2 + s_{er}^2

Also, it can be shown that r^2 is the proportion of the variance in y that is
explained by our regression model:

r^2 = s_{\hat{y}}^2 / s_y^2

Insert r^2 s_y^2 for s_{\hat{y}}^2 into s_y^2 = s_{\hat{y}}^2 + s_{er}^2 and rearrange to get:

s_{er}^2 = s_y^2 (1 - r^2)
From this we can see that the greater the correlation the smaller the error
variance, so the better our prediction
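A quick numerical check of this identity (made-up data; np.var divides by n, consistently on both sides):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1])

b, a = np.polyfit(x, y, 1)
y_hat = b * x + a                 # predicted values

r2 = np.corrcoef(x, y)[0, 1] ** 2
s_y2 = np.var(y)                  # total variance
s_er2 = np.var(y - y_hat)         # error variance

# s_er² = s_y²(1 - r²)
print(np.isclose(s_er2, s_y2 * (1 - r2)))
```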
Is the model significant?
i.e. do we get a significantly better prediction of y
from our regression equation than by just
predicting the mean?

F-statistic:

F_{(df_{\hat{y}},\, df_{er})} = \frac{s_{\hat{y}}^2}{s_{er}^2} = \dots = \frac{r^2 (n - 2)}{1 - r^2}

(the middle steps involve some complicated rearranging)

And it follows that:

t_{(n-2)} = \frac{r \sqrt{n - 2}}{\sqrt{1 - r^2}}

So all we need to know are r and n!
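A sketch computing F, t and a p value from r and n alone (r and n are invented values; for simple regression the model df is 1):

```python
import numpy as np
from scipy import stats

r, n = 0.9, 20   # made-up correlation and sample size

F = r**2 * (n - 2) / (1 - r**2)
t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)

# p value from the F distribution with df = (1, n - 2)
p = stats.f.sf(F, 1, n - 2)
print(F, t**2, p)   # F equals t² for simple regression
```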
Summary
Correlation
Regression
Relevance to SPM
General Linear Model
Linear regression is actually a form of the
General Linear Model where the
parameters are b, the slope of the line,
and a, the intercept.
y = bx + a + \varepsilon
A General Linear Model is just any model
that describes the data in terms of a
straight line

One voxel: The GLM
[Figure: the voxel's time series Y equals the design matrix X times the parameter vector β, plus error e: Y = Xβ + e]
Our aim: solve the equation for β; β tells us how much of the BOLD signal is explained by X
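A minimal sketch of this for one simulated voxel (the block design, noise level and β values are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100

# Made-up design: one on/off block regressor plus a constant column
task = (np.arange(n_scans) % 20 < 10).astype(float)
X = np.column_stack([task, np.ones(n_scans)])

# Simulated BOLD time course for one voxel: Y = Xβ + e
beta_true = np.array([2.0, 10.0])
Y = X @ beta_true + rng.normal(scale=0.5, size=n_scans)

# Least-squares solution for β
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)   # close to [2.0, 10.0]
```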
Multiple regression
Multiple regression is used to determine the effect of a
number of independent variables, x_1, x_2, x_3 etc., on a
single dependent variable, y
The different x variables are combined in a linear way
and each has its own regression coefficient:

y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_n x_n + \varepsilon

The b parameters reflect the independent contribution of
each independent variable, x, to the value of the
dependent variable, y.
i.e. the amount of variance in y that is accounted for by
each x variable after all the other x variables have been
accounted for
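A minimal sketch with two invented regressors, fitting all the b coefficients at once by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Two made-up independent variables
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# y = b0 + b1*x1 + b2*x2 + ε, with invented true coefficients
y = 3.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(scale=0.3, size=n)

# Fit by least squares
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)   # close to [3.0, 1.5, -0.7]
```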
SPM
Linear regression is a GLM that models the effect of one
independent variable, x, on one dependent variable, y

Multiple Regression models the effect of several
independent variables, x_1, x_2 etc., on one dependent
variable, y

Both are types of General Linear Model

This is what SPM does and will be explained soon


Summary
Correlation
Regression
Relevance to SPM



Thanks!
