Factor Analysis

SPSS2, Seminar 3 (Friday week 4 or Tuesday week 5)

Aims of this week’s seminar:

This week we’ll be looking at factor analysis. This type of analysis is used to reduce a
larger number of manifest variables down to a smaller number of latent variables. In this
week’s seminar there are three examples for you to work through. The data files are, as
usual, located on the SPSS2 intranet site, to be found at
http://www.lifesci.sussex.ac.uk/teaching/index.php?id=909C8. The files are called
“factor analysis ex 1”, “factor analysis ex 2” and “factor analysis ex 3”. One note of
warning if you’re printing: factor analysis can produce loads of output.

Exercise one

This set of data gives the responses of children to a number of questions asking them
about school. We will run a factor analysis on this data to see if these variables can be
reduced to reveal a number of latent variables.
To run this analysis go to Analyse > Data reduction > Factor and the factor analysis dialog box will open. Select all of the variables and move them across to the “Variables” box. Next you need to select some of the options.

First click on “Descriptives” and select the options shown. Click “Continue” to get back to the main factor analysis dialog box.

Next click on “Extraction”. The options that you select here tell SPSS how to extract the factors from the variables you have entered into the analysis. Most of the options in this box should be set by default. Make sure that the selected method is “Principal components” and that you have selected the “Scree plot” option. Click “Continue” to get back to the main factor analysis dialog box.

Next click on “Rotation”. The extraction of factors can be improved upon by rotation, and here you tell SPSS which method of rotation to use (for more details about methods of rotation see Andy Field’s book, pp. 438-441, 449-451). In this box select the “Varimax” method of rotation and select the “Loading plots” option. Click “Continue” to get back to the main factor analysis dialog box.

Next click on “Scores”. By selecting options in this box, SPSS will save new variables in the data view that represent each participant’s performance on each of the extracted factors, or new latent variables. This is useful if you want to run any further analysis on these factor scores. Select “Save as variables” using the “Anderson-Rubin” method. Click

Dr Sam Knowles (skzk20@susx.ac.uk) 1

“Continue” to get back to the main factor analysis dialog box. Finally click on “Options”. The first option concerns how to deal with missing data; the best method to select is “Exclude cases pairwise”. Second, you can select options to make your final factor analysis solution easier to understand and interpret. Select “Sorted by size” and “Suppress absolute values less than”, making sure you change this value to 0.40. Click on “Continue” to get back to the main factor analysis dialog box and then “OK” to get the output from the analysis.
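To give a rough idea of what the “Scores” step produces, here is a minimal NumPy sketch of factor scores computed by the simpler regression (Thurstone) method on simulated data. It is only an illustration: the data, sample sizes and loadings are invented, and SPSS’s Anderson-Rubin method additionally rescales the scores so that they are exactly uncorrelated with mean 0 and standard deviation 1.

```python
# Sketch: what "saving factor scores as variables" does, using the simpler
# regression (Thurstone) method on simulated data. SPSS's Anderson-Rubin
# method further rescales the scores to be exactly uncorrelated; this is
# only an illustration of the idea, not SPSS's exact computation.
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 150, 6, 2                        # participants, items, factors
X = rng.normal(size=(n, k)) @ rng.uniform(0.5, 0.9, size=(p, k)).T \
    + 0.5 * rng.normal(size=(n, p))        # simulated questionnaire scores

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise the items
R = np.corrcoef(X, rowvar=False)           # item correlation matrix

# Principal-component loadings for the first k components
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:k]
L = eigvecs[:, order] * np.sqrt(eigvals[order])

# Regression-method factor scores: one new column per extracted factor,
# analogous to the new variables SPSS adds to the data view
scores = Z @ np.linalg.inv(R) @ L
print(scores.shape)                        # (150, 2): one score per participant per factor
```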

As mentioned earlier, you get lots of output from a factor analysis. Not all of the output will be mentioned here, just the most important sections for the interpretation of the analysis; some sections will be referred to but not reproduced in the handout. First you get a descriptive statistics table (not shown here). This tells you about participants’ performance on each of the variables. Next you get a massive table that gives you the correlation coefficients and significance levels for the correlations between each pair of variables (also not shown here). This table is important because you would expect some of the variables to be correlated if they represent the same underlying latent variable, although you don’t want all of the variables to be very highly correlated, as this would indicate singularity. Hidden at the bottom left of this table is the determinant statistic, which tests for the problem of singularity. You want this value to be greater than 0.00001. In this case it is 0.000017, so it can be assumed that there is no singularity in the data.

The next table gives the KMO and Bartlett’s statistics (shown below). Each of these assesses whether there are patterns of correlations in the data that indicate that factor analysis is suitable. The KMO ranges from 0 to 1, with higher values indicating greater suitability; ideally you want this value to be greater than 0.7. You also want Bartlett’s statistic to be significant. In this case the KMO is greater than 0.7 at 0.832 and Bartlett’s is significant [χ2(78) = 1030, p < 0.001], and therefore it seems that factor analysis is suitable for this data set.

KMO and Bartlett's Test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .832
Bartlett's Test of Sphericity: Approx. Chi-Square = 1030.136, df = 78, Sig. = .000

The next table gives you the communalities for each of the variables that you have entered into the analysis (not shown here). The communality given in the “Extraction” column represents the proportion of shared variance for each variable. So, for example, we can see that “I like my classmates” shares 80.1% of its variance with the other variables.
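These suitability checks can also be reproduced by hand. The sketch below (in Python with NumPy/SciPy, on simulated data rather than the school questionnaire) computes the determinant, Bartlett’s test of sphericity and the KMO statistic from an item correlation matrix, using the standard textbook formulae.

```python
# Sketch: assumption-checking statistics for factor analysis, computed by
# hand. The data matrix X (rows = participants, columns = items) is
# simulated here; in practice you would load your own scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 6
latent = rng.normal(size=(n, 2))                 # two underlying factors
loadings = rng.uniform(0.5, 0.9, size=(p, 2))
X = latent @ loadings.T + 0.5 * rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)                 # item correlation matrix

# Determinant: values above ~0.00001 suggest no singularity
det_R = np.linalg.det(R)

# Bartlett's test of sphericity: H0 is that R is an identity matrix
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(det_R)
df = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2, df)

# KMO measure of sampling adequacy: compares the correlations with the
# partial correlations (derived from the inverse of R); want > 0.7
inv_R = np.linalg.inv(R)
d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
partial = -inv_R / d                             # partial correlation matrix
off = ~np.eye(p, dtype=bool)                     # off-diagonal mask
kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())

print(f"determinant = {det_R:.6f}, KMO = {kmo:.3f}")
print(f"Bartlett: chi2({int(df)}) = {chi2:.1f}, p = {p_value:.4g}")
```

With a genuine factor structure in the simulated data, the determinant stays well above the 0.00001 cut-off and Bartlett’s test comes out highly significant, mirroring the pattern in the SPSS output above.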
Total Variance Explained

            Initial Eigenvalues         Extraction Sums of Sq. Loadings   Rotation Sums of Sq. Loadings
Component   Total   % of Var  Cum %     Total   % of Var  Cum %           Total   % of Var  Cum %
1           4.792   36.859    36.859    4.792   36.859    36.859          4.065   31.268    31.268
2           3.341   25.700    62.559    3.341   25.700    62.559          3.975   30.575    61.843
3           2.376   18.281    80.840    2.376   18.281    80.840          2.470   18.997    80.840
4            .419    3.220    84.059
5            .367    2.824    86.883
6            .326    2.509    89.393
7            .265    2.037    91.430
8            .241    1.851    93.281
9            .224    1.726    95.008
10           .209    1.607    96.614
11           .185    1.422    98.037
12           .137    1.054    99.091
13           .118     .909   100.000
Extraction Method: Principal Component Analysis.

Next is the “Total Variance Explained” table (shown above). Initially the factor analysis
extracts as many factors as there are variables, however, when running the analysis, you
told it to only extract factors that had eigenvalues above 1. From looking at the
“Extraction Sums of Squared Loadings” you can see that only three factors have been
extracted with eigenvalues over one, with the fourth factor having an eigenvalue of only
0.419. Remember also that we told SPSS to use a Varimax rotation to improve the
extraction of factors. The values for each factor after rotation are given in the
“Rotation Sums of Squared Loadings” columns. This section of the table will be used for the next
section of interpretation. We know that this analysis has extracted three factors, and here
we can see that factor one has an eigenvalue of 4.065 and accounts for 31.27% of the
variance, factor two has an eigenvalue of 3.975 and accounts for 30.58% of the variance
and factor three has an eigenvalue of 2.470 and accounts for 19% of the variance. In total
the three factors account for 80.84% of the variance in the questionnaire and therefore
seem to be a good representation of the original data set. The scree plot (not shown here)
can also be used to confirm that three factors have been extracted. In this graph each
point represents one of the factors, plotted along the X-axis, with its eigenvalue plotted
up the Y-axis. From this you can see that three factors have eigenvalues over one.
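The eigenvalue extraction and Kaiser criterion (keep factors with eigenvalues over 1) that SPSS applies here can be sketched by hand. The example below is a Python/NumPy illustration on simulated data with three built-in factors, not the seminar data set; it reproduces the logic of the “Total Variance Explained” table and the communalities.

```python
# Sketch: extracting principal components by hand and applying the Kaiser
# criterion (keep eigenvalues > 1), mirroring SPSS's "Total Variance
# Explained" table. The data are simulated; substitute your own matrix X.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 8
latent = rng.normal(size=(n, 3))                 # three underlying factors
loadings_true = np.zeros((p, 3))
loadings_true[:3, 0] = loadings_true[3:6, 1] = loadings_true[6:, 2] = 0.8
X = latent @ loadings_true.T + 0.4 * rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)             # ascending order
order = np.argsort(eigvals)[::-1]                # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pct_variance = 100 * eigvals / p                 # eigenvalues of a correlation matrix sum to p
keep = eigvals > 1                               # Kaiser criterion
print("eigenvalues:     ", np.round(eigvals, 3))
print("% of variance:   ", np.round(pct_variance, 2))
print("factors retained:", keep.sum())

# Unrotated loadings for the retained components; the squared row sums
# give the communalities that SPSS reports in its "Extraction" column
L = eigvecs[:, keep] * np.sqrt(eigvals[keep])
communalities = (L ** 2).sum(axis=1)
```

Plotting `eigvals` against component number gives exactly the scree plot SPSS draws: a sharp drop after the retained factors, then a flat “scree” of small eigenvalues.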

Next you are given the component matrix (not shown here). This tells you how much each manifest variable loads onto each of the three latent variables before rotation. Every variable loads onto each of the three factors, but remember that we told SPSS to suppress loadings less than 0.40 when running the analysis; the blanks are therefore actually small loadings.

Next is the rotated component matrix (shown below), which gives the same information but after rotation. This is the table that tells you which variables map onto which factors, with the variables sorted by the size of their loadings. From this matrix we can see that factor one includes five variables, as does factor two, whereas factor three comprises only three variables. Try to look at the questions in each factor and see if you can give each one a name. Are there questions placed into a factor that don’t seem to belong there? In this analysis each of the variables falls neatly into a separate factor. Sometimes SPSS will place a variable into more than one factor; if this occurs, the variable belongs in the factor on which it has the highest loading.

[Rotated Component Matrix: loadings of the thirteen items (including “my parents say I have to go to school” .905, “My parent(s) have meet my teachers” .887, “maths is enjoyable” .890, “Science is neat” .890, “I like my classmates” .885, “the toilets are clean” .887) on components 1–3, sorted by size, with loadings below 0.40 suppressed. Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Rotation converged in 4 iterations.]

Finally, take a look back at your data view and you will see that three new variables have appeared. These represent the factor scores for each of the three factors. These could be used for further analysis if you wished to do so.
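For the curious, the Varimax rotation used above can be sketched in a few lines of NumPy. This is a minimal version without the Kaiser normalisation SPSS applies, run on an invented toy loading matrix, so it illustrates the idea rather than reproducing SPSS’s numbers. A key property to notice: rotation redistributes variance among the factors but leaves each item’s communality unchanged.

```python
# Sketch: a minimal varimax rotation (without Kaiser normalisation) of the
# kind SPSS applies when you select "Varimax". Orthogonal rotation changes
# the loadings but preserves each item's communality (squared row sums).
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Rotate a loading matrix L (items x factors) towards the varimax criterion."""
    p, k = L.shape
    Rot = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        Lr = L @ Rot
        # SVD step of the standard iterative varimax algorithm
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr * (Lr ** 2).sum(axis=0) / p)
        )
        Rot = u @ vt
        if s.sum() - var_old < tol:              # converged
            break
        var_old = s.sum()
    return L @ Rot

# Toy unrotated loadings: four items, two factors (invented numbers)
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.3, 0.8], [0.2, 0.9]])
Lr = varimax(L)

# Communalities before and after rotation are identical
print(np.round((L ** 2).sum(axis=1), 4))
print(np.round((Lr ** 2).sum(axis=1), 4))
```

After rotation each item tends to load strongly on one factor and weakly on the others, which is exactly why the rotated component matrix is so much easier to name and interpret than the unrotated one.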

Exercise two

This file contains data taken from a set of items measuring dissociation or spaciness (see
Wright and Loftus (1999). Measuring dissociation: comparison of alternative forms of
the dissociative experiences scale. American Journal of Psychology 112(4): 497-519). These
measures are usually taken to represent one overall measure of dissociation. Run a factor
analysis on this data. How many factors do there seem to be?

Exercise three

This file gives data from a questionnaire given to teachers in Australia and China asking
them about various aspects of their job and how stressful they find it. Run a factor
analysis on this data, making sure you do NOT include the “location” and “teach_no”
variables. How many factors come out of this analysis? What do you think they might
represent, given traditional ideological differences between the two nations? Now run
t-tests on the saved factor scores to see if there are differences between teachers from
China and Australia.

Next week’s seminar

Next week we will be returning to analysis of variance. Last term we ran two forms of
ANOVA: one-way independent measures and one-way repeated measures ANOVA. In
the next seminar we will be looking at more complicated forms of ANOVA such as
analysis of covariance (ANCOVA) and two-way ANOVAs in which two independent
variables can be analysed.
