
Scandinavian Journal of Educational Research, Vol. 54, No. 3, June 2010, 239–261

Validation of Approaches to Studying Inventories in a Norwegian Context: In Search of Quick-and-Easy and Short Versions of the ASI
Roar C. Pettersen
Østfold University College
roar.c.pettersen@hiof.no

Four validation studies of different versions of the Approaches to Studying Inventory (ASI) were performed by means of exploratory and confirmatory factor analysis. Six samples of undergraduate students participated (N = 4,038), drawn from different bachelor programs in selected university college departments. The results gave limited support to the 18-item ASI originally promoted by Gibbs et al. in 1988, whereas a modified version of the 32-item edition showed acceptable psychometric properties, as did two shortened versions of the Approaches and Study Skills Inventory for Students (ASSIST). A restructured quick-and-easy 18-item ASI emerged as reasonably valid and robust, and can be recommended as a research instrument and a practical evaluation tool in higher education. Considerations related to shortening the ASI are also discussed.

Keywords: higher education, student learning, Approaches to Studying Inventory, ASI, Student Approaches to Learning, SAL

Introduction

The Approaches to Studying Inventory (ASI), originally developed for research purposes (Entwistle & Ramsden, 1983), contains 64 items, grouped into four main categories, that describe variations in the ways students approach and handle academic tasks in their day-to-day studying. The need for shorter versions of the inventory that could assess and monitor students' approaches to studying, and provide a basis for deciding on instructional interventions aimed at helping students develop more proficient and powerful learning patterns, was soon recognized. Various short ASI editions are presently used as an element in a range of quality assurance programs and as an adjunct instrument in the evaluation of teaching and learning environments (Lonka, Olkinuora, & Mäkinen, 2004; Mattick, Dennis, & Bligh, 2004). Brevity matters for practical use in classroom contexts because, as Entwistle and McCune (2004, p. 340) point out, "the longer the inventory, the less care students may take in completing it and the less likely it is that staff will use it." Access to brief, reliable and robust inventories is also critical to ensure satisfactory response rates in large research surveys, for instance, when the ASI is used in combination with other questionnaires, such

Roar C. Pettersen, Department of Health and Social Studies, Østfold University College. The article has benefited greatly from the comments of anonymous referees on previous drafts. Feedback from my colleague Per Lauvås is also greatly appreciated. Correspondence concerning this article should be addressed to Roar C. Pettersen, Department of Health and Social Studies, Østfold University College, 1757 Halden, Norway. E-mail: roar.c.pettersen@hiof.no

as the Course Experience Questionnaire (CEQ) or the Assessment Experience Questionnaire (Gibbs & Dunbar-Goddet, 2007; Lizzio, Wilson, & Simons, 2002; Wilson, Lizzio, & Ramsden, 1997). The present study compared several versions of the ASI in order to validate these instruments. The main focus was on shorter versions, especially so-called "quick-and-easy" versions with only 18 items.

The Student Approaches to Learning (SAL) Tradition

The ASI, developed within the Student Approaches to Learning (SAL) tradition, is probably the most widely used instrument for researching student learning in applied settings. The SAL perspective, primarily developed in Europe and Australia, and the information processing (IP) tradition represent two of the main strands of research on student learning over the last few decades. The IP perspective is mainly rooted in the US context of educational research; both traditions are influenced by the cognitive revolution of the 1970s but are founded on somewhat different theoretical sources. The SAL framework is usually depicted as a bottom-up, practice-based model derived from naturalistic experiments and in-depth interviews with students about learning and motivation in real educational contexts (Biggs, 1993, 2001; Marton & Säljö, 2005). Conversely, the IP perspective represents a top-down model grounded in core concepts from cognitive and self-regulated learning (SRL) theories. The model focuses primarily on learning strategies and on meta-cognitive and self-regulation skills conceived as cognitive processes within the individual learner (cf. Boekaerts, 1999; Pintrich, 2004; Weinstein, Husman, & Dierking, 2000). More recently, attempts have been made to merge ideas and concepts derived from both the SAL and IP/SRL perspectives (Entwistle & McCune, 2004; Lonka et al., 2004).

Surface and deep approaches to learning (Marton & Säljö, 2005) are the core concepts in most ASI editions, together with a strategic or achieving approach (Entwistle & Ramsden, 1983). In short, students adopting a deep approach intend to maximize meaning by relating what is being learnt to previous knowledge and their own experience in an active and critical manner. Students applying a surface approach will primarily direct their attention toward the learning materials by memorizing facts and ideas and applying rote learning strategies. The strategic approach relates to students' intention to seek high grades by various means, trying to optimize success in assessment by using well-organized study methods and ensuring effective time management, often with an element of cue seeking with the aim of getting through with minimal effort (cf. Miller & Parlett, 1974). The concepts refer essentially to students' learning intentions and motives. Approaches to learning in this sense evolve via students' personal perceptions of and responses to the actual teaching and learning environment. The basic design of most ASI versions comprises the learning intentions and associated strategies given in Table 1 (adapted from Entwistle, McCune, & Walker, 2001; cf. Coffield, Moseley, Hall, & Ecclestone, 2004). A wide range of previous inventory-based studies have successfully identified the deep–surface dichotomy as two distinct and global factors and as contrasting and incompatible ways of cognitive processing (learning skills/strategies).
However, there has been some controversy regarding the integrity and conceptual status of the strategic dimension, partly because a number of validation studies have failed to reproduce fully the strategic approach



Table 1 Approaches to Learning and Studying: An Overview

Deep approach
Intention: Seeking meaning in order to understand ideas for yourself, by:
- Relating ideas to previous knowledge and experience
- Looking for patterns and underlying principles
- Checking evidence and relating it to conclusions
- Examining logic and argument cautiously and critically
- Being aware of understanding developing while learning
- Becoming actively interested in the course content

Surface approach
Intention: Reproducing in order to cope with course requirements, by:
- Treating the course as unrelated bits of knowledge
- Memorising facts and carrying out procedures routinely
- Finding difficulty in making sense of new ideas presented
- Seeing little value or meaning in either courses or tasks set
- Studying without reflecting on either purpose or strategy
- Feeling undue pressure and worry about work

Strategic approach
Intention: Reflective organising to achieve the highest possible grades, by:
- Putting consistent effort into studying
- Managing time and effort effectively
- Finding the right conditions and materials for studying
- Monitoring the effectiveness of ways of studying
- Being alert to assessment requirements and criteria
- Gearing work to the perceived preferences of lecturers

as a consistent and universal factor in learning and studying. The ambiguity may also be rooted in the strategic dimension's primary focus on motivational processes (the will to learn). Consequently, a deep as well as a surface approach will go together with a strategic approach, both theoretically and practically (Biggs, Kember, & Leung, 2001; Kember & Leung, 1998; Meyer & Parsons, 1989; Richardson, 1990, 1994, 2000). However, the strategic approach is generally considered crucial because a deep approach on its own "may not be carried through with sufficient determination and effort to reach deep levels of understanding" (Entwistle et al., 2001, p. 108). In terms of study persistence, the strategic dimension may represent a systematic orientation to learning, as "the extra ingredient in studying that helps deep-oriented students to proceed towards graduation" (Lonka et al., 2004, p. 308). In this sense, a deep approach represents a necessary but not sufficient condition for effective studying.

Shortening of Inventories: Some Considerations

When inventories are shortened, the framing of individual items will change. Therefore, it should not be taken for granted that the psychometric status of the items will be maintained in a new inventory context, as each item's contribution to validity and reliability may change (Richardson, 2000, p. 110). The mere shortening of scales may also result in a loss of reliability (cf. the Spearman-Brown formula, by which reducing a scale to a fraction n of its original length lowers the predicted reliability from ρ to nρ/(1 + (n − 1)ρ)). The number of dimensions (subscales and scales) required to cover conceptually the main elements of studying should also be considered (Entwistle & McCune, 2004), since the shortening process unavoidably leads to limited coverage of the variations in student learning compared to the original inventory. From a practical point of view, one could ask how short the inventory can be. It is suggested here that inventories with 30–36 items may be considered short when used alone

as a research tool or evaluation instrument. Shorter inventories labeled "quick-and-easy", with 18–24 items, may primarily be convenient for practical classroom use and for research purposes in combination with other questionnaires. Several short versions of the ASI have been developed over the years, for example: (1) a 30-item version based on the original questionnaire presented by Entwistle (1981); (2) an even shorter version with 18 items advocated by Gibbs, Habeshaw, and Habeshaw (1988); (3) a 32-item version recommended by Richardson (1990, 2000) based on the most valid and reliable ASI subscales, comprising only the deep–surface dichotomy; and (4) a 30-item version of the Revised Approaches to Studying Inventory (RASI) (cf. Tait, Entwistle, & McCune, 1998). Duff (1997, 2003, 2004) recommended both the 30-item and an alternative 44-item version of the RASI as psychometrically robust instruments.

The ASI in the Context of Norwegian Higher Education

The SAL paradigm is well integrated into Norwegian research on teaching and learning in higher education. However, interest has been remarkably modest when it comes to applying ASI inventories for practical as well as research purposes. Strømme (1998) advocated a translated version of the 18-item inventory (Gibbs, 1992) for practical purposes but did not present any validation data. Nor has Richardson's abbreviated 32-item version, put forward by Gibbs (1992) as a suitable alternative, yet been validated in the Norwegian context. Diseth's (2001) validation of the Approaches and Study Skills Inventory for Students (ASSIST) and an abbreviated 24-item version of the ASSIST (Diseth, Pallesen, Hovland, & Larsen, 2006) seem to represent the validation efforts in the Norwegian higher education context.

Main Purpose and Point of Departure

The general principle constituting a common point of departure is that research instruments should be validated in each new context where they are to be used (Richardson, 2000, 2004). The rule applies in particular to the ASI, which is generally conceived as a context-dependent instrument. Students approach their learning tasks with considerable flexibility depending on how they perceive the actual teaching environment and assessment characteristics; i.e., the contextual impact on approaches is mediated by students' perceptions. Then again, students are also to a considerable degree consistent in their everyday studying; approaches to learning seem also to contain stylistic qualities. Hence, approaches seem to comprise elements of both contextual variability and individual stability, so they are both context- and student-dependent (Biggs, 2001).

Validation of the 18-item, quick-and-easy Norwegian version of the ASI recommended by Gibbs et al. (1988) was my starting point (Study 1). In case this version should prove inadequate, as indicated by Newstead (1992) and Richardson (1992), the ambition was to compose a more psychometrically sound alternative within the same 18-item format. Validation of a translated version of the short ASI advocated by Richardson (1990) was my second point of departure (Study 2). Third, to compose an alternative 18-item version, I needed an extended pool of relevant and validated ASI items, as Richardson's version only covers the deep–surface dimension. The idea was that validation of the ASSIST would provide an appropriate pool of items and would concurrently fulfill the wish to cross-validate the full ASSIST with reference to Diseth's (2001) study. The aim

was also to investigate the psychometric properties of shortened versions of the ASSIST (Study 3). Finally, the composition and validation of an alternative quick-and-easy inventory based on selected items from the original ASI and the ASSIST represented the ultimate purpose (Study 4).

For whatever reason (stylistic qualities or cross-cultural similarity), the ASI seems to be a fairly portable research and evaluation tool. Several studies have indicated that various versions are rather consistent across contrasting systems and cultures in higher educational settings, at least within most Western cultures. Thus, it was expected that the overall psychometric features emerging in the Norwegian studies would not deviate substantially from results previously obtained in the UK context (Entwistle, Tait, & McCune, 2000; Richardson, 1990, 2005; Tait et al., 1998).

Methods

Context and Participants

University college students enrolled in three-year professional bachelor programs participated in the studies. The total sample included first-, second- and third-year students studying social work, nursing and biomedical laboratory science, totaling 4,038 respondents. The study programs in the departments concerned were designed and delivered in accordance with problem-based learning principles. The respondents were in this sense drawn from an educational culture sharing essential curricular and contextual features. However, when it comes to the design of the specific courses and programs, there are variations, as Abrant Dahlgren (2000) pointed out. Even when educators and teachers refer to problem-based learning as a shared curricular framework, the actual educational practices will often surface as different portraits of problem-based learning.

Materials and Procedure

The ASI versions used in the four studies have several features in common, since both the 18- and the 32-item versions are adapted from the original ASI. In addition, more than half of the 52 items comprising the revised ASI, the ASSIST, are reiterations of original ASI items; some are paraphrased, while others are repeated verbatim. In all versions, students responded on a 5-point Likert scale to indicate the extent of agreement or disagreement with each item. The ASSIST uses 5 as the maximum score (totally agree) and 1 as the minimum (totally disagree), while the ASI scales range from 4 to 0. Table 2 gives an overview of the main structure of the ASI and the ASSIST. Each subscale has four items unless otherwise indicated. The fourth original ASI dimension, addressing styles and pathologies of learning derived from the work of Pask (1976), is either excluded or re-conceptualized in most revised ASI versions. Table 2 does not include the most recent ASI revision, the Approaches to Learning and Studying Inventory (ALSI), developed within the ETL project (Enhancing Teaching-Learning Environments in Undergraduate Courses; see www.etl.tla.ed.ac.uk; cf. Entwistle & McCune, 2004). The ALSI comes in two formats, a 36-item and an 18-item version. External validation studies of these inventories are limited, however; for an evaluation of the 18-item version, see Mattick et al. (2004).
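To make the scoring procedure concrete, the sketch below (Python; offered only as an illustration, since all analyses reported here were run in SPSS and AMOS) sums 1-5 Likert responses into subscale scores and computes Cronbach's alpha, the internal consistency coefficient reported for every version below. The item-to-subscale mapping and column names are hypothetical, as item numbering differs between the ASI and ASSIST versions.

```python
import numpy as np
import pandas as pd

# Hypothetical mapping of questionnaire columns to ASSIST-style subscales;
# the real item numbering differs between the ASI and ASSIST versions.
SUBSCALES = {
    "seeking_meaning":      ["q04", "q17", "q30", "q43"],
    "unrelated_memorising": ["q02", "q15", "q28", "q41"],
    "time_management":      ["q09", "q22", "q35", "q48"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum 1-5 Likert responses into subscale scores, as in the ASSIST."""
    return pd.DataFrame(
        {name: responses[cols].sum(axis=1) for name, cols in SUBSCALES.items()}
    )

# Example with simulated data: 100 respondents answering on a 1-5 scale.
rng = np.random.default_rng(0)
all_items = [col for cols in SUBSCALES.values() for col in cols]
data = pd.DataFrame(rng.integers(1, 6, size=(100, len(all_items))), columns=all_items)

print(score_subscales(data).describe())
print({name: round(cronbach_alpha(data[cols]), 2) for name, cols in SUBSCALES.items()})
```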

Table 2 The Structure of the ASI and the ASSIST

ASI
  Scale: Meaning orientation
    Subscales: Deep Approach; Relating Ideas; Use of Evidence; Intrinsic Motivation
  Scale: Reproducing orientation
    Subscales: Surface Approach (6 items); Syllabus-Boundness (3 items); Fear of Failure (3 items); Extrinsic Motivation
  Scale: Achieving orientation
    Subscales: Disorganised Study Methods**; Achievement Motivation; Strategic Approach; Negative Attitudes to Studying
  Scale: Styles and pathologies of learning
    Subscales: Comprehension Learning†; Globetrotting; Operational Learning; Improvidence‡

ASSIST
  Scale: Deep Approach
    Subscales: Seeking Meaning; Relating Ideas; Use of Evidence; Interest in Ideas
  Scale: Surface Approach
    Subscales: Unrelated Memorising; Syllabus-Boundness; Fear of Failure; Lack of Purpose*
  Scale: Strategic Approach
    Subscales: Organised Studying; Achieving; Time Management***; Alertness to Assessment Demands; Monitoring Effectiveness

Notes. *This subscale contains items from the Negative Attitudes to Studying (achieving) subscale of the ASI. **Reversed scoring. ***Some of the items from the Disorganised Study Methods subscale are integrated in Time Management. †This subscale is integrated as one of the subscales in the Meaning orientation scale in the ASI 32; some of its items are integrated in the Relating Ideas subscale in the ASSIST. ‡This subscale is integrated as one of the subscales in the Reproducing orientation scale in the ASI 32.

Translation of the inventories was done by the author using a standard back-translation procedure to ensure that the meaning content of the individual items would match the English versions. The students completed the ASI/ASSIST sheets at the end of a lecture or tutorial under the author's direction, or as part of the department's annual student evaluation of the teaching–learning environment, where the learning questionnaire was part of a more extensive evaluation package. Exploratory factor analysis (EFA; SPSS 16.0, 2007) was used in all studies as the major validation approach. Fit indices are given as additional information obtained by confirmatory factor analysis using AMOS (included in the SPSS package). AMOS provides several indices to assess the fit of a given model to the data, including chi-square (χ²) with associated degrees of freedom (df) and probability level (p), the comparative fit index (CFI) and the root mean square error of approximation (RMSEA) (cf. Arbuckle, 1999; Blunch, 2008; Byrne, 2001; Costello & Osborne, 2005; Osborne & Costello, 2004; Hair, Black, Babin, & Anderson, 2006; Schumacker & Lomax, 2004).
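To make these analytic steps concrete, the sketch below (Python; an illustration only, as the reported analyses were run in SPSS and AMOS) shows two of the quantities referred to throughout the four studies: the eigenvalues of the item correlation matrix that underlie the Kaiser (eigenvalue > 1) criterion and the scree test, and an RMSEA value computed from a chi-square statistic with its degrees of freedom and sample size. The item data are simulated, and the `rmsea` helper assumes the conventional discrepancy-per-degree-of-freedom formula.

```python
import numpy as np
import pandas as pd

def eigenvalue_diagnostics(responses: pd.DataFrame) -> pd.DataFrame:
    """Eigenvalues of the item correlation matrix: the basis for the
    Kaiser (eigenvalue > 1) criterion and for inspecting the scree."""
    corr = responses.corr().to_numpy()
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return pd.DataFrame({
        "factor": np.arange(1, len(eig) + 1),
        "eigenvalue": eig,
        "retain_by_kaiser": eig > 1.0,
        "cumulative_variance_pct": 100 * np.cumsum(eig) / len(eig),
    })

def rmsea(chi_square: float, df: int, n: int) -> float:
    """RMSEA from chi-square, df and sample size, using the conventional
    formula sqrt(max(chi2 - df, 0) / (df * (N - 1)))."""
    return float(np.sqrt(max(chi_square - df, 0.0) / (df * (n - 1))))

# Simulated 1-5 Likert responses for an 18-item inventory (400 respondents).
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(400, 18)),
                     columns=[f"item{i:02d}" for i in range(1, 19)])
print(eigenvalue_diagnostics(items).head(6))

# Reproduces the RMSEA of .062 reported for the ASSIST three-factor model
# in Study 3 (chi-square = 91.97, df = 41, n = 328).
print(round(rmsea(91.97, 41, 328), 3))
```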

Analysis and Results

Study 1: The 18-Item ASI

The theoretical structure of the 18-item ASI comprises three study orientations, achieving (or strategic), meaning, and reproducing, with six items defining each scale. A total of 426 students (72% females) from eight different classes completed the inventory during lectures in a problem-based module focusing on learning and educational theory during the academic years 2000–2002. The mean age was 27.9 years. Table 3 gives the factor solution of the present study together with significant loadings derived from Newstead's (1992) and Richardson's (1992) studies for comparison, the latter resulting in a four-factor solution. The eigenvalue > 1 criterion (Kaiser, 1960) initially produced six factors, while the scree test (Cattell, 1966) indicated a three- or four-factor
Table 3 ASI 18 Factor Loadings and Values for the Present and Previous Validation Studies
Extracted factors Sub-scales Orientation Strategic (1) Strategic (2) Strategic (3) Strategic (4) Strategic (5) Strategic (6) Reproducing (1) Reproducing (2) Reproducing (3) Reproducing (4) Reproducing (5) Reproducing (6) Meaning (1) Meaning (2) Meaning (3) Meaning (4) Meaning (5) Meaning (6) DSM1 AM2 ST2 DSM4 ST3 AM3 SB1 SA3 EM4 SA6 SB3 SA2 DA3 IM1 DA1 DA2 IM3 IM4 PS .64 .38 .61 .70 Factor 1 New .66 .33 .39 .56 .64 .36 .27 .53 .27 .33 .54 .53 .53 .61 .40 .50 .46 .46 .57 .59 .32 .54 .32 .55 .61 .67 Meaning .59 .61 .50 .21 .46 .21 .28 .52 .70 Rich .67 .29 .75 .42 .27 .36 .52 .24 .23 .20 .30 .28 .30 .58 .40 .66 .52 .58 .51 .24 .42 .43 .33 .69 .43 .58 .56 .34 .44 .41 .40 .62 .24 .34 .25 PS Factor 2 New Rich PS Factor 3 New Rich Factor 4 Rich

.25 .58 .32

.38 .36

.23

.21 Reproducing .38 .44 .50 n.a. n.a. .50

Reliability, Cronbach's alpha: Present study / Newstead (1992) / Richardson (1992)

Strategic .49 .50 .50

Notes. n = 426; 36.2% variance explained. Factor loadings less than .20 are omitted in Table 3. n.a. = not applicable. DSM = Disorganised Study Methods; AM = Achievement Motivation; ST = Strategic Approach; SB = Syllabus-Boundness; SA = Surface Approach; EM = Extrinsic Motivation; DA = Deep Approach; IM = Intrinsic Motivation.

solution. Neither orthogonal nor oblique rotation of the four-factor solution produced interpretable matrices, thus supporting a three-factor solution. Oblique rotation could be considered appropriate due to correlated factors, but since alternative methods did not display any substantial differences, orthogonal rotation was chosen to make the comparison with Newstead's (1992) study more transparent. Table 3 shows in bold the significant factor loadings matching the constituent structure of the ASI 18. The chosen cut-off point for significance is loadings ≥ .40. The constituent scales of the ASI may be identified as follows when the ≥ .40 criterion is applied together with the constraint that differences of possible cross loadings should not exceed .20 (Osborne & Costello, 2004); a sketch applying this rule follows the list below:

- Factor 1 identifies strategic orientation by displaying significant loadings on three of the defining items.
- Factor 2 identifies meaning orientation due to significant loadings on four relevant items.
- Factor 3 identifies the reproducing orientation by showing significant loadings on three relevant items.
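A minimal sketch of how such a classification rule can be applied is given below (Python; the loading values are hypothetical). It reads the criterion as requiring an item's largest loading to reach .40 and to exceed its second-largest loading by at least .20, which is one plausible reading of the guideline cited from Osborne and Costello (2004).

```python
import pandas as pd

def classify_items(loadings: pd.DataFrame, cutoff: float = 0.40,
                   min_gap: float = 0.20) -> pd.Series:
    """Assign each item (row) to a factor (column) only if its largest loading
    reaches the cut-off and exceeds the second-largest loading by `min_gap`."""
    def classify(row: pd.Series) -> str:
        ranked = row.abs().sort_values(ascending=False)
        primary, secondary = ranked.iloc[0], ranked.iloc[1]
        if primary >= cutoff and (primary - secondary) >= min_gap:
            return ranked.index[0]
        return "unclassified (weak or cross-loading)"
    return loadings.apply(classify, axis=1)

# Hypothetical loadings for three ASI 18 items on the three extracted factors.
example = pd.DataFrame(
    {"Factor 1": [0.66, 0.10, 0.39],
     "Factor 2": [0.12, 0.61, 0.05],
     "Factor 3": [0.08, 0.15, 0.36]},
    index=["DSM1", "DA3", "ST2"],
)
print(classify_items(example))
# DSM1 -> Factor 1, DA3 -> Factor 2, ST2 -> unclassified (below cut-off)
```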

In sum, only 10 of the items load exclusively on factors in line with the intended structure. The remaining 8 show a more erratic and incoherent pattern, of which 2 items display substantial factor loadings on unintended factors (ST2, SA3). The internal consistency values for all three scales are low, especially for the reproducing scale, α = .38. Taken together with the incomplete replication of the intended factor structure, this quick-and-easy version of the ASI is considered too rough from a psychometric point of view, and a more sophisticated version is needed (cf. Newstead, 1992; Richardson, 2000). Fit indices obtained by confirmatory factor analysis (AMOS) support this conclusion. The comparative fit index (CFI) is .615 but should exceed .92 to indicate a good fit to the data, and likewise, the value of the root mean square error of approximation (RMSEA) is .080, whereas values ≤ .060 are required to indicate a good fit by general guidelines (cf. Blunch, 2008; Byrne, 2001; Hair et al., 2006, p. 753; Schumacker & Lomax, 2004).

Study 2: The 32-Item ASI

Richardson (1990, 2000) suggested an alternative strategy for shortening the original ASI by focusing exclusively on the subscales that had previously been empirically identified with the main orientations, meaning and reproducing. Eight subscales comprise the ASI 32, of which four define Meaning Orientation: Deep Approach, Interrelating Ideas, Use of Evidence, and Comprehension Learning (4 items each). The latter is linked to styles and pathologies of learning in the original ASI (cf. Table 2). The Reproducing Orientation scale contains Surface Approach (6 items), Fear of Failure (3 items), Syllabus-Boundness (3 items), and Improvidence (4 items); the latter is also drawn from the styles and pathologies orientation. In the original validation study of the ASI 32, Richardson (1990) argued that most studies on the conceptual validity of the ASI had focused exclusively on students' subscale scores to identify the major learning orientations. Thus, the empirical integrity of the subscales is more or less taken for granted. Hence, Richardson (1990) chose a validation strategy based on item scores to check the match between extracted factors and the constituent subscales.

Subsequently, data on the subscale level were submitted to factor analysis to document the presence of the basic two-scale design of the ASI 32. The present study uses a similar approach, with additional information provided by confirmatory factor analysis. The translated ASI 32 was administered to seven different classes in social work and nursing programs over the academic years 2001–2003. The sample comprised 457 respondents (82% females) with a mean age of 25.7 years. The eigenvalue > 1 criterion and the scree test both indicated that eight factors should be extracted. Table 4 shows the pattern matrix obtained by principal component analysis and oblique rotation.
Table 4 ASI 32 Factor Loadings on Item Level
Main scales/Orientation Subscales Item no. 1 Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Meaning Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing Reproducing DA (1) DA (2) DA (3) DA (4) RI (1) RI (2) RI (3) RI (4) CL (1) CL (2) CL (3) CL (4) UE (1) UE (2) UE (3) UE (4) SA (1) SA (2) SA (3) SA (4) SA (5) SA (6) FF (1) FF (2) FF (3) IP (1) IP (2) IP (3) IP (4) SB (1) SB (2) SB (3) 11 5 2 18 1 14 24 29 3 10 16 22 20 17 28 31 8 21 15 23 9 19 6 13 27 25 7 32 30 4 12 26 2 3 Extracted factors 4 .24 .44 .51 .22 .33 .56 .21 .68 .54 .75 .73 .21 .30 .23 .36 .32 5 6 .56 .42 .29 .20 .32 7 8

.22 .56 .25 .31 .36 .67 .22

.21

.21

.31 .42

.50 .21 .56 .59 .21

.31

.23 .34

.40 .33 .84 .36 .53 .35 .28 .25 .34

.22 .59 .68 .62 .55 .55 .30 .30

.20 .41 .42 .69 .44 .55 .63 .71 .51

Notes. N = 457; 50.6% variance explained. DA = Deep Approach; RI = Relating Ideas; CL = Comprehension Learning; UE = Use of Evidence; SA = Surface Approach; FF = Fear of Failure; IP = Improvidence; SB = Syllabus-Boundness.

Loadings less than .20 were omitted, and loading values ≥ .40 are given in bold to indicate significance. The subscale numbering refers to the original ASI (Entwistle & Ramsden, 1983). The eight extracted factors exhibit significant loadings (≥ .40) for 28 out of the 32 items. However, the general picture also reveals several cross loaders in the range of .20–.35, of which several exhibit loadings larger than .35 on more than one factor. Thus, the pattern matrix does not support an overall clear and complete match between the extracted factors and the constituent subscale design. Only the Comprehension Learning subscale is clearly identified, given that Factor 3 displays significant loadings (> .50) for all relevant items. In addition, it might be argued that Factor 1 identifies Fear of Failure, as it shows significant loadings for each item, even though the factor includes an additional item from both the Surface Approach and Improvidence. Likewise, one might claim that Factor 2 to some extent matches Use of Evidence, although this factor also includes one item each from Deep Approach and Relating Ideas. Factor 4 and Factor 5, on the other hand, disclose relatively atypical and problematic cross-loading patterns. Both factors have significant loadings on items associated with both the Reproducing and the Meaning Orientation (i.e., items DA3 and SB2, and RI2 and SA3, respectively). Moreover, none of the remaining factors exhibit consistency with the intended subscale structure. In the original validation study, Richardson (1990) concluded that "broadly speaking, there is a clear match between the eight extracted factors and the eight subscales" (p. 163). The present study does not support his conclusion. The statement is considered, in general, to be based on insufficiently stringent psychometric criteria, given that only one or two relevant items defined seven of the eight subscales (significant loadings ≥ .40) in Richardson's original validation study. Then again, both studies pointed to Comprehension Learning and Fear of Failure as the most cohesive subscales. The internal consistency for both main scales is considered acceptable, with alpha values of .75 and subscale values ranging from .24 to .71 (Table 5). The exceptionally
Table 5 ASI 32/30 Factor Analysis on Subscale Level
32-item ASI Factors Scales Deep Approach Relating Ideas Comprehension Learning Using Evidence Meaning Orientation Surface Approach Fear of Failure Improvidence Syllabus-Boundness Reproducing Orientation 1 2 .79 .57 .56 .79 .47 .31 .71 .49 .75 .48 .51 .45 .24 .75 Alpha 1 30-item ASI Factors 2 .79 .77 .61 .77 .79 .75 .79 .67 .47 .47 .71 .49 .76 .54 .51 .45 .24 .77 Alpha

.36

.77 .74 .80 .65

Note. Factor loadings less than .20 are omitted. n = 457.

weak reliability of the Syllabus-Boundness subscale (α = .24) will be addressed below (cf. Study 3). Two of the more problematic items were omitted to strengthen the translated version psychometrically, one from each of the main orientations: items SA3 (no. 15) and RI3 (no. 24). The eigenvalue and the scree test specified two factors to be extracted for both versions when data on the subscale level were submitted to exploratory factor analyses (principal component analysis and oblique rotation). The variance accounted for was 53.6% (ASI 32) and 56.9% (30-item version), respectively, and Table 5 gives both pattern matrices. The cross loading from Factor 1 (Reproducing Orientation) to Relating Ideas impedes to some extent a full match between the extracted factors and the main dimensions of the ASI 32, whereas the modified 30-item version displays an appropriate match between extracted factors and major learning orientations. The 30-item version also shows slightly better internal reliability (α values) regarding both the main scales and the modified subscales, Relating Ideas and Surface Approach. The two main factors are slightly negatively correlated (r = −.11). Fit indices attained by confirmatory factor analysis support the 30-item ASI as a suitable instrument for both research and practical purposes in higher education (CFI = .989 and RMSEA = .054). The 32-item version did not reach index values indicative of an acceptable fit to the observed data (CFI = .881; RMSEA = .097).

Study 3: The ASSIST, Full and Abbreviated Versions

The ASSIST is a major revision of the 60-item RASI and contains 52 items constituting 13 subscales, each with four items, distributed over the deep, surface, and strategic scales (Tait et al., 1998; cf. Tables 1 and 2). The inventory has previously been validated in a number of study contexts (e.g. Entwistle et al., 2000; Kreber, 2003; Richardson, 2005; Richardson et al., 2007). A Norwegian version has been validated (Diseth, 2001) with a sample of 573 first-year university students enrolled in rather conventionally organized courses in psychology, sociology, theory of science, and history of philosophy. In a wider Scandinavian context, Richardson, Gamborg, and Hammerberg (2005) tested a Danish version of the ASSIST with 164 second-year students enrolled in university college programs in occupational therapy at six different institutions. In the present study, a sample of 328 first-year students responded to a Norwegian edition of the ASSIST, translated independently of Diseth's version, in the academic year 2003–2004. They were in their first semester of three-year bachelor programs in biomedical laboratory science, nursing, and social work. The mean age was 31.5 years (84% females). About one fourth of the sample (n = 80) completed the inventory a second time three weeks later, and the stability of the ASSIST in terms of test–retest reliability was assessed with reference to this subsample. The conceptual structure of the ASSIST was first assessed on the item level by exploratory factor analysis. Kaiser's criterion specified 14 factors to be extracted, but neither a 14-factor nor a 13-factor solution produced interpretable patterns in agreement with the essential subscale structure. In the 13-factor solution, 10 of the subscales were indicated by only one or two significant loadings (> .40) from relevant items, much in line with Kreber's (2003) conclusion after testing the ASSIST in a Canadian context.
The scree test indicated five factors to be appropriate, and a five-factor solution displayed the following pattern (the full pattern matrix is not given due to space considerations):


- Factor 1 identifies, broadly speaking, the Strategic Approach due to significant loadings on 10 of the 12 relevant items on the subscales Organised Studying, Time Management, and Achieving.
- Factor 2 identifies roughly the Surface Approach by significant loadings of 7 of the 12 items defining the subscales Unrelated Memorising, Fear of Failure, and Syllabus-Boundness.
- Factor 3 likewise reflects the Deep Approach due to significant loadings on 9 of the 16 relevant items from the subscales Seeking Meaning, Relating Ideas, Using Evidence, and Interest in Ideas.
- Factor 4 shows significant loadings for 3 of 4 items defining the subscale Alertness to Assessment Demands, plus one item from Monitoring Effectiveness.
- Factor 5 has significant loadings on all four items constituting the subscale Lack of Purpose, with an additional significant loading on one item from Unrelated Memorizing.

In sum, there is a partial match between the three main scales and Factors 1, 2, and 3, which comprise closely interconnected but not discernable subscales. Exploratory factor analysis on the item level identified only two of the subscales to some extent: Alertness to Assessment Demands (Factor 4) and Lack of Purpose (Factor 5). Subsequently, as recommended by the designers of the ASSIST (cf. Tait et al., 1998), data on the subscale level were subjected to factor analysis. The eigenvalue > 1 criterion indicated four factors to be extracted; the scree plot test recommended a three-factor solution as the proper choice. A three-factor solution was chosen, and Table 6 gives the resulting pattern matrix. The matrix shows a match between 10 of the 13 subscales and the constituent design of the ASSIST and confirms to that extent the conceptual structure of the ASSIST. The result is comparable with previous studies, with a few deviations (e.g. Diseth, 2001; Entwistle et al., 2001; Tait et al., 1998). Factor 1 holds significant loadings (≥ .40) on three of the five strategic subscales, mainly in agreement with the analysis on the item level. Alertness to Assessment Demands failed to reach a significant level on any factor, in line with Diseth's (2001) analysis. Monitoring Effectiveness, on the other hand, loads exclusively on Factor 2 (Deep Approach), contrary to previous studies in which the subscale seemed to fall within both the Deep and Strategic Approaches (cf. Diseth, 2001; Entwistle et al., 1998). Factor 2 identifies the Deep Approach, and Factor 3 should be classified as the Surface Approach; Lack of Purpose falls below the chosen cut-off level but reaches the minimum significance level (≥ .30) (Hair et al., 2006). One should notice that the Lack of Purpose subscale also represents the weaker subscale in other studies, with loadings ranging from .37 to .47 (e.g. Diseth, 2001; Tait et al., 1998). The loading pattern of Lack of Purpose may also reflect the result obtained on the item level in the present sample, where the subscale emerged as a separate factor in the five-factor solution. The original reason for including Lack of Purpose was to broaden the scope of the surface approach by putting an emphasis on "ineffective studying" (Entwistle et al., 2001, p. 110). However, Lack of Purpose overlaps conceptually with Negative Attitudes to Studying, incorporated in the Strategic Approach (Achievement Orientation) in the original ASI (cf. Table 2). This may also explain the ambiguous status of the subscale. The reliability coefficients (alpha values) given in Table 6 are considered satisfactory regarding both the main scales and



Table 6 The ASSIST Factor Loadings, Means and Standard Deviations, and Reliability Measures
Testretest Sub-scales/scales Organized Studying Time Management Achieving Alertness to Assessment Monitoring Effectiveness Strategic Approach Seeking Meaning Relating Ideas Use of Evidence Interest in Ideas Deep Approach Lack of Purpose Unrelated Memorizing Syllabus-Boundness Fear of Failure Surface Approach Correlations between factors Factor 1 (Strategic) Factor 2 (Deep) Factor 3 (Surface) M 13.06 14.56 15.37 13.59 16.59 73.17 15.95 14.51 14.98 16.16 61.60 7.68 9.26 13.43 13.29 47.31 SD 3.13 3.30 2.90 2.95 2.30 10.50 2.26 2.75 2.42 2.45 8.08 3.05 3.04 2.67 3.96 9.23 Alpha .62 .72 .72 .53 .49 .84 .49 .69 .43 .54 .82 .56 .54 .27 .72 .77 Reliability .75 .69 .79 .68 .62 .85 .67 .73 .65 .73 .81 .70 .63 .57 .73 .81 .22 1 .75 .96 .68 .21 .48 .71 .81 .74 .74 .30 .76 .56 .70 2 1.00 .15 3 Factors 2 3

.21

1 1.00 .53 .12

1.00

Notes. Factor loadings below .20 are omitted. Extraction: Maximum likelihood with oblique rotation. n = 328; 50.0% variance explained.

subscales, with Syllabus-Boundness as the clear exception once more (cf. Study 2). The test–retest measures are considered adequate. Confirmatory factor analysis of a three-factor model of the ASSIST containing all the subscales holding significant factor loadings (≥ .40) supports the result obtained by EFA, as should be expected. The model assigns Monitoring Effectiveness to the Deep Approach scale, whilst Alertness to Assessment Demands and Lack of Purpose are omitted. The fit indices attained are as follows: χ² = 91.97 (df = 41, p = .000), CFI = .964 and RMSEA = .062.

Abbreviated versions of the ASSIST

A short version of the ASSIST that may serve as a suitable alternative to the ASI 30/32 can be obtained simply by reducing the number of subscales further. The psychometric properties of one 36-item option based on the full ASSIST sample are given in Table 7, labeled Model A. The version contains three subscales for each learning dimension, i.e., the eleven-subscale version (above) was further reduced by omitting Alertness to Assessment

Table 7 ASSIST 36, Optional Versions Pattern Matrices, Alpha Values and Factor Correlations
Model A Factors Scales Time Management Organized Studying Achieving Strategic Approach Unrelated Memorizing Fear of Failure Syllabus-Boundness Lack of Purpose Surface Approach Relating Ideas Using Evidence Seeking Meaning Deep Approach n Variance explained (%) Correlations Factor 1 Factor 2 Factor 3 1.00 .29 .50 1.00 .23 1 .98 .77 .71 .73 .79 .53 n.a. .77 .74 .71 328 72 1.00 .54 .32 2 3 Model B Factors Model C Factors

.72 .62 .72 .87 .54 .72 .27 n.a. .76 .59 .43 .49 .77

1 .96 .84 .70

.80 .68 .75 .90

1 .95 .83 .68

.80 .67 .67 .88

.67 n.a. .45 .63 .77 .89 .72 411 72.9

.56 n.a. .33 .53 .68 .70 .57 .60 .84

.91 .62 n.a. .42 .79 .76 .74 624 59.2 1.00 .24 .49

.56 .80 n.a. .67 .80 .67 .62 .55 .82

1.00

1.00 .35 1.00

1.00 .26

1.00

Notes. Factor loadings less than .20 have been omitted from the Table. n.a. = not applicable.

Demands and Interest in Ideas. As expected, confirmatory factor analysis also supported this version as a reasonably good fit to the data (χ² = 97.96, df = 24, p = .003, CFI = .979, RMSEA = .055). Two additional versions of a 36-item ASSIST were tested with separate samples to explore the conceptual validity and internal reliability of the subscales Syllabus-Boundness and Lack of Purpose more extensively. The alternative compositions differ only with regard to the Surface Approach; both versions contain the subscales Lack of Purpose and Unrelated Memorizing. The version labeled Model B has Syllabus-Boundness as the third subscale, whereas Fear of Failure represents the third subscale in the version labeled Model C (cf. Table 7). The Model B sample (n = 411) comprised students (78% females) with a mean age of 29.4 years, equally distributed across three years of study in nursing and social work programs. The Model C sample (n = 624) followed similar educational programs, the majority in their third year of study; the mean age was 30.5 years. The features emerging from exploratory analysis on the item level for both samples basically supported the picture from the ASSIST (52) study (complete pattern



Table 8 The Alternative 18-Item ASI Items, Factor Loadings, Alpha Values, and Factor Correlations

Strategic approach items:
1. I manage to find conditions for studying which allow me to get on with my work easily (OS)
3. I put a lot of effort into studying because I'm determined to do well (ACH)
6. I think I'm quite systematic and organized when it comes to revising for exams (OS)
9. I work steadily through the term or semester, rather than leaving it to the last minute (TM)
12. I organize my study time carefully to make the best use of it (TM)
15. I don't find it at all difficult to motivate myself (ACH)

Surface approach items:
2. I find I have to concentrate on memorizing a good deal of what we have to learn (SA/UM)
5. The continual pressure of work (assignments and deadlines) often makes me tense and depressed (FF)
8. Although I generally remember facts and details, I find it difficult to fit them together into an overall picture (IP)
14. A poor first answer in an exam often makes me panic (FF)
16. Often I find I have to read things without having a chance to really understand them (SA)
18. I find it difficult to switch tracks when working with a problem: I prefer to follow each line of thought as far as it will go (IP)

Deep approach items:
4. I usually set out to understand the meaning of what I am asked to learn (DA/SM)
7. I like to play around with ideas of my own even if they don't get me very far (CL/RI)
10. When I am reading I stop from time to time to reflect on what I am trying to learn from it (SM)
11. Puzzles and problems fascinate me, particularly when you have to work through the material to reach a logical conclusion (UE)
13. Often when I'm reading books, the ideas produce associations and vivid images which sometimes take on a life of their own (CL/RI)
17. When I am reading an article or a book, I try to find out for myself exactly what the author means (SM)

Factor loadings (as extracted): .57 .69 .83 .83 .70 .48 / .57 .57 .67 .52 .70 .57 / .42 .60 .49 .63 .61 .51
Cross loadings (as extracted): .37 .32 .21
Variance explained for each of the factors (%): 28.1 (Factor 1), 10.8 (Factor 2), 5.4 (Factor 3)

Correlation matrix and Cronbach's alpha:
Factor 1 (Strategic approach, alpha = .86): 1.00
Factor 2 (Surface approach, alpha = .78): .09, 1.00
Factor 3 (Deep approach, alpha = .78): .41, .08, 1.00

Notes. n = 1,792; 44.2% variance explained. OS = Organised Studying; ACH = Achieving; TM = Time Management; SA = Surface Approach; UM = Unrelated Memorizing; FF = Fear of Failure; IP = Improvidence; DA = Deep Approach; SM = Seeking Meaning; CL = Comprehension Learning; RI = Relating Ideas; UE = Use of Evidence.

matrix omitted due to space considerations). Visual inspection of the scree plots suggested a four-factor solution for both versions, which produced pattern matrices that significantly identified the Strategic and Deep Approaches by 21 of the 24 items from the relevant subscales. The surface items constituted the two remaining factors in both models. As to Model B, all items from Unrelated Memorizing identified one factor, whilst three items from Lack of Purpose pigeonholed the other. None of the Syllabus-Boundness items reached the cut-off point of > .40 for significance. Likewise, for Model C, all items comprising Fear of Failure, together with one from Unrelated Memorizing, identified one factor related to the surface dimension, while the other displayed significant loadings on all of the Lack of Purpose items plus one from Unrelated Memorizing. Exploratory factor analysis on the subscale level (maximum likelihood and oblique rotation) generated the pattern matrices for Models B and C given in Table 7. The loading patterns for all three models comply with the constituent design of the actual short version of the ASSIST. Despite the relatively low internal reliability of the surface scale in Model B, which cannot be considered fully acceptable, the optional versions of the ASSIST 36 emerge as fairly robust inventories when assessed by exploratory factor analysis. However, Models B and C did not produce fit indices indicating an acceptable fit to the data when subjected to confirmatory factor analysis as straightforward three-factor solutions; the RMSEA index exceeded .080 for both versions. Then again, based on information from the EFA on the item level, Model C was specified as a four-factor model with Lack of Purpose as a separate latent factor, positively related (r = .35) to the modified Surface Approach (comprising Unrelated Memorizing and Fear of Failure) and negatively associated (r = −.22) with the Strategic Approach. With these modifications, Model C produced more appropriate indices, i.e. χ² = 174.66 (df = 48, p = .000), CFI = .957, RMSEA = .065.

Study 4: An Alternative Quick-and-Easy Version of the ASI

Taken together, Studies 1, 2, and 3 give information on how a range of ASI items contribute to the reliability and validity of the scales in different inventory contexts. This information was considered, and an alternative 18-item ASI was composed as follows: the strategic approach scale contains six items drawn from the ASSIST, and the surface approach scale includes six items derived from Richardson's ASI version, i.e., the original ASI. The deep approach scale has four items drawn from the original ASI combined with two items from
Table 9 Criterion Validity of the Alternative ASI (Pearson r)

                      Academic Achievement (AA)   Learning outcome (GSS)   Perceived teaching quality (CEQ)
Deep Approach         .24                         .37                      .27
Surface Approach      .30                         .14                      .38
Strategic Approach    .17                         .33                      .26

Note. Significance level: p ≤ .01.

the ASSIST. Table 8 gives the items verbatim; the abbreviations in parentheses indicate the subscales from which the items were drawn. One should notice that some items are assigned different subscale labels in the ASI and the ASSIST. A total of 330 university college students, one half in their second and the other half in their fifth semester, participated in a pilot study with the alternative 18-item version. Exploratory factor analysis based on their responses indicated a reasonably robust factor structure. The strategic and surface approach scales were clearly identified (cut-off level ≥ .40), whereas two of the deep approach items showed cross loadings that impeded the full integrity of the scale. The reliability coefficients (alpha) in the pilot study ranged from .67 to .78 (Pettersen, 2004). Over three successive years, 2004–2007, four additional samples completed the inventory, expanding the total sample to 1,792 students (79% females) with a mean age of 28.3 years. Table 8 displays the pattern matrix from exploratory factor analysis (maximum likelihood and oblique rotation) based on a three-factor solution, as indicated by both the eigenvalue > 1 and scree test criteria (loadings < .20 are omitted). Factors 1 and 2 identify the strategic and the surface approaches, respectively, in a reasonably straightforward manner. Two cross loaders (items no. 4 and no. 10) prevent an unambiguous identification of the deep approach scale (Factor 3), which to some extent may reflect the close relationship (r = .41) between Factor 1 (strategic approach) and Factor 3 (deep approach). However, the overlap and close interconnection between the two domains is conceptually comprehensible. The internal consistency for the three scales is considered highly satisfactory, with alpha values ranging from .86 (strategic) to .78 (deep and surface). Fit indices generated by confirmatory factor analysis (AMOS) support the alternative 18-item version as a suitable research and evaluation instrument. The model subjected to CFA allows items 4 and 10 to cross load on Factor 1 (Strategic Approach), in agreement with the result of the exploratory analysis, and gives these indices: χ² = 901.1 (df = 129; p = .000), CFI = .929 and RMSEA = .058, which indicate an acceptable level of model fit according to general guidelines (cf. Study 1).

A Note on Criterion-Related Validity

Criterion-related validity of the ASI is commonly assessed with reference to these measures: (1) perceived teaching quality, as measured with the CEQ (cf. Wilson et al., 1997); (2) assessment results to indicate academic achievement (AA); (3) the Generic Skills Scale (GSS) included in the CEQ as a measure of (perceived) learning outcome (cf. Lizzio et al., 2002). The use of these parameters is in accordance with the basic claims advocated by researchers within the SAL paradigm. The core idea is that students' perceptions of teaching quality will have a salient effect on how they orchestrate their learning approaches. How students combine the main approaches to learning and studying will to some extent predict academic achievement and learning outcomes. Several previous investigations with versions of the ASI/RASI/ASSIST support these premises (e.g. Backhaus & Liff, 2007; Diseth, 2007; Duff, 2003, 2004; Lawless & Richardson, 2002; Lizzio et al., 2002; Mattick et al., 2004; Newstead, 1992; Richardson, 2006; Wilson et al., 1997).
Research reports typically present significant correlations between ASI scores, perceived teaching quality (CEQ), learning outcomes (GSS), and academic achievement (AA) (Pearson r ranging from .15 to .45).
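As an illustration, criterion correlations of the kind reported in Table 9 could be obtained as follows (a Python sketch; the column names and simulated scores are hypothetical, and the reported analysis was carried out in SPSS).

```python
import numpy as np
import pandas as pd
from scipy import stats

def criterion_correlations(df: pd.DataFrame,
                           scales=("deep", "surface", "strategic"),
                           criteria=("aa", "gss", "ceq")) -> pd.DataFrame:
    """Pearson r (with p value) for each approach scale against each criterion."""
    rows = []
    for scale in scales:
        for criterion in criteria:
            r, p = stats.pearsonr(df[scale], df[criterion])
            rows.append({"scale": scale, "criterion": criterion,
                         "r": round(r, 2), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Simulated scores for 438 respondents (the size of the CEQ subsample).
rng = np.random.default_rng(2)
sim = pd.DataFrame({
    "deep": rng.normal(20, 4, 438),      # approach scale scores
    "surface": rng.normal(18, 4, 438),
    "strategic": rng.normal(22, 5, 438),
    "aa": rng.normal(4, 1, 438),         # academic achievement (1-6)
    "gss": rng.normal(3.5, 0.6, 438),    # Generic Skills Scale (1-5)
    "ceq": rng.normal(3.4, 0.5, 438),    # perceived teaching quality (1-5)
})
print(criterion_correlations(sim))
```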

The assessment of criterion-related validity used responses from one subsample (n = 438) who completed a six-scale version of the CEQ (Pettersen, 2007) together with the alternative ASI 18. The sample included students in social work, nursing and biomedical laboratory science programs who also provided the examination marks they had attained during the year. The majority reported marks from two different exams (n = 257), while the rest (n = 181) reported the single most recent mark obtained. This information was transferred to the Academic Achievement variable, a 6-point scale ranging from 6 (excellent) to 1 (fail). The learning outcome scale (Generic Skills Scale) and the teaching quality scale (CEQ) both ranged from 5 (high) to 1 (low), where the average CEQ score signifies perceived teaching quality (including the subscales Good Teaching, Appropriate Assessment, Appropriate Workload, Clear Goals and Standards, and Freedom in Learning). Table 9 shows the associations between ASI scores and the criteria. The predictive and convergent validity of the alternative ASI 18 comply by and large with results previously obtained with different ASI versions. However, the overall picture emerging from both the present study and previous research shows that no approach, as such, constitutes a strong predictor of academic achievement. It should also be noticed that several studies point to the strategic approach and/or the surface approach as the stronger predictor; in the present teaching and learning environment, the surface and deep approaches hold this position (cf. Diseth, 2007; Duff, 2003, 2004; Newstead, 1992; Reid, Duvall, & Evans, 2007).

The Four Studies Briefly Summarized

Study 1. Study 1 supports the critical evaluations of the 18-item ASI (Gibbs et al., 1988) presented by Richardson (1992, 2000) and Newstead (1992). The evidence from both exploratory and confirmatory factor analyses indicates that the Norwegian version of the inventory is psychometrically inadequate to be recommended as a research tool, and too crude even for use as a classroom evaluation tool.

Study 2. Analysis of the ASI 32 on the item level failed to identify its conceptual structure comprising eight subscales; only three subscales emerged in a reasonably clear manner. A 30-item version demonstrated satisfactory psychometric properties when assessed by exploratory factor analysis on the subscale level, and these were supported by the fit indices generated by confirmatory factor analysis. The main scales showed acceptable reliability (α values) comparable to previous results (Lawless & Richardson, 2002; Richardson, 1990). The 30-item version is considered suitable for researching student learning in further investigations.

Study 3. The analysis of the ASSIST revealed psychometric features generally comparable to those obtained in previous research in the UK (e.g. Entwistle et al., 2000) and Norwegian contexts (Diseth, 2001). However, the analyses on the item level drew attention to the low reliability of the Syllabus-Boundness subscale, in accordance with the long history of problematically low reliability of the surface dimension (Tait et al., 1998, p. 265). The subscale attained alpha values ranging from .24 to .33 in three different inventory contexts involving a total of 1,198 respondents. Furthermore, the

Lack of Purpose subscale demonstrated ambiguous loading patterns within three independent samples. The subscale showed a tendency to emerge as a distinct factor, positively connected to the surface dimension and negatively linked to the strategic dimension, in both the full and shortened ASSIST contexts. In sum, however, the ASSIST (11 subscales) and the shortened version, Model C, displayed satisfactory psychometric properties, but further validation studies are needed to assess the robustness of these instruments more fully.

Study 4. The ambition to compose a psychometrically more sophisticated version of the 18-item ASI was fairly successful. Although the three-factor design was not fully replicated, due to a minor overlap between the deep and strategic factors, the improved version is considered a suitable instrument for mapping and researching students' learning. However, measures should be taken to strengthen the Deep Approach scale,1 and further validation efforts in different and contrasting systems of higher education are warranted.

In sum, Studies 1 and 2 broadened previous assessments of the ASI 18 and ASI 32 with supplementary information provided by confirmatory factor analysis. Studies 2 and 3 expanded upon issues related to the empirical integrity of the ASI and ASSIST subscales with additional information from exploratory factor analysis on the item level, and, finally, test–retest measures were provided for the full ASSIST.

Discussion and Concluding Remarks

The ASI is a context-dependent and sensitive instrument, and the portability of different versions across contexts and cultures should not be taken for granted, as observed in several studies (see Andreou, Vlachos, & Andreou, 2006; Hayes, King, & Richardson, 1997; Richardson, 1994, 2000; Watkins & Regmi, 1996). The general premise is that versions of the inventory should be validated in each new context in which they are to be applied. Due to cross-cultural similarities between the Norwegian and UK contexts of higher education, one could assume that the psychometric properties of the ASI versions would comply, by and large, with results previously obtained in the UK. The present studies seem to confirm this assumption.

There are a number of interesting and closely interrelated aspects of learning related to how students approach their everyday study activities, and scale constructors may aspire to cover them all (Entwistle & McCune, 2004). The full versions of the ASI and ASSIST include a number of subscales (16 and 13, respectively) representing facets of learning, and attempting to reduce this complexity to three (ASSIST) or four (ASI) global and correlated dimensions is not an easy task. Now and again, it seems problematic to achieve a distinct and appropriate reproduction of the intended model by factor analysis, as has also been shown in previous studies of the ASI. Here, some combination of exploratory and confirmatory factor analysis has proven to be a promising research strategy.

Footnote 1. For instance, merely replacing the weakest item (item 4) with "I try to relate ideas I come across to other topics or practical situations whenever possible" (RI) might strengthen the conceptual integrity of the Deep Approach scale.

In Study 3, for instance, Monitoring Effectiveness was associated with the Deep Approach, a pattern incompatible with the theoretical model. Likewise, cross-loading items impeded a distinct reproduction of the Deep Approach in Study 4. Nonetheless, these examples are conceptually comprehensible and should not necessarily be regarded as a major weakness (cf. Entwistle & McCune, 2004). The empirical overlap and interrelatedness may reflect a close conceptual link between the strategic and the deep dimensions. The students in the actual educational context may have interpreted some of the items related to a deep approach as the motivational aspects of the competencies needed to realize the intention to understand, in particular the meta-cognitive and self-regulating skills required to reach a deep understanding. That is, the so-called deep, strategic approach may turn out to be a typical learning pattern (cf. Entwistle, 2000; Lonka et al., 2004). Hence, the appropriateness of the ASI instruments derived from the constituent SAL model (cf. Table 1) should not be assessed exclusively on statistical grounds. Assessment must be grounded in theoretical and practical deliberations as well (Byrne, 2001), and issues regarding ecological validity should also be considered (Coffield et al., 2004).

Efforts to shorten inventories impose an element of parsimony that will inevitably lead to limited coverage of the variations in student learning. Results obtained with the shorter versions of the ASI should therefore be regarded as indicators only, as a first layer of information that must be explored in more depth with elaborate quantitative as well as qualitative methods, including fine-grained analyses to advance the understanding of variations in everyday learning and studying. The alternative 18-item ASI seems well suited to being administered alongside the CEQ as part of quality assurance and enhancement programs (cf. Diseth, 2007; Pettersen, 2007; Richardson, 2006; Richardson et al., 2007). The ASI results, at both the scale and item levels, can be used to raise students' awareness of their learning approaches and as a precursor to reflection and dialogue between students and staff about study strategies, approaches to learning, and teaching quality. Thus, robust and psychometrically sound versions of the ASI can represent valuable tools not only in research on student learning, but also in enhancing learning and teaching in higher education.
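To make the exploratory step of the combined strategy discussed above concrete, the sketch below computes the eigenvalues of the inter-item correlation matrix and applies Kaiser's eigenvalue-greater-than-one criterion (Kaiser, 1960), which, alongside inspection of the scree plot (Cattell, 1966), is a common first check on how many factors a set of ASI items will support. The data are simulated, and the code is a minimal illustration under that assumption, not the procedure used in the studies reported here.

```python
import numpy as np

# Simulated responses: 300 hypothetical students, 18 items (the quick-and-easy ASI length),
# answered on a 1-5 Likert scale.
rng = np.random.default_rng(seed=2)
responses = rng.integers(1, 6, size=(300, 18)).astype(float)

# Inter-item correlation matrix and its eigenvalues, sorted from largest to smallest.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser's criterion: retain factors whose eigenvalues exceed 1.
n_factors = int(np.sum(eigenvalues > 1.0))
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Factors suggested by the Kaiser criterion:", n_factors)
```

With real ASI responses, the retention decision would rest on the scree pattern and on the interpretability of the rotated solution rather than on the eigenvalue count alone, and the retained model would then be tested against independent data by confirmatory factor analysis.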
References

Abrandt Dahlgren, A. (2000). Portraits of PBL: Course objectives and students' study strategies in computer engineering, psychology, and physiotherapy. Instructional Science, 28, 309–329.
Andreou, E., Vlachos, F., & Andreou, G. (2006). Approaches to studying among Greek university students: The impact of gender, age, academic discipline, and handedness. Educational Research, 48, 301–311.
Arbuckle, J.L. (1999). AMOS user's guide, version 4.0. Chicago: SPSS Corporation.
Backhaus, K., & Liff, J.P. (2007). Cognitive styles and approaches to studying in management education. Journal of Management Education, 31, 445–466.
Biggs, J. (1993). What do inventories of students' learning processes really measure? A theoretical review and clarification. British Journal of Educational Psychology, 63, 3–19.
Biggs, J. (2001). Enhancing learning: A matter of style or approach? In R.J. Sternberg & L.-F. Zhang (Eds.), Perspectives on thinking, learning, and cognitive styles. Mahwah, NJ: Lawrence Erlbaum.
Biggs, J., Kember, D., & Leung, D.Y.P. (2001). The Revised Two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133–149.
Blunch, N.J. (2008). Introduction to structural equation modelling using SPSS and AMOS. Los Angeles, CA: SAGE.



Boekaerts, M. (1999). Self-regulated learning: Where we are today. International Journal of Educational Research, 31, 445–457.
Byrne, B.M. (2001). Structural equation modelling with AMOS: Basic concepts, applications, and programming. London: Lawrence Erlbaum.
Cattell, R.B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245–276.
Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.
Costello, A.B., & Osborne, J.W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7). Retrieved September 24, 2008, from pareonline.net/pdf/v10n7.pdf
Diseth, Å. (2001). Validation of a Norwegian version of the Approaches and Study Skills Inventory for Students (ASSIST): Application of structural equation modeling. Scandinavian Journal of Educational Research, 45, 381–394.
Diseth, Å. (2007). Students' evaluation of teaching, approaches to learning, and academic achievement. Scandinavian Journal of Educational Research, 51, 185–204.
Diseth, Å., Pallesen, S., Hovland, A., & Larsen, S. (2006). Course experience, approaches to learning and academic achievement. Education & Training, 48, 156–169.
Duff, A. (1997). A note on the reliability and validity of a 30-item version of Entwistle and Tait's Revised Approaches to Studying Inventory. British Journal of Educational Psychology, 67, 529–539.
Duff, A. (2003). Quality of learning on an MBA programme: The impact of approaches to learning on academic performance. Educational Psychology, 23, 123–139.
Duff, A. (2004). The Revised Approaches to Studying Inventory (RASI) and its use in management education. Active Learning in Higher Education, 5, 56–72.
Entwistle, N.J. (1981). Styles of learning and teaching: An integrated outline of educational psychology for students, teachers, and lecturers. Chichester: John Wiley.
Entwistle, N.J. (2000). Approaches to studying and levels of understanding: The influences of teaching and assessment. In J.C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. XV). New York: Agathon Press.
Entwistle, N.J., & McCune, V. (2004). The conceptual base of study strategy inventories. Educational Psychology Review, 16, 325–345.
Entwistle, N.J., & Ramsden, P. (1983). Understanding student learning. London: Croom Helm.
Entwistle, N.J., McCune, V., & Walker, P. (2001). Conceptions, styles, and approaches within higher education: Abstractions and everyday experience. In R.J. Sternberg & L.-F. Zhang (Eds.), Perspectives on thinking, learning, and cognitive styles. Mahwah, NJ: Lawrence Erlbaum.
Entwistle, N.J., Tait, H., & McCune, V. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education, 15, 33–48.
Gibbs, G. (1992). Improving the quality of student learning. Bristol: Technical and Educational Services.
Gibbs, G., & Dunbar-Goddet, H. (2007). The effects of programme assessment environments on student learning. Oxford: Oxford Learning Institute.
Gibbs, G., Habeshaw, S., & Habeshaw, T. (1988). 53 interesting ways to appraise your teaching. Bristol: Technical and Educational Services.
Hair, J.F., Black, W.C., Babin, B.J., & Anderson, R.E. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Education.
Hayes, K., King, E., & Richardson, J.T.E. (1997). Mature students in higher education: III. Approaches to studying in Access students. Studies in Higher Education, 22, 19–31.
Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141–151.

Kember, D., & Leung, D.Y.P. (1998). The dimensionality of approaches to learning: An investigation with confirmatory factor analysis on the structure of the SPQ and LPQ. British Journal of Educational Psychology, 68, 395–407.
Kreber, C. (2003). The relationship between students' course perception and their approaches to studying in undergraduate science courses: A Canadian experience. Higher Education Research & Development, 22, 57–75.
Lawless, C.J., & Richardson, J.T.E. (2002). Approaches to studying and perceptions of academic quality in distance education. Higher Education, 44, 257–282.
Lizzio, A., Wilson, K., & Simons, R. (2002). University students' perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27, 27–52.
Lonka, K., Olkinuora, E., & Mäkinen, J. (2004). Aspects and prospects of measuring studying and learning in higher education. Educational Psychology Review, 16, 301–323.
Marton, F., & Säljö, R. (2005). Approaches to learning. In F. Marton, D. Hounsell, & N.J. Entwistle (Eds.), The experience of learning (3rd ed.). Edinburgh: Scottish Academic Press.
Mattick, K., Dennis, I., & Bligh, J. (2004). Approaches to learning and studying in medical students: Validation of a revised inventory and its relation to student characteristics and performance. Medical Education, 38, 535–543.
Meyer, J.H.F., & Parsons, P. (1989). Approaches to studying and course perceptions using the Lancaster Inventory: A comparative study. Studies in Higher Education, 14, 137–153.
Miller, C., & Parlett, M. (1974). Up to the mark: A study of the examination game. London: The Society for Research into Higher Education.
Newstead, S. (1992). A study of two quick-and-easy methods of assessing individual differences in student learning. British Journal of Educational Psychology, 62, 299–312.
Osborne, J.W., & Costello, A.B. (2004). Sample size and subject to item ratio in principal components analysis. Practical Assessment, Research & Evaluation, 9. Retrieved June 4, 2008, from http://PAREonline.net/getvn.asp?v=9&n=11
Pask, G. (1976). Styles and strategies of learning. British Journal of Educational Psychology, 46, 128–148.
Pettersen, R.C. (2004). Studenters lærings- og studiestrategier: Kvalitetsindikatorer i høgere utdanning? [Students' learning and study strategies: Performance indicators in higher education? In Norwegian]. Uniped, 27(2), 44–65.
Pettersen, R.C. (2007). Studenters opplevelse og vurdering av undervisning og læringsmiljø: Presentasjon av Course Experience Questionnaire og validering av tre norske versjoner, Erfaringer med studiet (EMS) [Students' experience and evaluation of teaching and learning environments: Presentation of the Course Experience Questionnaire and validation of three Norwegian versions (EMS); in Norwegian]. Halden, Norway: Høgskolen i Østfold.
Pintrich, P.R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385–407.
Reid, W.A., Duvall, E., & Evans, P. (2007). Relationship between assessment results and approaches to learning and studying in year-2 medical students. Medical Education, 41, 754–762.
Richardson, J.T.E. (1990). Reliability and replicability of the Approaches to Studying Questionnaire. Studies in Higher Education, 15, 155–168.
Richardson, J.T.E. (1992). A critical evaluation of a short form of the Approaches to Studying Inventory. Psychology Teaching Review, 1, 34–45.
Richardson, J.T.E. (1994). Cultural specificity of approaches to studying in higher education: A literature survey. Higher Education, 27, 449–468.
Richardson, J.T.E. (2000). Researching student learning. London: SRHE and Open University Press.
Richardson, J.T.E. (2004). Methodological issues in questionnaire-based research on student learning in higher education. Educational Psychology Review, 16, 347–358.



Richardson, J.T.E. (2005). Students' perceptions of academic quality and approaches to studying in distance education. British Educational Research Journal, 31, 7–27.
Richardson, J.T.E. (2006). Investigating the relationship between variations in students' perceptions of their academic environment and variations in study behavior in distance education. British Journal of Educational Psychology, 76, 867–893.
Richardson, J.T.E., Dawson, L., Sadlo, G., Jenkins, V., & McInnes, J. (2007). Perceived academic quality and approaches to studying in the health professions. Medical Teacher, 29 (web paper), e108–e118.
Richardson, J.T.E., Gamborg, G., & Hammerberg, G. (2005). Perceived academic quality and approaches to studying at Danish schools of occupational therapy. Scandinavian Journal of Occupational Therapy, 12, 110–117.
Schumacker, R.E., & Lomax, R.G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
SPSS 16.0 (2007). Statistical Package for the Social Sciences (SPSS), Version 16. Chicago: SPSS Inc.
Strømme, A. (1998). Undervisning som kan stimulere til konstruktivisme og dybdelæring blant elever og studenter [Teaching that can stimulate constructivism and deep learning among students; in Norwegian]. Uniped, 21(1), 55–68.
Tait, H., Entwistle, N.J., & McCune, V. (1998). ASSIST: A reconceptualisation of the Approaches to Studying Inventory. In C. Rust (Ed.), Improving student learning: Improving students as learners. Oxford: Oxford Brookes University, Oxford Centre for Staff and Learning Development.
Watkins, D., & Regmi, K. (1996). Towards the cross-cultural validation of a Western model of student approaches to learning. Journal of Cross-Cultural Psychology, 27, 547–560.
Weinstein, C.E., Husman, J., & Dierking, D.R. (2000). Self-regulation interventions with a focus on learning strategies. In M. Boekaerts, P.R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation. San Diego, CA: Academic Press.
Wilson, K., Lizzio, A., & Ramsden, P. (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, 22, 33–53.
