
Open Learning, Vol. 26, No. 1, February 2011, 67–78

Assessment, feedback and marking guides in distance education


Frances Chetwynd and Chris Dobbyn*
Faculty of Mathematics, Computing and Technology, The Open University, UK


In higher education (HE), effective feedback on student assessments plays a vital role in retention and in the development of self-regulating learners, particularly in the first year. In distance learning, where large population modules are common, assignment feedback is generally supported by standard marking guides, issued to the numerous tutors responsible for assessing student work. In this paper, we develop a taxonomy of feedback and report on the results of a survey of tutor attitudes to, and strategies for, providing feedback on a very large Level 1 Open University module. We analyse the extent to which the marking guides afford adequate support for truly effective feedback, and make a number of recommendations for reworking assessment regimes and marking guides.

Keywords: assessment; feedback; retention; marking guides

Introduction

In the literature relating to student learning in higher education (HE), there is general agreement on the crucial role of assessment, and of the feedback students are given on it. Moreover, it has been argued (Nicol, 2008; Race, 2009; Yorke & Longden, 2008) that it is at Level 1 (first year) at university that effective feedback is most necessary for motivation, empowerment, retention and the development of self-critical learning skills. Effective feedback on assessment is nowhere more important than in distance education courses, where comments on assignments may be the principal, or even the only, learning communication between tutor and student (Simpson, 2002).

The Open University (OU), with 40 years' experience and over two million graduates, is one of the world's largest providers of distance education. In the university's early years, students typically progressed in 'mix and match' style, assembling, sometimes at a leisurely pace, a patchwork of modules that interested them into a general degree. In such circumstances, where the focus was more on individual modules than the final qualification, retention was seldom considered a problem. However, the university now attracts many more young students seeking marketable degrees than in the past, and the university's provision has recently moved decisively towards named degrees with prescribed pathways. Along these pathways, knowledge and skills must be built incrementally, the benchmarks of professional bodies satisfied, and students' motivation sustained over many years of study. Retention has thus become a major issue for the university, and nowhere is the need to motivate and retain students greater than at Level 1. Currently, only four in 10 new OU undergraduates at Level 1 progress to take another module in their second year (Swann, 2009).
*Corresponding author. Email: c.h.dobbyn@open.ac.uk
ISSN 0268-0513 print/ISSN 1469-9958 online. © 2011 The Open University. DOI: 10.1080/02680513.2011.538565

This poor record of retention suggests that, among other factors, the feedback currently being offered to Level 1 students may be of doubtful consistency and efficacy. Feedback on OU students' assignment work is generally provided by personal tutors based in a student's own region. This paper presents and discusses the results of a survey of tutors working on a large Level 1 OU module: their conceptions of feedback, their methods of preparing it and their views on the value of the information and support they receive from the small team of centrally based academics responsible for this module.

There have been numerous surveys of the types of feedback provided to university students (Crisp, 2007; Mutch, 2003; Weaver, 2006), and of student attitudes to feedback (Jelfs, Richardson, & Price, 2009). There has been little work so far on the views of tutors (Hyland, 2001) on the problems of providing feedback in large distance learning courses. The survey results indicate, we believe, a number of problems with the feedback that is currently provided, and with the support made available to tutors on this particular module. These may be indicative of shortcomings across the university's Level 1 provision generally. We discuss these problems and make a series of recommendations that we hope will be of interest to distance educators beyond the OU, and to HE teachers generally.

Motivation and retention through feedback

Educationalists have long accepted that timely and effective feedback on assessment is essential for learning. Studies have indicated that students do value feedback (Weaver, 2006), but that many students fail to act upon it (Crisp, 2007; Weaver, 2006). Several possible reasons have been suggested for this: a failure to understand the discourse of the discipline, or academic language generally (Hounsell, 2007; Lea & Stierer, 2000; Lea & Street, 1998); insufficient experience in deconstructing feedback comments (Hyland, 2001); an inappropriate understanding of the nature of learning (Gibbs & Simpson, 2004); and, perhaps commonly, an inability to apply feedback on a current assignment successfully to future work. In summary, students often do not find tutor feedback usable (Walker, 2009).

Considerable amounts of work have gone into defining feedback (Sadler, 1989), developing possible feedback taxonomies (Brown & Glover, 2006), and characterising the features required of feedback for it to be effective (Nicol, 2008; Nicol & Macfarlane-Dick, 2004). Sadler sees altering the gap between a student's actual performance and a reference level as central to the concept. However, Nicol (2008) points out that this sparse conception of a reference level may lead to an 'ideal minus' model of feedback, in which comments simply serve to point out where a student has fallen short. A 'threshold plus' model, in which students are praised for the extent to which they have exceeded a basic standard, is likely to be more motivating. As Walker (2009) argues, feedback may either be targeted at the gap between the student's performance and the ideal in a particular assignment (retrospective feedback), or relate to more generic themes applicable to future work (future-altering feedback). In Brown and Glover's (2006) classification, comments may refer either to the content of the student's work or to more general skills.
This, taken together with Walker's (2009) distinction, suggests a four-category taxonomy of feedback: retrospective-on-content, future-altering-on-content, retrospective-on-skills and future-altering-on-skills, as illustrated in Figure 1.

Figure 1. A possible taxonomy of feedback.

The characteristics necessary for feedback to be effective have been extensively discussed, notably by Nicol and Macfarlane-Dick in their seven principles of good feedback practice (Nicol & Macfarlane-Dick, 2004, 2006). These principles, based on a cognitive model of student self-regulation, are to:

(1) help clarify what good performance is (goals, criteria, expected standards);
(2) facilitate the development of self-assessment (reflection) in learning;
(3) deliver high-quality information to students about their learning;
(4) encourage teacher and peer dialogue around learning;
(5) encourage positive motivational beliefs and self-esteem;
(6) provide opportunities to close the gap between current and desired performance; and
(7) provide information to teachers that can be used to help shape teaching.

Nicol (2007) has expanded this list of principles to take account of the obvious relationship between assessment design and feedback, a point we shall take up later. In Gibbs and Simpson's analysis of five groups of conditions under which assessment supports learning (Gibbs & Simpson, 2004), the final three (quantity and timing, quality, and student response) all relate to feedback. Making assessment feedback effective is a responsibility shared by the student, tutor and institution (Rae & Cochrane, 2008).

Effective feedback based on principles such as the above, and relating to all four of the feedback categories suggested earlier, should reinforce the formative aspects of assignments. Students who receive, absorb and use future-altering comments should develop the academic skills for self-regulated learning. Unfortunately, it is these future-altering comments that tutors often find hardest to write (Hounsell, 2007), that students find hardest to interpret (Boud, 2000) and that, as we shall argue, OU assessment design too often fails to address.

Feedback in distance learning and at the OU

Nowhere is a capacity for independent, self-critical, self-regulating learning more necessary than in distance education. Effective feedback is an essential part of the process of building such learners, and in distance education it is frequently more difficult to provide than in conventional face-to-face settings.

In distance education, most feedback may come in the form of written comments on assignment work. Assignment work plays a major part in students' learning experience at the OU: on any one module, students will typically submit three to six tutor-marked assignments (TMAs), depending on the module's size, set by a central course team. These are marked by local tutors, who supply feedback. To ensure consistency and fairness, especially on very large Level 1 modules, tutors' marking and feedback are based on marking guides (MGs) that are, along with the assignments, centrally produced. The MG, which has featured in OU assessment policy since the university's first courses emerged in 1971, is invariably a long document, outlining the expected content of answers along with very detailed advice on how marks should be awarded, often down to the granularity of a single mark. Tutors receive initial training in how to give feedback, and samples of their subsequent work are regularly monitored. The problem of very large Level 1 modules is, of course, found throughout HE worldwide, and many institutions have evolved methods similar to the OU's for coping with such large numbers.

The module on which this research is based, T175: Networked living: exploring information and communication technologies, is a Level 1 module acceptable (and in many cases compulsory) for a number of named degrees. Up to 2400 students take the module every year, supported by around 140 tutors. Students are required to submit four TMAs electronically, which are designed to be both summative (they are graded and the marks count towards the student's final result) and formative (they are intended to help students develop key knowledge and skills). Tutors are required to enter marks and comments on a standardised summary sheet, but usually also add comments on the student's script and may return additional feedback documents. Feedback must be returned to the student within two weeks.

From the point of view of delivering effective feedback, this arrangement can be problematic. Since the TMAs are relatively infrequent and count substantially towards a student's final result, they are classic examples of what in the literature is called 'high stakes' assessment (Harlen & Crick, 2003; Knight, 2002). Moreover, no explicit checks are carried out on the efficacy of feedback: beyond the evidence of a student's performance in subsequent assessments, there is no means of gauging whether a student has understood, interpreted, acted on or even read the feedback supplied. In theory, students can contact their tutors for clarification and discussion, but in practice they rarely do, for a number of pragmatic reasons. Probably foremost among these is the pressure to move on to the next stage of the module, and to study for the next assignment, a characteristic of high-stakes assessments (Price, Carroll, O'Donovan, & Rust, 2010). By that stage, retrospective feedback, as defined in our suggested taxonomy, may no longer be particularly helpful.

However, it can also be argued that Level 1 OU students are more likely to misunderstand the supportive relationship that should exist between HE students and their tutors than entrants to other UK universities. The OU has no formal entrance requirements, so many of its new students may well have had negative school experiences or failed already in traditional HE settings.
Many others will be new to HE, and the OU's particular style of tutor-supported distance education will be foreign to the majority. Since formative and summative assessment through TMAs is the major source of feedback to OU students, the MGs should be a crucial aid to tutors in providing it. Tutors use them as the principal source of support for the feedback they provide, and central staff use them as a key means of communicating their pedagogical intentions to the tutors.

Clearly, then, the ways in which tutors draw on the MGs, and their usefulness in stimulating effective feedback in all four sectors of our suggested taxonomy, are of critical importance. Our survey was set up to probe these issues.

The survey

The survey was set up on SurveyMonkey, with a link to it distributed via email. It was made available online for three weeks, with a reminder sent after 10 days, considered to be the optimum time span (Sue & Ritter, 2007). Survey questions were designed to elicit details of the methods employed by tutors in their marking (through multiple choice questions), along with other views (in free text answers). The wording of the questions and the answer choices offered were determined on the basis of:


- preliminary telephone interviews (with one new, and two more experienced, tutors);
- a thematic analysis of online tutor forums (over 10,000 messages posted since 2005); and
- feedback from test runs by two current tutors and one former tutor.

The 41 questions covered the following areas of interest:


- OU tutoring background and experience of the respondent;
- the relative importance to the respondent of Nicol and Macfarlane-Dick's seven principles;
- methods employed to present feedback; and
- views on current MGs and suggestions for the future.

The survey was distributed to the 140 current T175 tutors, of whom 70 responded.

Results

The survey revealed that tutors bring a wealth of educational experience to the module. Specifically:

- 91% had been an OU tutor for five years or more, 29% with more than 10 years' experience; and
- over 85% had worked on four or more presentations of T175.

MGs are made available to tutors a minimum of four weeks before the cut-off date for each assignment. We were interested to assess the use tutors made of the document as a source of formative tuition in the weeks running up to submission. We found that:

- 88% download the MG within the two weeks prior to the TMA submission date; and
- 64% read the document one week or less before they start marking.

Tutors reported that their feedback could be delivered using a number of methods:

- All bar one tutor reported that they provide retrospective-on-content comments, either on the script, or on both the script and the compulsory summary sheet;
- 54% reported that they supply comments on the script, the summary sheet, and an additional document;
- Of the tutors supplying an additional document, 60% use a standardised template that they fill in for each student;
- Others use a variety of methods, including drawing on a bank of comments from previous presentations and/or from a bank built up during earlier marking of each TMA.

When asked about the perceived efficacy of their own methods of providing feedback comments, tutors had certain confessions to make:

- 35% admitted to duplication, or even triplication, of the same feedback to the same student;
- 17% admitted that the marks information in their comments duplicated the information they are required to enter on the summary sheet.


And on the question of their methods of offering future-altering feedback:

- 71% of respondents believed that their on-script comments were the most effective future-altering aspect;
- 60% stated that the primary aim of their comments on the summary sheet was to improve the grade on the next TMA;
- Surprisingly, only 6% thought that their home-made additional document (often a template) was the feedback component most likely to help the student improve next time.

A number of questions aimed to investigate how tutors viewed their feedback in the light of Nicol and Macfarlane-Dick's seven principles of effective feedback. Tutors were presented with this list of principles and asked to rank them according to: (a) which most closely matched their own aspirations, and (b) which the MGs most helped them to fulfil. We found that:

- 31% chose the principle 'deliver high-quality information to students about their learning (e.g. strengths and weaknesses)' for (a);
- 63% responded with 'clarify to students what good performance is' for (b).

The full set of choices is presented in the two pie charts in Figure 2.

Finally, tutors were asked to choose from a list of possible additional features they would like to see in the MGs. Responses were:

- Over 57% requested model answers for calculations, with model answers for prose questions coming a close second;
- More than 30% of respondents requested 'Suggestions for comments to include on the summary sheet'.

Asked what the most pressing marking problems were that the MGs did not adequately address:

- Over 65% of respondents chose 'Helping students who lack written English skills';
- More than 45% chose both 'Helping students who lack numeracy skills' and 'Helping students whose first language is not English'.

We now discuss some of the implications of these results.


Figure 2. The purpose of feedback: tutor aspirations and the MGs.


Discussion

We would argue that the survey responses raised a number of issues.

Tutor experience

It is clear from the results quoted above that tutors bring a wealth of experience to T175: of technology teaching, of Level 1 students in particular, and of distance education in general. Tutor activity should provide for a rich exchange of knowledge and ideas within and between the academic and student communities associated with the module. This potential was evident in the analysis of the forum messages, which presented numerous lively discussions. Given the depth and diversity of tutors' experience and background, the module should be a repository of good practice.

However, as tutors are not co-located, formal mechanisms are essential for propagating ideas for good practice arising in the forum. No such mechanisms exist, and tutors are under no obligation to disseminate their ideas. Sadly, too, discussion too often tends to focus on such pragmatic issues as the precise awarding of marks. Only about half the tutors use the forum at all, and at the end of each presentation of the module it is archived and, the evidence suggests, forgotten. Thus the assessment feedback submitted to students must largely depend on styles derived from tutors' own backgrounds and from their initial training, as well as from the guidance presented in the MGs. Tutors' use of templates, for example, may be the result of many of them having tutored on precursor courses in which templates were required. Tutors are required to undergo staff development training, but this need not be related in any way to feedback and marking.

Legacy issues

The practice of issuing MGs dates from the OU's earliest years (the early 1970s). At that time, marking used to be what it says on the tin: marking. Indeed, the term 'marking guide' itself suggests these documents' intentions: the awarding of marks for specified content; that is to say, a style of feedback that is generally retrospective. However, theories of student learning have moved on, particularly in the last 10 years, and there is now a huge theoretical literature on feedback.


Naturally, Level 1 course teams now recognise the importance of future-altering feedback but, to some extent, earlier attitudes persist, and this recognition has generally not found its way into the MGs or the assessment regimes they support. Altogether, MG structure and content are plagued by legacy issues and enshrine an outdated conception of feedback.

To lend support to this claim, we conducted an analysis of the feedback advice offered in a single, randomly selected T175 MG, based on the four-quadrant taxonomy presented above. Classifying the feedback and marking advice contained in it into one or other of the four categories revealed the breakdown indicated in Figure 3. In general, the advice offered in each of the four categories could be summarised as follows:

- RC (Retrospective-on-content). This mainly comprised checklists of expected answers, principally for mark allocation, which may have excluded other possible or arguable answers. This approach strongly implies an 'ideal minus' marking strategy: tutors would be encouraged merely to point out items from the checklist missing from students' answers. It would probably be of little help in correcting misapprehensions, clarifying concepts, and so on.
- RS (Retrospective-on-skills). For questions that required students to apply certain general skills (arithmetical computation, for instance), generally all that was offered were model answers or a reference back to the course texts. Underlying concepts were only implied, and no specific help on corrections or typical errors was offered.
- FAC (Future-altering-on-content). These were notable by their absence. We felt that this was an opportunity missed, as many model answers could have been rewritten as future-altering, clarifying feedback.
- FAS (Future-altering-on-skills). Tutors were often required to award marks for generic skills, such as good grammar or a well-structured argument, but the judgement on the phrasing of advice on such issues was usually left up to them.

Figure 3. Breakdown of feedback advice in a single T175 marking guide.


Practical issues

The greatest pressure on tutors, course teams and students alike, throughout HE, is time. At the OU, course teams are preoccupied with preparing assignment material and implementing small changes to their modules on overlapping presentations, with little or no feedback from current presentations. Tutors experience high workloads and fixed time frames, and are only contracted to work for predetermined amounts of time. Understandably, students' main preoccupation is to progress through the course, and they will forever be looking towards the next assignment rather than reflecting on the last.

Naturally, under such pressures, all parties are tempted to take shortcuts. For tutors, the use of feedback templates may be just such a shortcut. However, the use of a standardised template may encourage tutors to provide standardised comments, and weaker students may view the template as simply a long litany of negative comments, as the tutor fills in each available space on a grid with a criticism. Indeed, some tutors did comment that their templates are frequently structured to remark on expected content and to justify the part mark achieved: an obvious case of 'ideal minus' feedback. Moreover, the inclusion of an additional document along with the summary sheet and on-script comments may simply confuse students as to where to find useful feedback.

Paradoxically, given the pressures of time, the survey responses reveal that tutors are offering each student a mass of advice, often the same advice in a number of places. Students are likely to find so much feedback, in too many places, confusing and demotivating, and may simply become bored, or may just assume that all on-script comments are repeated elsewhere, hence missing valuable advice. This plethora of tutor comments may arise from, or cause, a confusion of aims, and there was some recognition of this possibility in tutors' comments. However, our analysis of four typical T175 MGs revealed that they offered little or no help to tutors on how feedback might be personalised, or where, and in what form, it might be presented. The survey suggests that although tutors do see providing future-altering feedback as an important aim, they are not clear about where it is best placed. At the same time, and rather oddly, a large majority do not consider their home-made feedback templates a useful device with which to offer such feedback.

Tutor perceptions of marking guides

The survey revealed a manifest clash between tutors' aspirations for their feedback and the reality of the support MGs provide for them. A large majority (63%) saw the MGs as simply providing information that would enable them to comment on students' current performance, mainly retrospectively, on the content of the current assignment. Although opinions were more evenly divided as to tutors' own ideal goals for feedback, nearly half (49%) chose the more future-altering principles of delivering high-quality information to students about their learning, and developing their ability to reflect and assess. Tutors' own emphasis seemed to be firmly on future learning rather than current content. Furthermore, it is fairly clear that tutors do not view the MG as a source of information for future-altering feedback during the pre-assignment study phase, but as a short-term tool for retrospective assessment of the TMA in hand.
A substantial majority do not even read it until a week before assignments are finally due, and the concentration on marks and content alone, evidenced by the analysis above, surely encourages feedback that is almost exclusively retrospective-on-content in our suggested taxonomy.


Altogether, in the MGs the course team appear to be losing an opportunity to communicate more general, future-altering guidance on the formative aspects of the TMA, and of the whole module.

Model answers and feedback

Overall, then, there are obvious deficiencies in the MGs for T175, when considered as a tool for offering constructive, future-altering feedback, and thus, by extension, helping to improve motivation and ultimately retention. But tutors' own suggestions for improved support were problematic also. A majority (57%) asked for model answers. However, it is far from clear how useful model answers might be as a support for future-altering advice: without additional clarification they must surely represent purely retrospective feedback (Huxham, 2007). Many other tutors asked for help in improving basic skills in English language and numeracy. Certainly, deficient language skills appear to be a very particular problem in OU Level 1 modules: Swann (2009) confirms that one in 10 new Level 1 OU students do not have English as a first language. However, correcting grammar and language without extensive additional teaching and support may only amount to yet more retrospective feedback. Defective numeracy skills may perhaps be more easily remedied.

Recommendations

In the literature, Nicol's principles are sometimes treated as if they applied simply to feedback in isolation. But since feedback generally exists within the context of assessment, the seven principles we quoted above apply as much to assessments and assessment regimes themselves as they do to the activities of the tutors and the MGs that support them. The structure and content of assessments should make effective feedback possible: then the task of making MGs more useful foundations for feedback can follow. Learning outcomes, assessment design, MGs and tutor methods are interdependent. Assessment regimes are difficult to change, especially if they have deep historical roots and are bound by inflexible institutional rules. However, we would make the following recommendations for the short term:

(1) A new design for TMA mark allocation. The assessment guidelines for the majority of individual questions on current TMAs require tutors to comment on, and allocate a certain number of marks for, students' display of general skills, such as structure, grammar, numerical skills, etc. This should be replaced by an instruction to award an overall comment and mark for skills. We anticipate this would focus both tutors' and students' minds on generic skills, independently of the content of specific questions, potentially with future-altering consequences.

(2) TMAs should incorporate specific guidelines to both tutors and students as to the fact that skills are being assessed as well as content, along with clear indications as to which skills are to be covered in each question. This policy is followed in many OU modern languages courses: there, skills criteria are laid out in the assignments themselves on a grid, and feedback is presented on this grid (Chetwynd & Dobbyn, 2010). Again, the aim is to shift tutors' and students' focus from purely retrospective-on-content concerns.


(3) Overall, the assessment design for a module should embody clear lines of progression from assignment to assignment, a situation that is certainly not currently the case. Assignments and their MGs should explicitly point forward to future work, stating and justifying the skills that are to be developed across the whole module. Assignments and their MGs should also refer back to those skills developed and tested in earlier TMAs, thus encouraging students to (re)read, and perhaps even make use of, the feedback they had received earlier.

(4) MGs should be structured along the lines of the four-quadrant taxonomy suggested here, with a requirement for tutors to offer advice in all four areas wherever a question, or a student's response, makes this appropriate. This would encourage tutors to draw a firm distinction between retrospective and future-altering advice, both in the wording of such advice and in its placement. Retrospective feedback would be best placed on-script, and future-altering feedback on the compulsory summary sheet. Such a change would discourage the use of home-made templates, which few tutors appear to believe are particularly useful.

(5) MGs should be used to support tutors in providing future-altering feedback, especially future-altering-on-skills comments. A long-running course such as T175 is a storehouse of knowledge about typical skills deficiencies and the commonest errors. It would be perfectly feasible for MGs to include future-altering advice to tutors, in the form of indications of characteristic errors, common conceptual misunderstandings and the most frequently encountered skills deficits, with guidance on appropriate feedback for each. The task of tailoring this generic guidance to the individual needs of the student can, and should, be left to a tutor's professional judgement.

(6) Deficiencies in written English skills are a major concern, with which technology tutors in particular are not necessarily trained to deal. We propose that course teams should issue generic advice at the start of the course on how students may be helped with written English, and refer back to this advice in subsequent MGs. This would include information on specialist resources and teams available throughout the university.

In the longer term, the current high-stakes assessment regimes entrenched throughout the university will have to change radically. The imposition of some universal, ideal regime of assessment would certainly be misguided, but at Level 1 at least, we believe that assessments should generally be shorter, more frequent, more firmly linked to one another, lower-stakes and more evenly spaced across the duration of modules. The difficulty of bringing about such major institutional changes should not be underestimated, but they will be necessary in a future in which motivation and retention will become ever more vital.

References
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.
Brown, E., & Glover, C. (2006). Evaluating written feedback. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 81–91). Abingdon: Routledge.
Chetwynd, F.D., & Dobbyn, C.H. (2010, October). Marking guides and feedback: Written and oral. Paper presented at the Annual Conference of the International Society for the Scholarship of Teaching and Learning (ISSOTL 2010), Liverpool, UK.
Crisp, B. (2007). Is it worth the effort? How feedback influences students' subsequent submission of assessable work. Assessment and Evaluation in Higher Education, 32(5), 571–581.


Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3–31.
Harlen, W., & Crick, R.D. (2003). Testing and motivation for learning. Assessment in Education, 10(2), 169–207.
Hounsell, D. (2007). Towards more sustainable feedback. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education (pp. 101–113). Abingdon: Routledge.
Huxham, M. (2007). Fast and effective feedback: Are model answers the answer? Assessment and Evaluation in Higher Education, 32(6), 601–611.
Hyland, F. (2001). Providing effective support: Investigating feedback to distance language learners. Open Learning, 16(3), 233–247.
Jelfs, A., Richardson, J.T.E., & Price, L. (2009). Student and tutor perceptions of effective tutoring in distance education. Distance Education, 30(3), 419–441.
Knight, P. (2002). Summative assessment in higher education: Practices in disarray. Studies in Higher Education, 27(3), 275–286.
Lea, M.R., & Stierer, B.V. (Eds.) (2000). Student writing in higher education: New contexts. Maidenhead: Open University Press.
Lea, M.R., & Street, B.V. (1998). Student writing in higher education: An academic literacies approach. Studies in Higher Education, 23(2), 157–172.
Mutch, A. (2003). Exploring the practice of feedback to students. Active Learning in Higher Education, 4(1), 24–38.
Nicol, D. (2007). Principles of good assessment and feedback: Theory and practice. Retrieved October 31, 2009, from http://www.reap.ac.uk/resources.html
Nicol, D. (2008). Transforming assessment and feedback: Enhancing integration and empowerment in the first year. Quality Assurance Agency. Retrieved December 4, 2010, from http://www.enhancementthemes.ac.uk/documents/firstyear/First_Year_Transforming_Assess.pdf
Nicol, D.J., & Macfarlane-Dick, D. (2004). Rethinking formative assessment in HE: A theoretical model and seven principles of good feedback practice. Retrieved October 31, 2009, from http://www.heacademy.ac.uk/ourwork/learning/assessment/senlef/principles
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
Price, M., Carroll, J., O'Donovan, B., & Rust, C. (2010). If I was going there I wouldn't start from here: A critical commentary on current assessment practice. Assessment and Evaluation in Higher Education (iFirst).
Race, P. (2009). Designing assessment to improve physical sciences learning. Hull: Higher Education Academy.
Rae, A., & Cochrane, D. (2008). Listening to students: How to make written assessment feedback useful. Active Learning in Higher Education, 9(3), 217–230.
Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(1), 119–144.
Simpson, O. (2002). Supporting students in online, open and distance learning (2nd ed.). London: Kogan Page.
Sue, V., & Ritter, L. (2007). Conducting online surveys. London: Sage.
Swann, W. (2009). Personal email, August 25, 2009. Request to cite: email to f.d.chetwynd@open.ac.uk.
Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education, 34(1), 67–78.
Weaver, R. (2006). Do students value feedback? Student perceptions of tutors' written comments. Assessment and Evaluation in Higher Education, 31(3), 379–394.
Yorke, M., & Longden, B. (2008). The first year experience of higher education in the UK (Phase 2). Retrieved August 31, 2009, from http://www.heacademy.ac.uk/news/detail/fye_final_report
