
Three Types of Direct Writing in the SPM Examination: The Need for Balancing Classroom and Standardised Writing Tests

Normah binti Othman

Abstract: Three types of direct writing are tested in the English Language Papers for the SPM Examination in Malaysia. The three types of direct writing are guided writing, summary writing and continuous writing. Each of these three types of direct writing assesses different writing skills. Assessment of these writing skills is done by ESL teachers to gain some insight into their students' writing progress. It is important for ESL teachers to give specific feedback so students can improve their writing skills and prepare for the high-stakes SPM Examination that will determine their future academic career. Most important is using a balance between classroom and standardised assessment of students' writing.

Keywords: classroom assessment, direct writing, performance-based assessment, standardised testing, ESL teachers

English Language Journal

English Language is one of the compulsory subjects in the SPM examination. This subject is assessed through two written papers: English Language Paper 1 (1119/1) and English Language Paper 2 (1119/2). The oral skills in this subject are conducted as school-based assessment. Paper 1 (1119/1) consists of a subjective written test. There is one question on directed writing (directed writing is named guided writing in this paper to avoid confusion with the term "direct writing") given in Section A and five questions on continuous writing in Section B. The candidates are required to answer the compulsory question in Section A and to choose one question from Section B. Paper 2 (1119/2) consists of objective and subjective written tests. There are thirty-four (34) questions given in four sections in this paper. The candidates are required to answer all questions. Section A consists of fifteen (15) multiple-choice questions based on stimuli. Section B consists of ten (10) questions based on information transfer. There are five (5) questions on reading comprehension and one (1) question on summary writing in Section C. Section D in Paper 2 (1119/2) consists of questions based on a literary text. The candidates are required to answer three (3) questions, one each on a poem, a short story and a novel.

The public accepted the validity of direct writing assessment as compared to indirect writing assessment a long time ago. According to Hamp-Lyons (1990), in the 1970s or so, many people in North America, and to a lesser extent in Great Britain and Australia, believed that writing could be validly tested by an indirect test of writing. In the 1990s, the belief was not only defeated but chased from the battlefield because of social pressure from schools, colleges, and parents who argued that failure to learn and practise writing of reasonable lengths in school was leading to declining literacy levels and to a college-entry population that could not think critically about intellectual ideas and academic material.

The focus of this paper is the three types of direct writing given in Paper 1 (1119/1) and Paper 2 (1119/2). The three types of direct writing are guided writing in Section A and continuous writing in Section B, both in Paper 1 (1119/1), and summary writing in Section C of Paper 2 (1119/2). The three types of direct writing components in the SPM examination are very important because they determine students' language performance. As stated by Takala (1988), "written language has always played a dominant role in formal education. Typically the acquisition of literacy is considered to be one of the most important tasks of the school, not only as a vehicle of learning, but as a means of achieving other goals as well." Direct writing requires students to write in reasonable lengths so that teachers are able to assess their language performance. This assessment is different from indirect writing, like multiple-choice questions, in which students are not required to write as in direct writing. The three types of direct writing in the English Language Papers for the SPM examination require students to write differently. These three different types of writing test students' ability to understand and use correct grammar, to apply language skills for interpersonal purposes, to apply language skills for informational purposes, and to apply language skills for aesthetic purposes (Malaysian Board of Examination, 2004). Students need to do well in the three types of direct writing performance to be able to get good grades in the English Language papers in the SPM examination.

ESL teachers' assessment of students' writing can greatly influence students' attitudes for future learning, because students can be easily confused by unclear, vague or ambiguous teachers' feedback and can become frustrated with their writing progress and hopes for the future. Alternatively, students can be positively motivated if the assessment given to their written work reflects their actual performance in the national level examination. Unfortunately, there is no clear set of universal guidelines that will guarantee such a supportive and positive experience for all students. "In a given context for writing instruction, students will differ, and tasks, topics, and responses will differ" (Grabe and Kaplan, 1996: 377). Schoolteachers may be using different ways and methods to assess their students' writing tasks, depending on the instruction given to them by the school authorities. Students will not be able to predict their actual performance in the national examination if the performance assessment method adopted by their ESL teachers is not the same as the national raters' performance assessment method. Normally, secondary school ESL teachers in Malaysia invite the national raters, who are considered the expert raters, to come to their schools to conduct seminars and workshops for their students before they sit for the national examination. This practice is to ensure that their students get some exposure to the national raters' expectations when assessing their writing product in the national level examination.

A few selected ESL teachers assess the English Language papers in the SPM examination. This selected group of ESL teachers are the professional cadres who are normally given training to assess the English Language papers in the SPM examination yearly. The Malaysian Board of Examination trains these teachers to assess the papers. The function of the classroom-based assessment in Malaysian secondary schools is to give students some insight into the progress of their learning and achievement while still in school, whereas the national-based examinations are to give final grades that determine the students' future studies in higher institutions of learning. ESL teachers who conduct classroom-based assessments of writing provide immediate feedback for the students, and thus enable students to progress in their learning process and in their preparation for the national-based examination. The nation-based assessment of writing provides grades that determine the students' future undertakings in their further studies. Expert raters conduct the national-based assessment, and the Malaysian Examination Board trains the expert raters to assess the national examination. It is important then for ESL teachers to balance their classroom-based assessments with the national-based assessment of writing for the students' learning progress and school service improvement. In other words, there should be a balance between formative and summative assessments. This balance is needed because differences in ESL teachers' interpretation of students' writing can cause problems for learners in the process of teaching and learning. As such, there should be some ways to overcome this problem so that there will be a balance between teachers' assessment in schools and national raters' assessments.

The need to provide students with fair and supportable assessment approaches is very important because many decisions rest on writing assessment. It is imperative that decision makers, national raters and schoolteachers who provide performance reports based on assessment of students' writing give a fair report that really depicts the students' actual performance; the reason is that the report given determines the students' future undertakings and even future careers. So it is necessary to study validated classroom-based assessment methods that ESL teachers can make use of to balance classroom-based assessments with the nation-based assessment.

Apart from giving grades to students' writing, ESL teachers need to give feedback about their students' performance in order to let them improve on their learning. The Director-General of Education, Tan Sri Datuk Abdul Rafie Mahat, when announcing the decision to move towards school-based oral assessment, stressed that teachers will have to give more specific feedback regarding each student's ability. This way students will know how to improve. The ministry felt that this approach would improve teaching and learning tremendously (Gomez, 2003, May 9). Since it is essential for classroom-based assessments to develop diagnostic information to be adjusted to students' specific needs, ESL teachers should use specific scoring methods to assess their students' writing for the classroom-based assessment. There are many types of scoring methods available for teachers to refer to when they assess their students' writing tasks. Each scoring method is different from the others in the sense that each has different criteria for assessing students' writing products. For example, one scoring method looks at a student's writing product generally and does not go into detail when analysing the student's performance (the holistic scoring method), whereas another scoring method goes into detail about the writing performance (the analytic scoring method). Hence ESL teachers can make use of the two scoring methods for classroom-based assessment so that they can look into their students' writing performances from different perspectives.



Direct writing assessment has been accepted as performance-based assessment in educational assessment since the early 1990s, but problems still exist due to differences in methods depending on the objectives of the assessment (Hamp-Lyons, 1990). Since it is universally accepted that writers and teachers differ in their interpretations, there should be a balance between classroom assessment and standardised assessment. There have been complaints about the education system in Malaysia being too exam-oriented. "The education system is so exam-oriented that it has forced many students into rote-learning and memorising just to score. This is said to have greatly reduced creativity and our ability to understand and analyse things" (Darshan and Ong, 2003). Because of this problem the Ministry of Education in Malaysia started to introduce school-based oral assessment for all levels of secondary and primary schools. This was announced by the then Director-General of Education after the closing ceremony of the National Assessment Seminar, organised by the Malaysian Examinations Syndicate, held in May 2003. The Director-General said that the decision was part of the Education Ministry's initiative to make the education system less exam-oriented by focusing more on school-based assessments (Gomez, 2003). Nonetheless the ministry has not suggested such a move in the assessment of writing. This paper proposes the same move towards school-based writing assessment in the Malaysian secondary schools, specifically in the ESL direct writing performance.

The issue of balancing classroom-based assessment with the nation-based assessment does not only apply to Malaysian schools. In America, for instance, the issue of balancing state and local assessment was raised by school administrators. State assessment in America refers to their government-level assessment, and local assessment is their school-based assessment. Rabinowitz (2001), in his review about balancing state and local assessments in American schools, found that local assessment programs are still relevant because effective local assessment is essential to improve student learning, and that locally developed and administered assessment programs have a unique capacity to provide diagnostic information that, when understood and used effectively, has immediate impacts on classroom practice.

Another researcher who shares the same view with Rabinowitz (2001) about balancing state and local assessment in America is Stiggins (2002), who believes that the current assessment systems in American education are harming huge numbers of students and that the harm comes directly from the failure to balance the use of standardised tests and classroom assessments in the service of school improvement. He also believes that student achievement suffers because the once-a-year tests are incapable of providing teachers with the moment-to-moment and day-to-day information about student achievement that they need to make crucial instructional decisions. Stiggins (2002) said that teachers rely on classroom assessment to make crucial instructional decisions. However, the problem is that teachers are unable to gather or effectively use dependable information on student achievement each day because of the drain of resources for excessive standardised testing. There are no resources left to train teachers to create and conduct appropriate classroom assessments. For the same reason, Stiggins (2002) stated that district and building administrators have not been trained to build assessment systems that balance standardised tests and classroom assessments. As a result of these chronic and long-standing problems, the classroom, building, district, state, and national assessment systems remain in constant crisis, and students suffer the consequences.

Rabinowitz (2001) and Stiggins (2002) reviewed the local assessment and state assessment procedures in the American schools. The local assessment involved teacher assessment at the school level, and the state assessment was the standardised testing at the national level. Rabinowitz proposed that local and state assessment in America should not try to replicate each other but should build systems that complement each other. He believed that there should be an ideal relationship between local and state assessment and that both should fully appreciate the pressures and responsibilities the other faced. Rabinowitz found that state assessment in America could not yield the detailed information necessary to target instruction for individual students. It was supposed to be the local assessment that developed diagnostic information about students' progress so that teachers could address students' specific needs. Stiggins stated that the current assessment systems in America were not maximizing students' achievement due to the failure to balance the use of standardised tests with classroom assessments in the service of school improvement. Stiggins believed that both types of assessment are important because the standardised test is assessment of learning but the classroom assessment is for student learning. The crucial distinction between the two is that one is to determine the status of learning and the other is to promote greater learning.

Stobart (2001) reviewed the validity of National Curriculum Assessment of England as it operated in the year 2000. The National Curriculum Assessment includes teacher assessment in schools and standardised assessment in England. Stobart strongly recommended that teacher assessment at the school level should be balanced with standardised assessment at the national level. Stobart's review was based on the eight stages of threats to the validity model developed by Crooks, Kane and Cohen (1996). The main finding of the review is that the validity of National Curriculum Assessment hinges on the balance between teacher assessment and standardised testing at the national level. The eight-stage model does not permit a simple judgment, but involves two complementary forms of assessment to provide a balance. Stobart said that the complete coverage of the programme of study provided by teacher assessment offsets the restricted sampling of the standardised tests, and the standardised tests provide a level of standardisation which would be difficult to achieve in teacher assessment. Stobart concluded his review by saying that the national curriculum assessment would prevent threats to validity only if teacher assessment and standardised tests are kept in balance.

This paper has taken into consideration the validity model of the National Curriculum Assessment suggested by Stobart (2001). The validity model includes teacher assessment at the classroom level and standardised assessment at the national level. This balance is supposed to improve teaching and learning in the classroom, as suggested by Rabinowitz (2001) and Stiggins (2002). Rabinowitz and Stiggins stated that the nation-based assessment, which is in the form of standardised tests, is already in place. The problem is with the classroom assessment, which needs to be looked into for improvement. Both scholars listed several criteria for the improvement of classroom assessment. Among the criteria given by Rabinowitz and Stiggins to improve classroom assessment so that it is balanced with the nation-based assessment, four criteria are relevant for direct writing assessment in the Malaysian school curriculum (see Figure 1).

Figure 1: Improvements for Classroom Assessment

Stiggins's criteria:
1. Understand that teaching the achievement targets that students are to hit is necessary.
2. Inform students about their own learning goals needed to perform well.
3. Become assessment literate in order to transform expectations into assessment exercises and scoring procedures that accurately reflect students' achievement.
4. Translate classroom assessment results into frequent descriptive feedback for students.

Rabinowitz's criteria:
1. Link class-based assessment to the nation-based assessment standards so that class-based assessment can predict performance in nation-based assessment.
2. Provide information about more detailed assessments to all.
3. Provide alternative assessment tools to support teaching and learning.
4. Devise class-based assessment regardless of the instruments.



The National Assessment of Educational Progress (NAEP, 2001) developed six goals of writing assessment that provided a framework for the basis of the 1998 Writing Assessment. The goals are:

1. Writing for a variety of purposes
2. Writing a variety of tasks
3. Writing from a variety of stimulus materials within various time constraints
4. Using the writing process in developing a product
5. Displaying effective choices in the organization of writing
6. Valuing writing as communication

The goals of writing assessment given by NAEP (2001) cover a few types of writing that students normally do in schools, including the three types of direct writing performance in the English Language Papers for the SPM examination.

Miller and Linn (2000) found that there has been an increase in the use of performance-based assessment in educational settings at the state level in America and that there are many reasons for the use of performance-based assessment as compared to the traditional multiple-choice assessment. One of the reasons is the concern about the possible unintended negative effects of multiple-choice assessments, such as the belief that they lead to a narrower curriculum and teaching to the test. Apart from that, performance-based assessments are believed to be more consistent than multiple-choice examinations with a re-conceptualisation of teaching and learning as a richer, context-bound experience that does not rely solely on rote skills.

Test creators have to agree about what characterises good writing to create valid assessments of writing. However, theories about what constitutes good writing are drawn from many disciplines and discussions, which has caused the theories to be complex and multidimensional. "These disciplinary perspectives reflect diverse and sometimes competing positions" (Ketter and Pool, 2001: 345). Despite these concerns, Weigle (2002: 46) sees writing assessment "as a means to test language performance". She describes performance assessment as any assessment procedure that involves either the observation of behaviour in the real world or a simulation of a real-world activity. She insists that any writing test that involves actual writing, as opposed to completing multiple-choice items, for example, can be considered a performance test, since the written product represents a performance of writing. Furthermore, Weigle (2002: 14-22) sees "writing as a productive skill that has high social and cultural values as compared to the other productive skill of speaking". She finds that one of the most important distinctions between writing and speaking is that writing is highly valued in educational settings, and the standardisation of writing means that "accuracy in writing is frequently more important than accuracy in speaking" (Weigle, 2002: 17).

To ensure that a writing test can be considered a performance test, the tasks given in the language tests should be able to predict individuals' performance in language use. In other words, there should be some characteristics that determine the collaboration between language use and language test. Bachman and Palmer (1996: 10-11) specify "two sets of characteristics that affect both language use and language test performance, which are of central interest, and which determine the correspondence between language test performance and language use". First are the characteristics of individuals who are the language users, and second are the characteristics of tasks. The two sets of characteristics that affect both language use and test performance, as brought up by Bachman and Palmer, show that the tasks given in language testing should be applicable to language use in real-life situations. Thus individuals involved in taking language tests should be able to achieve a certain goal and objective in real life. So classroom teachers should take these two characteristics into consideration when they are designing the tasks used to test language users.

To ensure that there is a collaboration of language users and language tasks as suggested by Bachman and Palmer, there should be a set of qualities to measure the effectiveness of writing. This set is for teachers to be able to test the language users' performance. Ketter and Pool (2001: 345-346) believe that "most theorists of the composing process can be said to accept four basic tenets about the nature of writing and the qualities of effective writing: 1. Writing is an act of interpretation, 2. Writing is historically determined and situationally constrained, 3. Writing necessarily involves the making and remaking of selves, and 4. Effective writing is meaning making that involves both the writer and the reader in a discovery process". So teachers who want to make inferences about their students' language ability and to make decisions based on these inferences should take into consideration these four tenets about the nature of writing and the qualities of effective writing.

Making inferences about students' language ability and then making decisions based on the inferences are crucial for students' future undertakings. Weigle (2002: 40) has quoted Bachman and Palmer (1996) as saying that "the primary purpose of language testing is to make inferences about language ability and the secondary purpose is to make decisions based on those inferences". Since the primary purpose of language testing is to make inferences about language ability, Weigle (2002: 41) refers to the "ability tested as a construct", in which the factors involved are real-world language use and the essentials of measurement to be made about the language ability. As stated by McNamara (2000: 4), "language tests play a powerful role in many people's lives, acting as gateways at important transitional moments in education, in employment, and in moving from one country to another." So it is expected that the test results are able to help people achieve their ambitions in real-life situations.

To ensure that students achieve ambitions in real-life situations through language tests, teachers must make sure that the tasks given in the tests enable the students to achieve the objectives of taking the tests. To achieve a particular goal or objective in a particular situation, Bachman and Palmer (1996: 44) believe that "it is possible to identify certain distinguishing characteristics of language use tasks and to use these characteristics to describe a target language use domain." This is because in language testing the primary purpose is to make inferences about test takers' language ability, not to generalise to all language use domains. Therefore, they define the target language use domain as a set of specific language use tasks that the test taker is likely to encounter outside of the test itself and tasks that inferences about language ability can be generalised to. There are two general types of target language use domains that are of interest to the development of language tests. The first type is the real-life domain, in which language is used essentially for purposes of communication. The second type is the language instruction domain, which consists of situations in which language is used for the purpose of teaching and learning of language. The two general types of target language use domains as stated by Bachman and Palmer are important in language testing in order to test language ability. Weigle (2002: 46) sees these domains included in the "construct of writing tests". Thus she has introduced the notion of performance assessment as one way to bring Bachman and Palmer's conceptualisation of language use and language ability into clearer focus for writing assessment.

Students' performance in direct writing is very important to assess their writing skills. The students' writing performance in the SPM Examination is assessed by the expert raters who are trained by the Malaysian Board of Examination. ESL teachers who teach the students are responsible for the classroom assessment of the students' writing. As stated by Stobart (2001), the validity model of the National Curriculum Assessment should include teacher assessment at the classroom level and standardised assessment at the national level. Thus there should be a balance between classroom assessment and standardised testing of students' direct writing. This balance is supposed to improve teaching and learning in the classroom, as suggested by Rabinowitz (2001) and Stiggins (2002).

References

Airasian, P.W. (2001). Classroom Assessment: Concepts and Applications. 4th Ed. Boston: McGraw-Hill Higher Education.

Bachman, L.F. and Palmer, A.S. (1996). Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford: Oxford University Press.

Brown, H.D. (2001). Teaching by Principles: An Interactive Approach to Language Pedagogy. 2nd edition. New York: Addison Wesley Longman, Inc.

Cohen, A.D. (1994). Assessing Language Ability in the Classroom. 2nd Edition. Wadsworth: Heinle and Heinle Publishers.

Cooper, C.R. and Odell, L. (1977). Evaluating Writing: Describing, Measuring, Judging. Buffalo: National Council of Teachers of English.

Crehan, K.D. and Hudson, R. (2001). A comparison of two scoring strategies for performance assessments in Educational Research Quarterly; Dec. 2001.

Darshan, S. and Ong, M.L. (2003, May 9). Mixed reaction to new exam move. New Straits Times, p. 3.

Gomez, G. (2003, May). Less exam-centred system. The Star, p. 3.

Gomez, G. (2003, May 9). School-based assessment for all levels. The Star, p. 12.

Gorman, T.P., Purves, A.C. and Degenhart, R.E. (1988). The IEA Study of Written Composition I: The International Writing Tasks and Scoring Scales. Oxford: Pergamon Press.

Hamp-Lyons, L. (1990). Second language writing: assessment issues in Kroll, B. (Ed.), Second Language Writing: Research Insights for the Classroom. Cambridge: Cambridge University Press.

Hayes, Hatch and Silk (2000). Does holistic assessment predict writing performance? Estimating the consistency of student performance on holistically scored writing assignments in Written Communication; Beverly Hills; Jan 2000; Vol. 17, Issue 1.

Heck, R.H. and Crislip, M. (2001). Direct and indirect writing assessments: Examining issues of equity and utility in Educational Evaluation and Policy Analysis; Fall 2001. Washington: American Educational Research Association.

Hedge, T. (2000). Teaching and Learning in the Language Classroom. Oxford: Oxford University Press.

Henning, G. (1987). A Guide to Language Testing: Development, Evaluation, Research. Boston: Heinle & Heinle Publishers.

Henning, T.B. (2002). Beyond standardized testing: A case study in assessment's transformative power in English Leadership Quarterly; Feb 2002; Vol. 24. Urbana.

Hughes, A. (2003). Testing for Language Teachers. 2nd Ed. Cambridge: Cambridge University Press.

Johnson, Penny and Gordon (2001). Score resolution and the interrater reliability of holistic scores in rating essays in Written Communication; Beverly Hills; April 2001; Vol. 18, Iss. 2.

Ketter, J. and Pool, J. (2001). Exploring the impact of a high-stakes direct writing assessment in two high school classrooms in Research in the Teaching of English; Feb 2001; ProQuest Education Journals.

Lucisano, P. and Kadar-Fulop, J. (1988). The summary tasks in Gorman, T.P., Purves, A.C. and Degenhart, R.E. (1988). The IEA Study of Written Composition I: The International Writing Tasks and Scoring Scales. Oxford: Pergamon Press.

Malaysian Board of Examination (2004). Format Pentaksiran Bahasa Inggeris SPM 2004. Kuala Lumpur: Kementerian Pendidikan Malaysia.

McNamara, T. (2000). Language Testing. Oxford: Oxford University Press.

Miller, M.D. and Linn, R.L. (2000). Validation of performance-based assessments in Applied Psychological Measurement; Dec. 2000. Thousand Oaks: Sage Publications.

National Assessment of Educational Progress (2001). The NAEP narrative-writing scoring guides in Gifted Child Today; Summer 2001. Waco: Prufrock Press.

Oller, J.W. Jr. and Perkins, K. (1980). Research in Language Testing. Massachusetts: Newbury House Publishers, Inc.

Rabinowitz, S. (2001). Balancing state and local assessments in School Administrator; Dec. 2001; Vol. 58, Issue 11. Arlington: American Association of School Administrators.

Serafini, F. (2001). Three paradigms of assessment: Measurement, procedure, and inquiry in The Reading Teacher; Dec 2000/Jan 2001. Newark: International Reading Association.

Stiggins, R.J. (2002). Assessment crisis: The absence of assessment for learning in Phi Delta Kappan; Bloomington; June 2002; Vol. 83, Issue 10.

Stobart, G. (2001). The validity of National Curriculum assessment in British Journal of Educational Studies, ISSN 0007-1005, Vol. 49, No. 1, March 2001, pp. 26-39.

Swartz, C.W., Hooper, S.R., Montgomery, J.W., Wakely, M.B., et al. (1999). Using generalizability theory to estimate the reliability of writing scores derived from holistic and analytical scoring methods in Educational and Psychological Measurement; Durham; Jun 1999; Vol. 59, Iss. 3; pg. 492.

Sweet, S.A. and Martin, K.G. (2003). Data Analysis with SPSS: A First Course in Applied Statistics. 2nd Ed. Boston: Pearson Education, Inc.

Takala, S. (1988). Origins of the International Study of Writing in Gorman, T.P., Purves, A.C. and Degenhart, R.E. (1988). The IEA Study of Written Composition I: The International Writing Tasks and Scoring Scales. Oxford: Pergamon Press.

Vähäpassi, A. (1988). The domain of school writing and development of the writing tasks in Gorman, T.P., Purves, A.C. and Degenhart, R.E. (1988). The IEA Study of Written Composition I: The International Writing Tasks and Scoring Scales. Oxford: Pergamon Press.

Ward, A.W. and Murray, M. (1999). Assessment in the Classroom. Belmont: Wadsworth Publishing Company.

Weigle, S.C. (2002). Assessing Writing. (Series Editors: Alderson, J.C. and Bachman, L.F.) Cambridge: Cambridge University Press.