
Running head: ASSESSING THE WRITING ABILITY OF ENTERING LEARNERS

Assessing the Writing Ability of Entering Learners
Unit 4 Individual Project
Kelvin Bell
American InterContinental University
EDU602-1203A-01
October 13, 2012
Dr. Perry Spann

Abstract

Even in this age of high-speed, high-tech communications, one cannot overstate the importance of writing skills, even at the community college level. Unfortunately, too many arriving students woefully lack the traditional writing skills needed away from their high-tech devices. By some estimates, approximately 50% of all freshmen start at community colleges, and more than 50% of these students should be in remedial English classes due to an overall decline in student preparation and a lack of basic writing skills (Bers & Smith, 1990). This report provides information on how traditional and non-traditional methods can be used to reasonably assess these individuals. The assessments specified can distinguish individuals whose capabilities are sufficient to enter a regular writing course from those who need to be directed toward a developmental writing course in an effort to improve their writing skills and help them succeed at the community college level and beyond. Among the recommendations are structures that are applicable both to students whose native language is English and to those for whom it is not.

Assessing the Writing Ability of Entering Learners

Even in this age of high-speed, high-tech communications, one cannot overstate the importance of writing skills, even at the community college level. Unfortunately, too many arriving students woefully lack the traditional writing skills needed away from their high-tech devices. By some estimates, approximately 50% of all freshmen start at community colleges, and more than 50% of these students should be in remedial English classes due to an overall decline in student preparation and a lack of basic writing skills (Bers & Smith, 1990). This report examines how traditional and non-traditional methods can be used to reasonably assess these individuals and to distinguish those whose capabilities are sufficient for a regular writing course from those who need to be directed toward a developmental writing course, in an effort to improve their writing skills and help them succeed at the community college level and beyond.

Purpose and Functionality

While there are those who might say that the need (and the time) for correct and effective writing skills has passed, a significant number of individuals, managers, and workforce evaluators still place a high value on writing skills. These include the baby boomers who still occupy the higher levels of administration at most major companies and who, in growing numbers, head their own smaller start-up firms, making employment opportunities available to college and community college graduates (Huff Post, 2012) and making the accompanying employment decisions. Where in the past trained secretaries and administrative assistants frequently took instruction or dictation and polished communications before handing them back for signature, most correspondence now goes out as personally written email without benefit of review. As such, the sender is evaluated directly on what the recipient reads. Thus, writing skills are, indeed, important for everyone.

Increasingly, colleges of all types, and especially community colleges, face issues of accountability with regard to the success and preparation of their graduating students. Yet both preparation and success depend, in many cases, on students' readiness and initial training at the outset of their college careers, and colleges are finding that many students arrive ill-prepared for college-level writing and face the resulting consequences. Additionally, community college enrollments increasingly comprise students whose native language is not English (Crandall & Sheppard, 2004), as well as students returning to school after long absences, whether because they were rearing families, because they need retraining after economic changes, business closures, or outsourcing, or simply because of personal, unfulfilled desires for which they have determined that now is the right time. Regardless of the specific reasons, the community college must be able to correctly assess the writing skill levels of the entering freshman class to determine whether students need a developmental writing class or whether their command of language is sufficient for a regular writing class. To this end, this paper is written to assist in the overall evaluation process by reviewing possible traditional and non-traditional assessment instruments, their strengths and weaknesses, and their impact on the selected sample population, and by recommending a procedural implementation that improves students' prospects of success in college (Davidson, 1976) and broadens their capabilities.

Possible Assessment Instruments

Career Education Corp. (2012) suggests possible traditional and non-traditional means of assessment. These include:

Traditional
- Multiple choice tests
- Whole answer tests
- Problem solutions
- Short answer essay tests
- Portfolio assessments
- Expressive and analytical writing
- Artwork
- Performance

Non-Traditional
- Authentic assessment
- Alternative assessment
- Performance-based assessment

From a practical standpoint, in order to meet its purposes, the assessment instrument needs to be administrable to large numbers of people within a reasonable period of time, preferably under an hour, and scored accurately and affordably. However, since the requested purpose of this specific instrument is to determine one of two entry points for freshmen at this community college, the instrument need only quantify each student's mastery level as a summative evaluation (Bell, 2012a, 2012b; Creswell, 2012; Driscoll, 2005; Elbow, 2000) on the basis of whether to place the student in the developmental writing class or the regular writing class, as all freshmen will take one class or the other. However, because data collection tools can provide a better understanding of the statistics of the entering classes, accountability, fairness, and reliability and validity, it is recommended that records of each student's age, sex, and native language be codified along with the student's assigned-class status based on the exam.
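As a minimal sketch of this record-keeping (the file name and field names below are hypothetical, not drawn from the report itself), the codified data might be logged as follows:

    import csv

    # Hypothetical fields for the recommended placement log.
    FIELDS = ["student_id", "age", "sex", "native_language", "placement"]

    def record_placement(path, student):
        """Append one entering student's codified record to a CSV log."""
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # new file: write the header row first
                writer.writeheader()
            writer.writerow(student)

    record_placement("placements.csv", {
        "student_id": "0001",
        "age": 24,
        "sex": "F",
        "native_language": "Spanish",
        "placement": "developmental",  # or "regular"
    })

Keeping placement status alongside age, sex, and native language makes the later fairness and reliability checks a matter of simple tabulation.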

Reliability and Validity

Creswell (2012) cites reliability and validity as key components of any instrument of measurement. He notes that reliability concerns the consistency (stability) with which one can depend on the results (scores) from one test to another and from one time frame to another, while validity concerns how well the test or assessment instrument measures what it was designed to measure. Solving both of these issues successfully in the context of evaluating writing is a challenge because of the inherent difficulties related to the observer/evaluator. It is not so difficult to assign a grade or placement score; it is difficult to make sure that the scoring is fair and unbiased in its approach while maintaining inter-rater reliability at the same time (Beyreli & Ari, 2009; Creswell, 2012). On the one hand, there are those who support the use of rubrics as a means of evaluating everyone on a fair and objective basis. However, even rubric assessments have their pitfalls with regard to reliability, especially where raters have not received detailed inter-rater training (Fang & Wang, 2011). In that case, reliability comes down to how closely the raters' training has attuned them to the rating criteria. Fang and Wang (2011) note the challenge of discerning what is meant within rubric descriptions when often there is no appreciable description at all. From the sample rubric they include, they ask what is really meant by terms such as "exceptionally clear, focused, and engaging" in relation to ideas and content. Will inter-raters agree that a voice is "expressive, engaging, sincere"? What constitutes a "high degree of craftsmanship," or word choice that is "precise, carefully chosen"? In these cases (they suspect most cases) the evaluator is left to establish these criteria reflectively from his or her own experience, rather than from a specific standard that is exacting for every rater. Thus, what means one thing to one rater may not mean exactly the same to another, and scoring may be somewhat subjective from one case to the next. This weakness can be overcome with extensive training that clarifies, for all raters, the exact meaning of a rubric assessment's terminology.

In contrast, Elbow (2000) notes that portfolios improve validity by giving a better picture of students' writing abilities, since they allow multiple pieces of work, produced across several periods of time, to be evaluated. The challenge he notes is that as validity improves in this way, reliability suffers: the process gives a broader assessment and a better picture of the writer's abilities across areas, times, and writing styles, but it is likely to increase scoring disparities among inter-raters, thus depressing reliability. To counter this effect, Beyreli and Ari (2009) analyze and correlate the differences among inter-raters' scores so that the necessary adjustments can be made.
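To make the inter-rater question concrete, the sketch below computes two simple concordance measures, exact agreement and the Pearson correlation, for a pair of raters scoring the same essays on a 1-to-6 holistic scale. It illustrates the general idea only; it is not the concordance procedure Beyreli and Ari actually used, and the scores are invented.

    from statistics import mean

    def interrater_stats(r1, r2):
        """Exact-agreement rate and Pearson correlation for two raters'
        holistic scores (1-6) over the same set of essays."""
        assert len(r1) == len(r2) and len(r1) > 1
        exact = sum(a == b for a, b in zip(r1, r2)) / len(r1)
        m1, m2 = mean(r1), mean(r2)
        cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2))
        var1 = sum((a - m1) ** 2 for a in r1)
        var2 = sum((b - m2) ** 2 for b in r2)
        return exact, cov / (var1 * var2) ** 0.5

    # Invented scores from two trained raters on ten essays.
    rater1 = [4, 3, 5, 2, 6, 4, 3, 5, 2, 4]
    rater2 = [4, 2, 5, 3, 6, 4, 4, 5, 2, 3]
    exact, r = interrater_stats(rater1, rater2)
    print(exact, round(r, 2))  # 0.6 exact agreement; r is about 0.87

A low exact-agreement rate alongside a high correlation is exactly the pattern that rater training, or a post hoc adjustment of the kind Beyreli and Ari describe, is meant to address.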

The use of portfolios may still be of benefit for certain applications, but it may not be the most effective option for selecting the two sample populations required here, given the time constraints of the evaluation process and the circumstances of students who have been out of school for extended periods or whose primary language is not English. Bers and Smith (1990) point out that, because of the pressure on community colleges to meet the needs of diverse populations, including those for whom English is not their primary language, adult learners, and those attending community college as a preparatory or interim stop on the way to a four-year college, assessment programs must remain highly accountable, and they find that holistic scoring is most appropriate (Bers & Smith, 1990; De Ayalla, Dodd, & Koch, 1989). To Bers and Smith, holistic scoring refers to assigning an essay a global rating of overall quality rather than a set of ratings along separate dimensions. Through extensive training of the raters and having two readers score each essay on a scale of 1 to 6, they cite agreement with Cooper's (1977) assertion that this remains the most valid and direct means of rank-ordering students by writing ability. However, once again, because the evaluation needed for this assessment program must be implemented and scored in a reasonable amount of time and at minimal extraneous expense (as with most anything related to schools), this may not be the best option either. For the purposes intended (validity) and for the best offer of reliability, a test that can be administered within an hour to masses of incoming students, and graded somewhat objectively based on the skill levels represented on paper during that hour, combining short essays with fill-in-the-blank grammatical assessment, might be the best solution to the challenge.
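As an illustration of how such a combined instrument could drive the two-way placement decision, the following sketch averages two readers' holistic essay ratings and merges them with an objectively scored grammar section. The weights and cutoff are invented for the example; an actual cutoff would have to be validated locally.

    def place_student(essay_scores, grammar_correct, grammar_total,
                      essay_weight=0.5, cutoff=0.6):
        """Hypothetical placement rule: average the readers' 1-6 holistic
        ratings, rescale both parts to 0-1, combine, and compare to a cutoff."""
        essay_part = (sum(essay_scores) / len(essay_scores) - 1) / 5
        grammar_part = grammar_correct / grammar_total
        combined = essay_weight * essay_part + (1 - essay_weight) * grammar_part
        return "regular" if combined >= cutoff else "developmental"

    # Two trained readers rated the essay 4 and 5; 32 of 40 grammar items correct.
    print(place_student([4, 5], 32, 40))  # -> regular

A rule of this shape preserves the summative, two-way character of the placement while leaving the essay portion open to holistic judgment.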

Fairness: Eliminating Bias

With the typically diverse populations entering community colleges, the need for fairness and sensitivity toward bias is of concern. While the direct charge of community colleges is to educate the general public regardless of background, in the interest of effectiveness and efficiency there are still certain standards to uphold, and the doctrine of fairness is among them.
To this end, the development of the assessment tool needs to take into account those for whom English is not their native language and those for whom proper English is not their customary language. The elements of the assessment need to provide sufficient time for those being assessed to understand the questions and communicate their answers in the best light of their skills. Rather than extending the time allotted only to these students, it is likely a better idea to extend the time for the overall assessment and/or reduce the number of questions and writing samples required. Note, however, that because genre (Davidson, 1976) and other elements come into play, an approach that incorporates some elements of holistic scoring, with two inter-raters, should be considered.

Davidson suggests that the problems of bias created by free-composition and multiple-choice assessments can be overcome with an assessment structured to give feedback on the control of significant structures of subordination as a predominant measure of college-level writing. With this in mind, he recommends the use of an instrument that (1) measures discrete, significant grammatical elements of writing ability for diagnostic as well as placement purposes; (2) requires students to actively employ the language elements being tested; (3) can be easily and consistently graded; and (4) allows sufficient time for respondents to demonstrate their ability.

Pike (1989), on the other hand, notes that his research on the American College Test College Outcome Measures Program (ACT-COMP) showed elements of bias against black students in this assessment (and thus, likely, also against those for whom English is not their native language). However, because of the narrow banding of the construct being decided (whether to enroll students in a developmental writing class or a regular writing class), elements of this test may still be useful for consideration, especially since some of its components specifically address writing skills.
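Davidson's third criterion, that the instrument be easily and consistently graded, is simplest for the discrete grammatical items. A minimal sketch follows; the items and answer key are invented examples, not Davidson's.

    # Invented answer key for fill-in-the-blank items on subordination
    # and related structures.
    KEY = {
        1: "has gone",   # verb form in a subordinate clause
        2: "whom",       # relative pronoun case
        3: "were",       # subjunctive in an if-clause
    }

    def score_grammar(responses, key=KEY):
        """Count keyed matches; normalizing responses keeps the grading
        consistent no matter who runs the scoring."""
        correct = sum(
            responses.get(item, "").strip().lower() == answer
            for item, answer in key.items()
        )
        return correct, len(key)

    print(score_grammar({1: "has gone", 2: "who", 3: "Were "}))  # -> (2, 3)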


Conclusion

In conclusion, there are different avenues by which this particular assessment can be approached. The best approach, however, will maximize the number of students for whom the results are reliable and valid while minimizing the time needed to complete the examination and carry out the evaluation. This can even include handing out, prior to the assessment, the written material on which the test (or portions of it) will be based (Horton & Diaz, 2011), in both Standard English and the predominant non-English language(s). While the assessment will measure students' writing skills in English, offering both English and non-English versions of the source material will aid comprehension and help offset language disparities. If students object to being classified as needing the developmental English class, a provision can always be made for additional accountability through a more comprehensive assessment tool.

References

Bell, K. (2012a). Kelvin Bell's motivation and self-regulation in learning. Unit 5 individual project: EDU622-1202A. American InterContinental University Online.


Bell, K. (2012b). Compare and contrast assessment and evaluation. Unit 4 individual project: EDU604-1202B. American InterContinental University Online.

Bers, T. H., & Smith, K. E. (1990). Assessing assessment programs: The theory and practice of examining reliability and validity of a writing placement test. Community College Review, 18(3), 17.

Beyreli, L., & Ari, G. (2009). The use of analytic rubric in the assessment of writing performance: Inter-rater concordance study. Educational Sciences: Theory and Practice, 9(1), 105-125.

Career Education Corp. (2012). Assessing student performance. Retrieved from https://mycampus.aiu-online.com/courses/EDU602/u4/hub1/hub.html

Crandall, J., & Sheppard, K. (2004). Adult ESL and the community college. New York: Council for Advancement of Adult Literacy. Retrieved from http://caalusa.org/eslreport.pdf

Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Boston, MA: Pearson Education.

Davidson, D. M. (1976). Assessing writing ability of ESL college freshmen.

De Ayalla, R. J., Dodd, B. G., & Koch, W. R. (1989). A comparison of the graded response and partial credit models for assessing writing ability. Retrieved from ERIC.

Driscoll, M. P. (2005). Psychology of learning for instruction (3rd ed.). Boston, MA: Pearson Education.

Elbow, P. (2000). The conflict between reliability and validity. In Everyone can write: Essays toward a hopeful theory of writing and teaching writing. Retrieved from the American InterContinental University Online Library and http://books.google.com/books?hl=en&lr=&id=t2ypJjLMNrYC&oi=fnd&pg=PR9&dq=Elbow,+P.+%282000%29.+The+conflict+between+reliability+and+validity&ots=Ri8PXvPHHH&sig=1Ogt-9OPdKptOVn5PhDrJVudbzo#v=onepage&q&f=false

Fang, Z., & Wang, Z. (2011). Beyond rubrics: Using functional language analysis to evaluate student writing. Australian Journal of Language & Literacy, 34(2), 147-165.

Horton, E., & Diaz, N. (2011). Learning to write and writing to learn social work concepts: Application of writing across the curriculum strategies and techniques to a course for undergraduate social work students. Journal of Teaching in Social Work, 31(1), 53-64.

Huff Post. (2012). Boomers who start businesses: The next generation of entrepreneurs. Retrieved from http://www.huffingtonpost.com/2012/01/09/boomers-who-start-businesses_n_1185394.html

Pike, G. (1989). The performance of black and white students on the ACT-COMP exam: An analysis of differential item functioning using Samejima's graded model (Research Report 89-11). Knoxville: University of Tennessee.
