
Lecturer:

Rizky Gushendra, M.Ed.

Language Assessment
Assessing Writing

Name of members:
Suhriadi
Suciati Anandes
Endra Heriyanto
Jastri Permata Sari
Mira Ayu Defitri
Rifka Zahera
Sispa Deni

State Islamic University of Sultan Syarif Kasim Riau


Faculty of Education and Teacher Training
English Education Department
2015

CHAPTER I
A. Introduction
The new assessment culture aims at assessing higher-order thinking processes and competences instead of factual knowledge and lower-level cognitive skills, which has led to a strong interest in various types of performance assessment. This is due to the belief that open-ended tasks are needed in order to elicit students' higher-order thinking.
According to Black (1998: 87), performance assessment deals with activities which can be direct models of reality, and some authors write about authentic assessment and tasks relating to the real world. The notion of reality is not a way of escaping the fact that all learning is a product of the context in which it occurs, but rather an attempt to better reflect the complexity of the real world and to provide more valid data about student competence. As a consequence, performance assessments are designed to capture more elusive aspects of learning by letting students solve realistic or authentic problems.
Performance assessment consists of two parts: a task and a set of scoring criteria or a scoring rubric (Perlman, 2003). Here, the assessor uses a rubric to assess writing. Rubrics are tools for evaluating and providing guidance on students' writing. Andrade (2005) claimed that rubrics significantly enhance the learning process by providing both students and instructors with a clear understanding of the goals of the writing assignment and the scoring criteria.
Rubrics used in many subject areas in higher education generally include two elements: (a) a statement of the criteria to be evaluated, and (b) an appropriate and relevant scoring system (Peat, 2006). Rubrics can be classified as either holistic or analytic (Moskal, 2000). Holistic rubrics award a single score based on the student's overall performance, whereas analytic rubrics give multiple scores along several dimensions. In analytic rubrics, the scores for each dimension can be summed for the final grade. Although an advantage of the holistic rubric is that papers can be scored quickly, the analytic rubric provides more detailed feedback for the student and increases consistency between graders (Zimmaro, 2004). Based on the explanation above, a holistic rubric will be used in this writing assessment.
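To make the distinction concrete, the short Python sketch below contrasts the two rubric types. It is only an illustration: the dimension names and the five-point scale are hypothetical and are not taken from the sources cited above.

```python
# Hypothetical illustration: scoring one essay with an analytic rubric
# versus a holistic rubric. Dimension names and scales are invented.

analytic_scores = {
    "content": 4,        # each dimension scored on a 1-5 scale
    "organization": 3,
    "grammar": 4,
    "vocabulary": 5,
}

# Analytic rubric: one score per dimension, optionally summed for a final grade.
analytic_total = sum(analytic_scores.values())
print(f"Analytic scores: {analytic_scores}, total = {analytic_total}")

# Holistic rubric: a single judgement of the overall performance.
holistic_level = "B"  # e.g., one level from A-E chosen by overall impression
print(f"Holistic level: {holistic_level}")
```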
Regardless of its format, when a rubric is used as the basis for evaluating student performance, it is a type of measurement instrument and, as such, it is important that the rubric exhibits reliability (i.e., consistency of scores across repeated measurements) and validity (i.e., the extent to which scores truly reflect the underlying variable of interest). Although reliability and validity have been noted as issues of concern in rubric development, the reliability and validity of grading rubrics have seldom been assessed, most likely due to the effort and time commitment required to do so.

Reliability

In order for a holistic rubric scoring system to be of any value, it must be shown to be reliable. Reliability refers to the consistency of assessment scores. For example, on a reliable test, a student would be expected to attain the same score regardless of when the student completed the assessment, when the response was scored, and who scored the response. On an unreliable examination, a student's score may vary based on factors that are not related to the purpose of the assessment.
Many teachers are probably familiar with the terms "test-retest reliability," "equivalent-forms reliability," "split-half reliability" and "rational equivalence reliability" (Gay, 1987). Each of these terms refers to statistical methods that are used to establish consistency of student performances within a given test or across more than one test. These types of reliability are of more concern in standardized or high-stakes testing than in classroom assessment. In a classroom, students' knowledge is repeatedly assessed, and this allows the teacher to adjust as new insights are acquired.
The two forms of reliability that typically are considered in classroom assessment and
in rubric development involve rater (or scorer) reliability. Rater reliability generally refers to
the consistency of scores that are assigned by two independent raters and that are assigned by
the same rater at different points in time. The former is referred to as "inter-rater reliability"
while the latter is referred to as "intra-rater reliability."

Inter-rater Reliability
Inter-rater reliability refers to the concern that a student's score may vary from rater to
rater. Students often criticize exams in which their score appears to be based on the subjective
judgment of their instructor. For example, one manner in which to analyze an essay exam is
to read through the students' responses and make judgments as to the quality of the students'
written products. Without set criteria to guide the rating process, two independent raters may
not assign the same score to a given response. Each rater has his or her own evaluation


criteria. Scoring rubrics respond to this concern by formalizing the criteria at each score level.
The descriptions of the score levels are used to guide the evaluation process. Although
scoring rubrics do not completely eliminate variations between raters, a well-designed
scoring rubric can reduce the occurrence of these discrepancies.
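As an illustration of how such discrepancies can be quantified, the following Python sketch computes simple percent agreement and Cohen's kappa for two raters. The scores are hypothetical, and the statistic is a standard chance-corrected agreement measure rather than a procedure prescribed by the sources cited here.

```python
from collections import Counter

# Hypothetical holistic scores (A-E) assigned by two independent raters
# to the same ten essays.
rater_1 = ["A", "B", "B", "C", "D", "B", "A", "C", "E", "B"]
rater_2 = ["A", "B", "C", "C", "D", "B", "B", "C", "E", "B"]

n = len(rater_1)

# Observed agreement: proportion of essays on which the raters match.
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

# Expected chance agreement, based on each rater's score distribution.
counts_1, counts_2 = Counter(rater_1), Counter(rater_2)
expected = sum(
    (counts_1[level] / n) * (counts_2[level] / n)
    for level in set(rater_1) | set(rater_2)
)

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```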

Intra-rater Reliability
Factors that are external to the purpose of the assessment can impact the manner in
which a given rater scores student responses. For example, a rater may become fatigued with
the scoring process and devote less attention to the analysis over time. Certain responses may
receive different scores than they would have had they been scored earlier in the evaluation.
A rater's mood on the given day or knowing who a respondent is may also impact the scoring
process. A correct response from a failing student may be more critically analyzed than an
identical response from a student who is known to perform well. Intra-rater reliability refers
to each of these situations in which the scoring process of a given rater changes over time.
The inconsistencies in the scoring process result from influences that are internal to the rater
rather than true differences in student performances. Well-designed scoring rubrics respond to
the concern of intra-rater reliability by establishing a description of the scoring criteria in
advance. Throughout the scoring process, the rater should revisit the established criteria in
order to ensure that consistency is maintained.

Validity
Validation is the process of accumulating evidence that supports the appropriateness
of the inferences that are made of student responses for specified assessment uses. Validity
refers to the degree to which the evidence supports that these interpretations are correct and
that the manner in which the interpretations are used is appropriate (American Educational
Research Association, American Psychological Association & National Council on
Measurement in Education, 1999). Three types of evidence are commonly examined to
support the validity of an assessment instrument: content, construct, and criterion. This
section begins by defining these types of evidence and is followed by a discussion of how
evidence of validity should be considered in the development of scoring rubrics.

Content-Related Evidence


Content-related evidence refers to the extent to which a student's responses to a given assessment instrument reflect that student's knowledge of the content area that is of interest. For example, a history exam in which the questions use complex sentence structures may unintentionally measure students' reading comprehension skills rather than their historical knowledge. A teacher who is interpreting a student's incorrect response may conclude that the student does not have the appropriate historical knowledge when actually that student does not understand the questions. The teacher has misinterpreted the evidence, rendering the interpretation invalid.

Construct-Related Evidence
Constructs are processes that are internal to an individual. An example of a construct
is an individual's reasoning process. Although reasoning occurs inside a person, it may be
partially displayed through results and explanations. An isolated correct answer, however,
does not provide clear and convincing evidence of the nature of the individual's underlying
reasoning process. Although an answer results from a student's reasoning process, a correct
answer may be the outcome of incorrect reasoning. When the purpose of an assessment is to
evaluate reasoning, both the product (i.e., the answer) and the process (i.e., the explanation)
should be requested and examined.

Criterion-Related Evidence
The final type of evidence that will be discussed here is criterion-related evidence.
This type of evidence supports the extent to which the results of an assessment correlate with
a current or future event. Another way to think of criterion-related evidence is to consider the
extent to which the students' performance on the given task may be generalized to other, more
relevant activities (Rafilson, 1991).


CHAPTER II
A. Report

Reliability Concerns in Rubric Development

Clarifying the scoring rubric is likely to improve both inter-rater and intra-rater reliability. A scoring rubric with well-defined score categories should assist in maintaining consistent scoring regardless of who the rater is or when the rating is completed. The following questions may be used to evaluate the clarity of a given rubric: (1) Are the scoring categories well defined? (2) Are the differences between the score categories clear? (3) Would two independent raters arrive at the same score for a given response based on the scoring rubric? If the answer to any of these questions is "no", then the unclear score categories should be revised.
One method of further clarifying a scoring rubric is through the use of anchor papers.
Anchor papers are a set of scored responses that illustrate the nuances of the scoring rubric. A
given rater may refer to the anchor papers throughout the scoring process to illuminate the
differences between the score levels.


After every effort has been made to clarify the scoring categories, other teachers may
be asked to use the rubric and the anchor papers to evaluate a sample set of responses. Any
discrepancies between the scores that are assigned by the teachers will suggest which
components of the scoring rubric require further explanation. Any differences in
interpretation should be discussed and appropriate adjustments to the scoring rubric should be
negotiated. Although this negotiation process can be time consuming, it can also greatly
enhance reliability (Yancey, 1999).
Another reliability concern is the appropriateness of the given scoring rubric to the
population of responding students. A scoring rubric that consistently measures the
performances of one set of students may not consistently measure the performances of a
different set of students. For example, if a task is embedded within a context, one population
of students may be familiar with that context and the other population may be unfamiliar with
that context. The students who are unfamiliar with the given context may achieve a lower
score based on their lack of knowledge of the context. If these same students had completed a
different task that covered the same material that was embedded in a familiar context, their
scores may have been higher. When the cause of variation in performance and the resulting
scores is unrelated to the purpose of the assessment, the scores are unreliable.
Sometimes during the scoring process, teachers realize that they hold implicit criteria
that are not stated in the scoring rubric. Whenever possible, the scoring rubric should be
shared with the students in advance in order to allow students the opportunity to construct the
response with the intention of providing convincing evidence that they have met the criteria.
If the scoring rubric is shared with the students prior to the evaluation, students should not be
held accountable for the unstated criteria. Identifying implicit criteria can help the teacher
refine the scoring rubric for future assessments.

Validity Concerns in Rubric Development

Concerns about the valid interpretation of assessment results should begin before the
selection or development of a task or an assessment instrument. A well-designed scoring
rubric cannot correct for a poorly designed assessment instrument. Since establishing validity
is dependent on the purpose of the assessment, teachers should clearly state what they hope to
learn about the responding students (i.e., the purpose) and how the students will display these


proficiencies (i.e., the objectives). The teacher should use the stated purpose and objectives to
guide the development of the scoring rubric.
In order to ensure that an assessment instrument elicits evidence that is appropriate to
the desired purpose, Hanny (2000) recommended numbering the intended objectives of a
given assessment and then writing the number of the appropriate objective next to the
question that addresses that objective. In this manner, any objectives that have not been
addressed through the assessment will become apparent. This method for examining an
assessment instrument may be modified to evaluate the appropriateness of a scoring rubric.
First, clearly state the purpose and objectives of the assessment. Next, develop scoring
criteria that address each objective. If one of the objectives is not represented in the score
categories, then the rubric is unlikely to provide the evidence necessary to examine the given
objective. If some of the scoring criteria are not related to the objectives, then, once again, the
appropriateness of the assessment and the rubric is in question. This process for developing a
scoring rubric is illustrated in Figure 3.
Figure 3. Evaluating the Appropriateness of Scoring Categories to a Stated Purpose
Step 1: State the assessment purpose and objectives.
Step 2: Develop score criteria for each objective.
Step 3: Reflect on the following: Are all of the objectives measured through the scoring criteria? Is any scoring criteria unrelated to the objectives?

Reflecting on the purpose and the objectives of the assessment will also suggest which forms of evidence - content, construct, and/or criterion - should be given consideration.
If the intention of an assessment instrument is to elicit evidence of an individual's knowledge
within a given content area, such as historical facts, then the appropriateness of the content-related evidence should be considered. If the assessment instrument is designed to measure
reasoning, problem solving or other processes that are internal to the individual and,
therefore, require more indirect examination, then the appropriateness of the construct-related
evidence should be examined. If the purpose of the assessment instrument is to elicit
evidence of how a student will perform outside of school or in a different situation, criterion-related evidence should be considered.
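The objective-to-criteria check illustrated in Figure 3 can also be expressed as a small script. The sketch below is only illustrative: the objective numbers and criterion names are invented, and the mapping itself would come from the teacher's own assessment plan.

```python
# Hypothetical objectives for the writing task (numbered as in Hanny's method)
# and the objectives each scoring criterion is intended to address.
objectives = {1, 2, 3, 4}  # e.g., 4 = audience awareness, not covered below

criteria_to_objectives = {
    "organisation": {1, 2},
    "language_features": {3},
    "neatness": set(),  # addresses no stated objective
}

covered = set().union(*criteria_to_objectives.values())

# Figure 3, Step 3: are all objectives measured through the scoring criteria?
unmeasured_objectives = objectives - covered

# ...and are any scoring criteria unrelated to the stated objectives?
unrelated_criteria = [
    name for name, objs in criteria_to_objectives.items()
    if not objs & objectives
]

print("Objectives not measured by any criterion:", unmeasured_objectives)
print("Criteria unrelated to the stated objectives:", unrelated_criteria)
```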
Being aware of the different types of evidence that support validity throughout the
rubric development process is likely to improve the appropriateness of the interpretations
when the scoring rubric is used. Validity evidence may also be examined after a preliminary
rubric has been established. Table 1 displays a list of questions that may be useful in


evaluating the appropriateness of a given scoring rubric with respect to the stated purpose.
This table is divided according to the type of evidence being considered.
Table 1: Questions to Examine Each Type of Validity Evidence

Content:
1. Do the evaluation criteria address any extraneous content?
2. Do the evaluation criteria of the scoring rubric address all aspects of the intended content?
3. Is there any content addressed in the task that should be evaluated through the rubric, but is not?

Construct:
1. Are all of the important facets of the intended construct evaluated through the scoring criteria?
2. Is any of the evaluation criteria irrelevant to the construct of interest?

Criterion:
1. How do the scoring criteria reflect competencies that would suggest success on future or related performances?
2. What are the important components of the future or related performance that may be evaluated through the use of the assessment instrument?
3. How do the scoring criteria measure the important components of the future or related performance?
4. Are there any facets of the future or related performance that are not reflected in the scoring criteria?
Another form of validity evidence that is often discussed is "consequential evidence". Consequential evidence refers to examining the consequences or uses of the assessment results. For example, a teacher may find that the application of the scoring rubric to the evaluation of male and female performances on a given task consistently results in lower evaluations for the male students. The interpretation of this result may be that the male students are not as proficient within the area that is being investigated as the female students. It is
possible that the identified difference is actually the result of a factor that is unrelated to the
purpose of the assessment. In other words, the completion of the task may require knowledge
of content or constructs that were not consistent with the original purposes. Consequential
evidence refers to examining the outcomes of an assessment and using these outcomes to
identify possible alternative interpretations of the assessment results (American Educational
Research Association, American Psychological Association & National Council on
Measurement in Education, 1999).


Lesson Plan
School            : Cendana Senior High School
Subject           : English
Grade/Semester    : X/1
Skill             : Writing
Genre             : Recount
Test              : English Composition Essay Test
Topic             : Holiday
Time allocation   : 40 minutes (1 meeting)

I. Competence Standard
Express meaning in written texts and short essays in the form of recounts in the context of daily life.


II. Basic Competence
Comprehend the text and understand the meaning of the text.

III. Indicators
Individually, students can use the generic structure and language features of a recount text to tell about their holiday.

IV. Teaching-Learning Methods
Individual work
Pair/Group work
Online discussion

V. Teaching and Learning Activities

1. Opening (15 minutes)
Students' activities: Students greet the teacher. Students respond to the teacher's questions about their holiday.
Teacher's activities: Teacher greets the students. Teacher asks the students about their holiday. Teacher introduces the materials.

2. Explanation about the generic structure and other language features of recount text (30 minutes)
Students' activities: Students learn about the generic structure and language features of recount text. Teacher and students analyse the example together. Students tell each other about their holiday.
Teacher's activities: Teacher gives an example of a recount text about a holiday. Teacher explains the function, generic structure and language features of recount text. Teacher and students analyse the example together. Teacher asks students to work in groups of 3-4 and to tell each other about their holiday.
Materials: LCD projector, laptop.

3. Explanation about Edmodo; individual work and online discussion (35 minutes)
Students' activities: Students learn how to use Edmodo for learning. Students write a recount text about their holiday individually. Students post their writing on Edmodo and hold an online discussion there by giving comments.
Teacher's activities: Teacher explains what Edmodo is and how to use it for learning. Teacher asks students to write a recount text about their holiday individually. Teacher asks students to post their writing on Edmodo and comment on each other's work.
Materials: Edmodo; a set of computers with an internet connection for each student.

4. Closing (10 minutes)
Students' activities: Students review the materials they have learned by asking questions or giving opinions about the lesson. Students greet the teacher.
Teacher's activities: Teacher does a reflection by summing up the day's activities and evaluating the parts that need to be improved. Teacher re-explains topics that have not been mastered and asks students to read the next chapter for the next meeting. Teacher greets the students to end the meeting.

VI. Learning Media
Power Point presentation
Communicative facilitating e-tools: Edmodo.com

VII. Evaluation
Scoring by using a holistic rubric: the progress indicators in the scoring rubric have been developed to help teachers understand and evaluate their students' progress and achievement in writing. Teachers are asked to make a best-fit judgement as to the level at which their students' writing most predominantly sits for each of the seven content areas: Audience Awareness and Purpose, Content/Ideas, Structure/Organisation, Language Resources, Grammar, Spelling, and Punctuation.
Deep Features:
Audience Awareness and Purpose:
The writer aims to inform or entertain a reader or listener by reconstructing a view of the
world that the reader can enter.
Recounts centre on the sequenced retelling of experience, whether real or imagined.
There are three common types of recount that have variations in focus.

Personal recounts involve the reconstruction of a personal experience that often includes reflections on the writer's feelings.


Factual recounts involve the recounting of events from an informational perspective (A visit to McDonald's) and often include statements of observation as asides to the recounting of events (The ice-cream machine behind the counter is big and shiny. I saw people polishing it. It takes a lot of work to keep it that shiny).

Imaginative recounts may involve the writer in recounting events from an imagined perspective (A day in the life of a Viking raider) or recounting imagined events from a personal perspective (A field trip to Mars) that may include both imagined observation and comment.

Content/Ideas:

Recounts use a succinct orientating device early in the piece to introduce characters, settings and events to be recounted (i.e., who, what, why, where, when, how). A point of view, the perspective from which the recount is told, is often established here.

Events are related in time order.

Comment or observation and/or reflection is used to foreground events or details of significance to the writer. These may be interwoven with the retelling.

An optional re-orientation is an ending statement often used to reflect or comment on the events recounted or to predict future events (I had a great time at Camp Hunua. I wonder what will happen to us next year!).

Structure/Organisation:

Recounts are organised around a sequenced account of events or happenings.

They follow a time sequence in that they are organised through time (i.e.,
conjunctions and adverbials show linkages in setting events in time and ordering
the events and the passage of time).

Language Resources:

Specific people, places, and events are named (On Saturday, our class had a sleepover at Kelly Tarlton's Underwater World in Auckland or Today, we raided Lindisfarne Abbey to gather more gold for our longboat).


Detailed recounting makes extensive use of descriptive verbs, adverbs, adjectives, and idiomatic language to catch and maintain reader interest.

There is frequent use of prepositional phrases, adverbials, and adjectivals to contextualise the events that unfold.

Dialogue or direct speech is often used to give the recount a realistic feel, to
assist in the reconstruction of the events, or to provide opportunities to comment on
the happenings.

Many action verbs tell of happenings and of the behaviours of those involved.

Some relational verbs are used to tell how things are as the writer reflects, observes
or comments.

The choice and use of vocabulary often reflects the desire to create particular
images or feelings for the reader.

Verbs are commonly in the past tense, though tense can vary in the comments (On
Tuesday, Mary and I went to the shop. We are best friends.).


VIII. Assessment
Performance Assessment: Writing a recount text, to be scored using the holistic rubric below.

Achievement levels: E = Limited Achievement, D = Basic Achievement, C = Sound Achievement, B = High Achievement, A = Outstanding Achievement.

Producing Texts (Produces a wide range of well-structured and well-presented literary and factual texts for a wide variety of purposes and audiences using increasingly challenging topics, ideas, issues and written language features.)
E: Limited range of texts. Poor or no structure. Poorly presented. Limited purposes. Unable to redraft for different audiences. Basic topics, ideas, issues and language used.
D: Some structure. Basic presentation. Some drafting. Keeps to known topics and familiar issues and language. Little editing.
C: A range of topics with acceptable presentation and structure. Written for some different audiences and purposes. Starting to accept challenges. Editing evident.
B: A range of topics. Good presentation and structure. Accepts challenging topics, ideas, issues and language features. Edits well.
A: Writes well on a wide range of topics with excellent structure and presentation for a variety of purposes and audiences. Can write on challenging topics, ideas, issues using sophisticated language. Edited well.

Grammar and Punctuation (Uses knowledge of sentence structure, grammar and punctuation to edit own writing.)
E: Many errors with grammar and punctuation (less than 50% of text correct). Text difficult to read and understand. Unable to edit own work.
D: Obvious errors with grammar and punctuation (50%-65% of text correct). Effort needed to make sense of the text. Some editing of own work evident.
C: Some errors with grammar and punctuation (65%-80% of text correct). Text quite readable. Editing evident.
B: Few or some errors with grammar or punctuation (80%-90% of text correct). Text easy to read. More complex sentence structure. Strong evidence of editing own work.
A: Few or no errors with grammar or punctuation (90%+ of text correct). Text very easy to read. More complex sentence structure. Edited well.

Spelling (Spells most common words accurately and uses a range of strategies to spell unfamiliar words.)
E: Many errors (less than 50% of text correct). Meaning compromised. Few or no strategies used. Text difficult to read.
D: Obvious errors (50%-65% of text correct). Guessing needed to gauge meaning. Few strategies used. Text a little difficult to read.
C: Some errors (65%-80% of text correct). Meaning still conveyed with some guessing. Some strategies used. Text readable.
B: Few or some errors (80%-90% of text correct). Meaning not affected. Uses a variety of strategies. Text easy to read.
A: Few or no errors (90%+ of text correct). Errors minor and do not affect meaning. Uses a variety of strategies. Text very easy to read.

Handwriting and Computer (Produces texts in a fluent and legible style and uses computer technology to present these effectively in a variety of ways.)
E: Handwriting very hard to read, lacking many elements (size, slope, formation, spacing). Very poor manipulation of a keyboard. Text very hard to read.
D: Handwriting harder to read, lacking elements (size, slope, formation, spacing). Poor use of the keyboard. Text harder to read.
C: Handwriting lacks some elements. Clear but less consistent. Adequate manipulation of the keyboard. Text readable.
B: Handwriting lacks a few elements but still clear and neat. Good manipulation of the keyboard. Text easy to read.
A: Clear, neat handwriting of consistent size and slope, correctly formed. Can use a keyboard correctly to produce desired text. Text very clear and easy to read.

Context and Text (Critically analyses own texts in terms of how well they have been written, how effectively they present the subject matter and how they influence the reader.)
E: Is unable to analyse texts regarding how well they are written, presented and how the texts influence the reader.
D: Is able to do basic analysis of texts regarding how well they are written, presented and how the texts influence the reader.
C: Is able to analyse texts regarding how well they are written, presented and how the texts influence the reader.
B: Is well able to analyse texts regarding how they are written, presented and how the texts influence the reader.
A: Is very well able to critically analyse how well the texts are written, presented, how effective they are and how they influence the reader.

Structure (Critically evaluates how own texts have been structured to achieve their purpose and discusses ways of using related grammatical features and conventions of written language to shape readers' and viewers' understanding of texts.)
E: Lacks understanding of structure and purpose. Unable to discuss use of grammatical features and conventions of written language and how they are used to shape understanding.
D: Has little understanding of structure and purpose. Less able to discuss use of grammatical features and conventions of written language and how they are used to shape understanding.
C: Grasps understanding of structure and purpose. Some ability to discuss use of grammatical features and conventions of written language and how they are used to shape understanding.
B: Good understanding of structure and purpose. Well able to discuss use of grammatical features and conventions of written language and how they are used to shape understanding.
A: Excellent understanding of structure and purpose. Adept at discussing use of grammatical features and conventions of written language and how they are used to shape understanding.
Name : Muthia Andini


Visiting Bali
There were so many places to see in Bali that my friend decided to join the tours to see as
much as possible. My friend stayed in Kuta on arrival. He spent the first three days swimming
and surfing on Kuta beach. He visited some tour agents and selected two tours. The first one was
to Singaraja, the second was to Ubud. On the day of the tour, he was ready.
My friend and his group drove on through mountains. Singaraja is a city of about 90
thousand people. It is a busy but quiet town. The street are lined with trees and there are many
old Dutch houses. Then they returned very late in the evening to Kuta.
The second tour to Ubud was a very different tour. It was not to see the scenery but to see
the art and the craft of the island. The first stop was at Batubulan, a center of stone sculpture.
There my friend watched young boys were carving away at big blocks of stone. The next stop
was Celuk, a cente for silversmiths and goldensmiths. After that he stopped a little while for
lunch at Sukawati and on to mass. Mass is a tourist center My friend ten-day-stay ended very
quickly beside his two tour, all his day was spent on the beach. He went sailing or surfboarding
every day. He was quiet satisfied.

Name : Ratna Setiawati


Pangandaran Beach
The tour to Pangandaran Beach started on holiday last semester. We decided to go to
Pangandaran Beach by our motorbike. That was very interesting tour. Riding a motorbike from
my hometown, Cirebon, to Pangandaran Beach with my best friends made me feel exited.
The tour to Pangandaran Beach began at 09.00 a.m. in the morning and it took 5 hours riding
to Pangandaran Beach. There were so many story that my friends and I got when we were in the
tour such as there was my friend who got lost, ran out of fuel in the middle of jungle, and so
forth. But it was interesting, because it was the first moment that I ever had in touring.
We arrived at Pangandaran Beach at 02.00 p.m. and we stright to move to the beach. At beach
we just lied down there to stretch our muscle because of 5 hours riding. We also had a lunch
there by eating some foods that we brought from Cirebon. That was very nice moment when we
shared our own food to others.
After we had enough rest, we began to explore Pangandaran Beach. Started by exploring the
beach, and the sea using rented boat. Then we went to dive by renting some diving equipment.
We could see many coral there. We just had 2 hours to enjoy Pangandaran Beach because we had
to come back to Cirebon.
We came back to Cirebon at 04.00 p.m. It was imposible to ride in the night, so we just
decided to stay over in our friend house in Ciamis and we started to come back in the morning.
That was very nice experience that I and my friends ever had. We would never forget that
moment.

Name : Rena Kumala Cahya Ningrum


My Holiday in Bali
When I was 2nd grade of senior high school, my friends and I went to Bali. We were
there for three days. I had many impressive experiences during the vacation. First day, we visited
Sanur Beach in the morning. We saw the beautiful sunrise together. It was a great scenery. Then,
we checked in to the hotel. After prepared ourselves, we went to Tanah Lot. We met so many
other tourists there. They were not only domestic but also foreign tourists. Second day, we
enjoyed the day on Tanjung Benoa beach. We played so many water sports such as banana boat,
jetsky, speedboat etc. We also went to Penyu island to see many unique animals. They were
turtles, snakes, and sea birds. We were very happy. In the afternoon, we went to Kuta Beach to
see the amazing sunset and enjoyed the beautiful wave. The last day, we spent our time in
Sangeh. We could enjoy the green and shady forest. There were so many monkies. They were so
tame but sometimes they could be naughty. We could make a close interaction with them. After
that, we went to Sukowati market for shopping. That was my lovely time. I bought some Bali TShirt and souvenirs. In the evening, we had to check out from the hotel. We went back home
bringing so many amazing memories of Bali.

Name : Zahara Atika


Visitting to the Zoo
Yesterday my family went to the zoo to see the elephant.
When we got to the zoo, we went to the shop to buy some food to give to the animals.
After getting the food we went to the nocturnal house where we saw birds and reptiles which
only come out at night.
Before lunch we went for a ride on the elephant. It was a thrill to ride it. Dad nearly fell off when
he let go of the rope.
During lunch we fed some birds in the park. In the afternoon we saw the animals being fed.
When we returned home we were tired but happy because we had so much fun.

Name : Fitriah
A Trip at the Beach
Last week my friend and I were bored after three weeks of holidays, so we rode our bikes to
cerocok Beach, which is only five kilometres from where I live.
When we arrived at the beach, we were surprised to see there was hardly anyone there. After
having a quick dip in the ocean, which was really cold, we realized one reason there were not
many people there. It was also quite windy.
After we bought some hot chips at the takeaway store nearby, we rode our bikes down the beach
for a while, on the hard, damp part of the sand. We had the wind behind us and, before we knew
it, we were many miles down the beach. Before we made the long trip back, we decided to
paddle our feet in the water for a while, and then sit down for a rest. While we were sitting on
the beach, just chatting, it suddenly dawned on us that all the way back, we would be riding into
the strong wind.
When we finally made it back home, we were both totally exhausted! But we learned some good
lessons that day.

CHAPTER III
A. Conclusion
This paper aimed to review empirical research and illuminate the questions of how the use of rubrics can (1) enhance the reliability of scoring, (2) facilitate valid judgment of performance assessments, and (3) give positive educational consequences, such as promoting learning and/or improving instruction. A first conclusion is that the reliable scoring of performance assessments can be enhanced by the use of rubrics. In relation to reliability issues, rubrics should be analytic, topic-specific, and complemented with exemplars and/or rater training. Since performance assessments are more or less open-ended by definition, it is not always possible to restrict the assessment format to achieve high levels of reliability without sacrificing validity.
Another conclusion is that rubrics do not facilitate valid judgment of performance assessments
per se. Valid assessment could be facilitated by using a more comprehensive framework of
validity when validating the rubric, instead of focusing on only one or two aspects of validity. In
relation to learning and instruction, consequential validity is an aspect of validity that might need
further attention. Furthermore, it has been concluded that rubrics seem to have the potential of promoting learning and/or improving instruction. The main reason for this potential lies in the fact that rubrics make expectations and criteria explicit, which also facilitates feedback and self-assessment. It is thus argued that assessment quality criteria should emphasize dimensions like
transparency and fitness for self-assessment to a greater extent than is done through the
traditional reliability and validity criteria. This could be achieved through a framework of quality
criteria that acknowledges the importance of trustworthiness in assessment as well as supports a
more comprehensive view on validity issues (including educational consequences).
