
ESTABLISHING HIGH QUALITY CLASSROOM ASSESSMENT

By: Marife S. Magpantay



The components of high-quality classroom assessment:

Clear and Appropriate Targets

Select Appropriate Methods
  Types of Methods:
  o Objective
  o Essay
  o Performance-based
  o Oral
  o Observation
  o Self-report
  Match the method to the target

Validity
  Inferences, uses, and consequences
  Types of evidence

Reliability
  Error
  Estimating performance

Fairness
  Public targets and assessments
  Opportunity to learn
  Prerequisites
  Avoiding assessment bias

Positive Consequences
  Students
  Teachers

Practicality and Efficiency
  Familiarity with method
  Time
  Complexity
  Ease of scoring
  Ease of interpretation
  Cost
APPROPRIATENESS OF ASSESSMENT METHODS
Matching Targets with Methods Scorecard

(Ratings: 5 = strong match, 1 = weak match)

Targets   | Objective | Essay | Performance-Based | Oral Questions | Observations | Self-Report
Knowledge |     5     |   4   |         3         |       4        |      3       |      2
Reasoning |     2     |   5   |         4         |       4        |      2       |      2
Skills    |     1     |   3   |         5         |       2        |      5       |      3
Products  |     1     |   1   |         5         |       2        |      4       |      4
Affect    |     1     |   2   |         4         |       4        |      4       |      5
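Read as a lookup table, the scorecard can guide method selection directly. A minimal sketch, assuming the ratings above; the `best_methods` function name is illustrative, not from the deck:

```python
# Scorecard ratings copied from the table above (5 = strong match, 1 = weak match).
SCORECARD = {
    "Knowledge": {"Objective": 5, "Essay": 4, "Performance-Based": 3,
                  "Oral Questions": 4, "Observations": 3, "Self-Report": 2},
    "Reasoning": {"Objective": 2, "Essay": 5, "Performance-Based": 4,
                  "Oral Questions": 4, "Observations": 2, "Self-Report": 2},
    "Skills":    {"Objective": 1, "Essay": 3, "Performance-Based": 5,
                  "Oral Questions": 2, "Observations": 5, "Self-Report": 3},
    "Products":  {"Objective": 1, "Essay": 1, "Performance-Based": 5,
                  "Oral Questions": 2, "Observations": 4, "Self-Report": 4},
    "Affect":    {"Objective": 1, "Essay": 2, "Performance-Based": 4,
                  "Oral Questions": 4, "Observations": 4, "Self-Report": 5},
}

def best_methods(target: str) -> list:
    """Return the assessment method(s) rated highest for a learning target."""
    ratings = SCORECARD[target]
    top = max(ratings.values())
    return [method for method, score in ratings.items() if score == top]

print(best_methods("Skills"))  # Performance-Based and Observations both rate 5
```

For "Skills", two methods tie at 5, which matches the scorecard's point that a single target can be well served by more than one method.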
Different Methods of Assessment

Objective (supply): short answer, completion
Objective (selection): multiple choice, matching, true or false
Essay: restricted response
Performance-Based: presentations, papers, projects, athletics, demonstrations, exhibits
Oral Question: oral examinations, conferences, interviews
Observation: informal, formal
Self-Report: attitude surveys, sociometric devices, questionnaires, inventories
Validity

• It is a characteristic that refers to the appropriateness of the inferences, uses, and consequences that result from the test or other method of gathering information.
• The extent to which a test measures what it is supposed to measure.
Sources of Information for Validity

• Content-Related Evidence: the extent to which the assessment is representative of the domain of interest.
• Criterion-Related Evidence: the relationship between the assessment and another measure of the same trait.
• Construct-Related Evidence: the extent to which the assessment is a meaningful measure of an unobservable trait or characteristic.
Professional Judgments in Establishing Content-Related Evidence for Validity

Learning Targets: What learning targets will be assessed? How much of the assessment will be done on each target area?
Content: What content is most important? What topics will be assessed? How much of the assessment should be done in each topic?
Instruction: What content and learning targets have been emphasized in instruction?
Assessment: Are assessments adequate samples of students' performance in each topic area and each target?
Criterion-Related Evidence

• Provides evidence of validity by relating an assessment to some other valued measure (criterion) that either provides an estimate of current performance (concurrent) or predicts future performance (predictive).
Example of Criterion-Related Evidence

• If your assessment of a student's skill in using a microscope through observation coincides with the student's score on a quiz that tests the steps in using microscopes, then you have criterion-related evidence that your inference about the skill of this student is valid.
Construct-Related Evidence

• A construct is an unobservable trait or characteristic that a person possesses, such as intelligence, reading comprehension, honesty, self-concept, attitude, reasoning ability, learning style, and anxiety.
Suggestions for Enhancing Validity

Ask others to judge the clarity of what you are assessing.
Check to see if different ways of assessing the same thing give
the same result.
Sample a sufficient number of examples of what is being
assessed.
Prepare a detailed table of specifications.
Ask others to judge the match between the assessment items and the objectives of the assessment.
Compare groups known to differ on what is being assessed.
Compare scores taken before to those taken after instruction.
Compare predicted consequences to actual consequences.
Compare scores on similar but different traits.
Provide adequate time to complete the assessment.
Ensure appropriate vocabulary, sentence structure and item
difficulty.
Ask easy questions first.
Use different methods to assess the same thing.
Reliability

• It is concerned with the consistency, stability, and dependability of the results.
Student | Addition Quiz 1 | Addition Quiz 2 | Subtraction Quiz 1 | Subtraction Quiz 2
Rob     |       18        |       16        |        13          |        20
Carrie  |       10        |       12        |        18          |        10
Ryann   |        9        |        8        |         8          |        14
Felix   |       16        |       15        |        17          |        12
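One informal way to see which quiz pair gives consistent results is to correlate each student's scores across the two administrations; a reliable measure yields a high positive correlation. A minimal sketch using the scores from the table above, with a plain Pearson correlation (the helper function is illustrative, not part of the deck):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Rob, Carrie, Ryann, Felix (scores copied from the table above)
addition_q1, addition_q2 = [18, 10, 9, 16], [16, 12, 8, 15]
subtraction_q1, subtraction_q2 = [13, 18, 8, 17], [20, 10, 14, 12]

print(round(pearson(addition_q1, addition_q2), 2))   # ~0.93: consistent ranking
print(round(pearson(subtraction_q1, subtraction_q2), 2))  # negative: inconsistent
```

The addition quizzes rank the four students almost identically (r ≈ 0.93), while the subtraction quizzes reverse them, which is exactly the kind of inconsistency the reliability concept flags.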
Sources of Error in Assessment

Actual or true knowledge, reasoning, skills, products, or affect → Assessment → Observed score

Internal Error
• Health
• Mood
• Motivation
• Test-taking skills
• Anxiety
• Fatigue
• General ability

External Error
• Directions
• Luck
• Item ambiguity
• Heat in room, lighting
• Sampling of items
• Observer differences
• Test interruptions
• Scoring
• Observer bias
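The relationship above is often summarized as observed score = true score + error. A small simulation, with all numbers invented purely for illustration, shows why repeated assessment helps: random errors cancel out on average.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

TRUE_SCORE = 80  # the student's actual level (hypothetical)

def observe():
    """One assessment: the true score distorted by random error."""
    error = random.uniform(-10, 10)  # stand-in for internal + external error sources
    return TRUE_SCORE + error

single = observe()
average_of_many = sum(observe() for _ in range(1000)) / 1000

print(single)           # any one observed score may sit well off the true score
print(average_of_many)  # the average of many observations lands close to 80
```

This is the logic behind the suggestions that follow: more items, more raters, and repeated assessment all give error more chances to average out.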
Suggestions for Enhancing Reliability

Use a sufficient number of items or tasks. (Other things being equal, longer tests are more reliable.)
Use independent raters or observers who provide similar scores for the same performances.
Construct items and tasks that clearly differentiate
students on what is being assessed.
Make sure the assessment procedures and scoring are as
objective as possible.
Continue assessment until results are consistent.
Eliminate or reduce the influence of extraneous events or
factors.
Use shorter assessments more frequently than fewer long
assessments.
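The first suggestion ("longer tests are more reliable") has a classical quantitative form in psychometrics, the Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened by a factor k. The formula is standard measurement theory, not something stated in this deck:

```python
def spearman_brown(reliability: float, k: float) -> float:
    """Predicted reliability of a test lengthened k times over
    (Spearman-Brown prophecy formula)."""
    return k * reliability / (1 + (k - 1) * reliability)

# Doubling a test whose current reliability is 0.60:
print(spearman_brown(0.60, 2))  # 0.75: the longer test is predicted to be more reliable
```

Note the diminishing returns: each doubling raises reliability by less than the last, which is why the deck also recommends improving items and scoring rather than only adding length.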
Fairness
A fair assessment is one that provides
all students with an equal opportunity to
demonstrate achievement.
Key Components of Fairness

• Student knowledge of learning targets and assessments
• Opportunity to learn
• Prerequisite knowledge and skills
• Avoiding teacher stereotypes
• Avoiding bias in assessment tasks and procedures
Student Knowledge of Learning Targets and Assessments

• How often have you taken a test and thought, "Had I only known the teacher was going to test this content, I would have studied it"?
Opportunity to Learn

• It means that students know what to learn and then are provided ample time and appropriate instruction.
Prerequisite Knowledge and Skills

• Example: For math reasoning skills, reading skills are a prerequisite.
Avoiding Teacher Stereotypes

• Stereotypes are judgments about how groups of people will behave based on characteristics such as gender, race, socioeconomic status, physical appearance, and other traits.
• Avoid labeling your students with words such as "shy," "gifted," "smart," "poor," "learning disabled," "leader," and "at-risk."
Avoiding Bias in Assessment Tasks and Procedures

• Bias is present if the assessment distorts performance due to the student's ethnicity, gender, race, religious background, and so on.
Positive Consequences

• Students
• Teachers
Practicality and Efficiency

• Familiarity with method
• Time
• Complexity
• Ease of scoring
• Ease of interpretation
• Cost
Exercises

1. Indicate whether each of the following statements is correct or incorrect. Explain why.
   a. A test can be reliable even without validity.
   b. A valid test is reliable.

2. Why is it important for teachers to consider practicality and efficiency in selecting their assessments, as well as more technical aspects like validity and reliability?
