
1) Test Types
   a) Language Aptitude Tests
   b) Proficiency Tests
   c) Placement Tests
   d) Diagnostic Tests
   e) Achievement Tests
2) Some Practical Steps to Test Construction
   a) Assessing Clear, Unambiguous Objectives
   b) Drawing Up Test Specifications
   c) Devising Test Tasks
   d) Designing Multiple-Choice Test Items
      I. Design each item to measure a specific objective
      II. State both stem and options as simply and directly as possible
      III. Make certain that the intended answer is clearly the only correct one
      IV. Use item indices to accept, discard, or revise items
3) Scoring, Grading, and Giving Feedback
   a) Scoring
   b) Grading
   c) Giving Feedback

Before starting:
a) What is the purpose of the test?
b) What are the objectives of the test?
c) How will the test specifications reflect the purpose and the objectives?
d) How will the test tasks be selected and the separate items arranged?
e) What kind of scoring, grading, and/or feedback is expected?

TEST TYPES
1) Language aptitude test
2) Proficiency test
3) Placement test
4) Diagnostic test
5) Achievement test

1) Language aptitude test:
- Predicts capacity and general ability to learn a language
- Has limitations and flaws: prediction is not easy
- Two standardized tests in the US: the MLAT and the PLAB

Tasks in the Modern Language Aptitude Test (MLAT):
1. Number learning: Examinees must learn a set of numbers through aural input and then discriminate different combinations of those numbers.
2. Phonetic script: Examinees must learn a set of correspondences between speech sounds and phonetic symbols.
3. Spelling clues: Examinees must read words that are spelled somewhat phonetically.
4. Words in sentences: Examinees are given a key word in a sentence and are then asked to select a word in a second sentence that performs the same grammatical function as the key word.
5. Paired associates: Examinees must quickly learn a set of vocabulary words from another language and memorize their English meanings.

2) Proficiency test:
- Not limited to any one course, curriculum, or single skill (grammar, vocabulary, reading, etc.)
- Measures overall ability
- Traditionally composed of multiple-choice items; more recently also writing and oral production tasks
- Summative and norm-referenced
- Provides a single overall result, but no diagnostic feedback

3) Placement test:
- Aims to place a student into a particular level or section of a language curriculum or school
- Comes in various types: comprehension and production questions, responses through written and oral performance, multiple-choice selection, and gap-filling formats

4) Diagnostic test:
- Provides information on what to focus on later
- Reveals gaps, difficulties, misunderstandings, etc.

5) Achievement tests:
- Should be limited to particular material (content and time)
- Need to be related to classroom lessons, units, or a total curriculum
- Supposed to be in accordance with the objectives

ASSESSING CLEAR, UNAMBIGUOUS OBJECTIVES


- The first step in designing a test
- Set clear and specific objectives
- What should students know or be able to do (based on the material)?

DRAWING UP TEST SPECIFICATIONS


Test specifications for classroom use can be a simple and practical outline of your test, containing:
(a) a broad outline of the test
(b) which skills you will test
(c) what the items will look like

ADVANTAGES OF DRAWING UP TEST SPECIFICATIONS

DEVISING TEST TASKS


- Draft the questions
- Revise the draft
- Request feedback from a colleague

* Imagine yourself as a student taking this test.

While revising the draft:
1) Are the directions to each section clear?
2) Is there an example item for each section?
3) Does each item measure a specified objective?
4) Is each item stated in clear, simple language?
5) In multiple-choice items, are the distractors appropriate?
6) Is the difficulty of each item appropriate?
7) Is the language authentic?
8) Does the test reflect the learning objectives?

DESIGNING MULTIPLE-CHOICE TEST ITEMS
- Receptive and selective, not productive
- A stem and several options or alternatives to choose from
- One of those options, the key, is the correct response; the others serve as distractors.

Written properly, multiple-choice exams correlate strongly with assessments by descriptive tests.
Brown, Robert. "Multiple Choice Versus Descriptive Tests." Frontiers in Education Conference, Reno, NV, 2001.

Possible problems (Hughes, 2003):
- Tests only recognition knowledge
- Guessing can inflate scores
- Restricts what can be tested
- Difficult to write good items
- Harmful washback
- Facilitates cheating

Percent passing (50% cutoff) by chance:

Number of Questions   2 choices   3 choices   4 choices   5 choices
 1                       50          33          25          20
 2                       75          56          44          36
 4                       69          41          26          18
 6                       66          32          17          10
10                       62          21           8           3
20                       59          9.2         1.4          .3
50                       56          1           .01         .0004
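The chance-pass percentages above follow directly from the binomial distribution; a minimal sketch (the function name and the 50% pass mark are illustrative, not from the source):

```python
from math import ceil, comb

def chance_pass(n_questions, n_choices, pass_fraction=0.5):
    """Probability of reaching pass_fraction correct by pure guessing.

    Each guess is an independent Bernoulli trial with p = 1/n_choices,
    so the number of correct answers is binomially distributed.
    """
    p = 1 / n_choices
    needed = ceil(pass_fraction * n_questions)  # minimum correct answers to pass
    return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
               for k in range(needed, n_questions + 1))

# Reproduce a few table entries (as percentages):
print(round(100 * chance_pass(4, 2)))   # 4 two-choice items -> 69
print(round(100 * chance_pass(10, 4)))  # 10 four-choice items -> 8
```

Adding more questions or more options per question sharply reduces the probability of passing by luck alone, which is the table's point.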

Multiple Choice vs. Descriptive Examinations

                     Descriptive   Multiple Choice
Setting the exam     Easy          Difficult
Grading task         Difficult     Easy
Grading time         Long          Short
Grade consistency    Varies        High

1. Design each item to measure a specific objective


Example:

Where did George go after the party last night?
a. Yes, he did.
b. Because he was tired.
c. To Elaine's place for another party.
d. Around eleven o'clock.
The specific objective being tested here is comprehension of wh-questions. Distractor (a) is designed to ascertain that the student knows the difference between an answer to a wh-question and a yes/no question. Distractors (b) and (d), as well as the key (c), test comprehension of the meaning of where as opposed to why and when. The objective has been directly addressed.

2. State both stem and options as simply and directly as possible


Example:
We went to the temples ___________ fascinating.
a) which were beautiful
b) which were especially
c) which were holy
"which were" is repeated in all three options; it should be moved into the stem.

3. Make certain that the intended answer is clearly the only correct one
Example:

Where did George go after the party last night?
a. Yes, he did.
b. Because he was tired.
c. To Elaine's place for another party.
d. He went home around eleven o'clock.
Option (d) is also an acceptable answer, so it should be revised or omitted.

4. Use item indices to accept, discard, or revise items
a) Item facility (is the item too easy or too difficult?)
b) Item discrimination (does the item differentiate low-ability from high-ability students?)
c) Distractor efficiency (are the distractors appropriate?)
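A minimal sketch of how these three indices are commonly computed (the upper/lower-group split and all function names are illustrative, not prescribed by the source):

```python
from collections import Counter

def item_facility(correct_flags):
    """IF: proportion of test-takers who answered the item correctly.
    Values near 0 suggest the item is too difficult; near 1, too easy."""
    return sum(correct_flags) / len(correct_flags)

def item_discrimination(upper_flags, lower_flags):
    """ID: how well the item separates high- from low-scoring students.
    Compares correct-answer rates in the upper and lower scoring groups;
    a value near 0 (or negative) flags the item for revision."""
    return (sum(upper_flags) / len(upper_flags)
            - sum(lower_flags) / len(lower_flags))

def distractor_counts(choices):
    """Tally how often each option was chosen; a distractor that nobody
    picks is doing no work and should be revised."""
    return Counter(choices)

# Hypothetical response data: 1 = correct, 0 = incorrect.
print(item_facility([1, 1, 0, 1]))                       # 0.75
print(item_discrimination([1, 1, 1, 0], [1, 0, 0, 0]))   # 0.5
print(distractor_counts(["a", "c", "c", "b"]))
```

Items with extreme facility, low discrimination, or unchosen distractors are the candidates to discard or revise.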

Scoring, Grading, and Giving Feedback


SCORING reflects the relative weight that you place on each section and on the items in each section.
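For example, section weights can be applied like this (the section names and weight values are hypothetical; adjust them to your own test specifications):

```python
# Hypothetical sections and weights (summing to 100), not from the source.
WEIGHTS = {"listening": 20, "grammar": 30, "reading": 30, "writing": 20}

def weighted_total(raw_scores, max_scores):
    """Scale each section's raw score by its weight to a 100-point total."""
    return sum(WEIGHTS[s] * raw_scores[s] / max_scores[s] for s in WEIGHTS)

raw = {"listening": 8, "grammar": 15, "reading": 12, "writing": 5}
full = {"listening": 10, "grammar": 20, "reading": 15, "writing": 10}
print(weighted_total(raw, full))  # 72.5
```

Making the weights explicit before scoring keeps the grade consistent with the emphasis you intended each section to carry.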

Grading depends on:
- The country and context of this English classroom
- Institutional expectations (most of them unwritten)
- Explicit and implicit definitions of grades that you have set forth
- The relationship you have established with this class
- The expectations engendered by previous tests and quizzes in this class
- Student expectations

FEEDBACK
When returning a classroom test, consider the multitude of options. You might return the test to the student with one of, or a combination of, the possibilities below:
1. A letter grade
2. A total score
3. Four subscores (speaking, listening, reading, writing)
4. For the listening and reading sections:
   a. An indication of correct/incorrect responses
   b. Marginal comments
5. For the oral interview:
   a. Scores for each element being rated
   b. A checklist of areas needing work
   c. Oral feedback after the interview
   d. A post-interview conference to go over the results
6. On the essay:
   a. Scores for each element being rated
   b. A checklist of areas needing work
   c. Marginal and end-of-essay comments and suggestions
   d. A post-test conference to go over work
   e. A self-assessment
7. On all or selected parts of the test, peer checking of results
8. A whole-class discussion of the results of the test
9. Individual conferences with each student to review the whole test

Interpreting test scores

Teachers: high scores = good instruction; low scores = poor students
Students: high scores = smart, well-prepared; low scores = poor teaching, bad test

Interpreting test scores


High scores may mean: the test was too easy, it measured only simple educational objectives, biased scoring, cheating, or unintentional clues to the right answers.
Low scores may mean: the test was too hard, tricky questions, content not covered in class, grader bias, or insufficient time to complete the test.
