
Conventional Types of Assessment Tools

Multiple-Choice Tests

Multiple-choice tests are popular tests that require students to recognize correct answers from among several choices, usually three to four, with all but one choice wrong. They're easy to score but harder to create, because the answer choices must carefully balance one correct answer and one close-to-correct answer against the remaining wrong ones. Some students consider multiple-choice tests easier than essay tests, while others consider them a greater challenge. A student who is fairly good at test-taking strategy is likely to be successful with multiple-choice tests, since points can be scored with a close guess through a process of elimination. But because the answers are somewhat simpler to determine, these tests call for a much broader knowledge base, and this makes them more challenging to prepare for. Some students can compensate for a lack of this broader knowledge base through good test-taking strategy.

Short-Answer Essay Tests

Educators design short-answer essay tests to evaluate what can't be articulated through multiple-choice questions. These tests generally require a deeper, more detailed analysis of content and higher-order thinking. For this reason, many students find essay responses to be more of a challenge. Question prompts are used to elicit students' responses and can often involve application of complicated concepts, synthesis, and problem-solving through making comparisons, identifying similarities and differences, and describing cause-and-effect relationships. Questions typically use language like "explain," "how would," "describe," and "assess."

Constructed-Response Tests

Constructed-response tests consist of short-answer or fill-in-the-blank questions and require a blend of factual knowledge and higher-order reasoning. Students supply their own information in the missing spaces rather than choosing from among several pre-prepared options. These tests are much easier to create than multiple-choice tests and help control for guessing. They are, however, more difficult to score, usually requiring manual scoring with each response read and evaluated on its merits. When used as part of a comprehensive standardized test, they tend not to be weighted as heavily as other questions, for ease of scoring.

Standardized Tests

Schools use standardized tests widely on a national level, and they are part of every school district's accountability design. Many are considered "high stakes" because they're taken by large populations of students, and if students do not perform well, districts could lose valuable federal and state funding. For this reason, standardized tests need to be easy to score, and they are therefore designed using a typical combination of multiple-choice, short-answer, document-based and constructed-response questions.

What does Authentic Assessment look like?

An authentic assessment usually includes a task for students to perform and a rubric by which their performance on the task will be evaluated. Examples of authentic tasks and rubrics appear below.


How is Authentic Assessment similar to/different from Traditional Assessment?


The following comparison is somewhat simplistic, but I hope it illuminates the different assumptions of the two approaches to assessment.

Traditional Assessment

By "traditional assessment" (TA) I am referring to the forced-choice measures of multiple-choice tests, fill-in-the-blanks, true-false, matching and the like that have been and remain so common in education. Students typically select an answer or recall information to complete the assessment. These tests may be standardized or teacher-created. They may be administered locally, statewide, or internationally.

Behind traditional and authentic assessments is a belief that the primary mission of schools is to help develop productive citizens. That is the essence of most mission statements I have read. From this common beginning, the two perspectives on assessment diverge. Essentially, TA is grounded in an educational philosophy that adopts the following reasoning and practice:

1. A school's mission is to develop productive citizens.
2. To be a productive citizen an individual must possess a certain body of knowledge and skills.
3. Therefore, schools must teach this body of knowledge and skills.
4. To determine if it is successful, the school must then test students to see if they acquired the knowledge and skills.

In the TA model, the curriculum drives assessment. "The" body of knowledge is determined first. That knowledge becomes the curriculum that is delivered. Subsequently, the assessments are developed and administered to determine if acquisition of the curriculum occurred.

Authentic Assessment

In contrast, authentic assessment (AA) springs from the following reasoning and practice:

1. A school's mission is to develop productive citizens.
2. To be a productive citizen, an individual must be capable of performing meaningful tasks in the real world.
3. Therefore, schools must help students become proficient at performing the tasks they will encounter when they graduate.
4. To determine if it is successful, the school must then ask students to perform meaningful tasks that replicate real-world challenges to see if students are capable of doing so.

Thus, in AA, assessment drives the curriculum. That is, teachers first determine the tasks that students will perform to demonstrate their mastery, and then a curriculum is developed that will enable students to perform those tasks well, which would include the acquisition of essential knowledge and skills. This has been referred to as planning backwards (e.g., McDonald, 1992).

If I were a golf instructor and I taught the skills required to perform well, I would not assess my students' performance by giving them a multiple-choice test. I would put them out on the golf course and ask them to perform. Although this is obvious with athletic skills, it is also true for academic subjects. We can teach students how to do math, do history and do science, not just know them. Then, to assess what our students had learned, we can ask students to perform tasks that "replicate the challenges" faced by those using mathematics, doing history or conducting scientific investigation.
Authentic Assessment Complements Traditional Assessment

But a teacher does not have to choose between AA and TA. It is likely that some mix of the two will best meet your needs. To use a silly example, if I had to choose a chauffeur from between someone who passed the driving portion of the driver's license test but failed the written portion or someone who failed the driving portion and passed the written portion, I would choose the driver who most directly demonstrated the ability to drive, that is, the one who passed the driving portion of the test. However, I would prefer a driver who passed both portions. I would feel more comfortable knowing that my chauffeur had a good knowledge base about driving (which might best be assessed in a traditional manner) and was able to apply that knowledge in a real context (which could be demonstrated through an authentic assessment).
Defining Attributes of Traditional and Authentic Assessment

Another way that AA is commonly distinguished from TA is in terms of its defining attributes. Of course, TAs as well as AAs vary considerably in the forms they take. But, typically, along the continuums of attributes listed below, TAs fall more toward the left end of each continuum and AAs fall more toward the right end.

Traditional --------------------------------------------- Authentic
Selecting a Response ------------------------------------ Performing a Task
Contrived --------------------------------------------------------------- Real-life
Recall/Recognition ------------------------------- Construction/Application
Teacher-structured ------------------------------------- Student-structured
Indirect Evidence -------------------------------------------- Direct Evidence

ANALYTIC RUBRICS/SCALES

GROUP PERFORMANCE RATING SCALE

Directions: Use this form to give feedback about the performance in your group. Circle the appropriate number after each statement.

0 = Major Difficulty   1 = Needs Improvement   2 = Okay   3 = Very Good   4 = Excellent

1. All members participated in the group activities. (0 1 2 3 4)
2. Members listened to others in the group. (0 1 2 3 4)
3. Members helped and encouraged others in the group. (0 1 2 3 4)
4. Group members stayed on the task assigned. (0 1 2 3 4)
5. Group members worked well together. (0 1 2 3 4)
6. No one dominated the group discussions. (0 1 2 3 4)
7. Group members practiced the cooperative skills. (0 1 2 3 4)
8. Group members did not use put-downs. (0 1 2 3 4)
9. Group members were able to accept criticism. (0 1 2 3 4)
10. Trust developed among group members. (0 1 2 3 4)

Add all circled numbers for Total Score___________(out of 40)
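Where the rating scale is collected electronically rather than circled on paper, the total can be computed with a few lines of code. The Python sketch below is a minimal illustration: the ten statements come from the scale above, while the function name and the example ratings are illustrative assumptions.

```python
# Minimal sketch: total a completed Group Performance Rating Scale.
# Assumes ratings arrive digitally as one integer (0-4) per item;
# function and variable names are illustrative, not from the source.

ITEMS = [
    "All members participated in the group activities.",
    "Members listened to others in the group.",
    "Members helped and encouraged others in the group.",
    "Group members stayed on the task assigned.",
    "Group members worked well together.",
    "No one dominated the group discussions.",
    "Group members practiced the cooperative skills.",
    "Group members did not use put-downs.",
    "Group members were able to accept criticism.",
    "Trust developed among group members.",
]

def total_score(ratings: list[int]) -> int:
    """Sum the circled numbers; each rating must be 0-4, one per item."""
    if len(ratings) != len(ITEMS):
        raise ValueError(f"expected {len(ITEMS)} ratings, got {len(ratings)}")
    if any(r not in range(5) for r in ratings):
        raise ValueError("each rating must be an integer from 0 to 4")
    return sum(ratings)

ratings = [3, 4, 2, 3, 4, 1, 3, 2, 3, 4]  # example responses, one per statement
print(f"Total Score: {total_score(ratings)} out of {4 * len(ITEMS)}")  # Total Score: 29 out of 40
```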

ANALYTIC RUBRICS/SCALES

SIX-TRAIT ANALYTIC WRITING RUBRIC

Teachers in Alaska say they value these traits in writing: ideas and content, organization, word choice, voice, sentence fluency, and conventions. The Alaska analytic rubric allows the evaluator to compare the writer's achievement of each trait against a standard. A piece that shows strong control of a trait would receive a score of 5 for that trait. Less skillful use of the trait might earn a 3. A paper showing very little ability to use the trait would receive a 1. It is common for a piece of writing to exhibit a range of scores in different traits. For example, an essay might have a strong voice, but little mastery of writing conventions. In fact, one paper from the Alaska Statewide Direct Writing Assessment received these scores:

Ideas and Content: 4
Organization: 3
Word Choice: 5
Voice: 5
Sentence Fluency: 4
Conventions: 2

The chart below describes strong abilities in each trait. For complete charts of standards for achievement levels 1 through 5 for each trait, see the Reference Kit.

Ideas and Content: interesting; well focused; clear; detailed, complete, rich; written from experience; precise information.

Organization: good intro; good placement of details; strong transitions; smooth, easy pace; reader doesn't have to think about organization; strong conclusion; starts somewhere, goes somewhere.

Word Choice: precise language; strong verbs; specific, concrete nouns; natural; words used in new ways; strong imagery.

Voice: individual; honest; natural; expressive; unusual, unexpected; appealing; written to be read and enjoyed; builds in tension, creates interest.

Sentence Fluency: fluid; musical, poetic in sound; easy to read aloud; interesting word patterns; good phrasing; varied sentence structure; varied sentence beginnings; fragments used well.

Conventions: correct or phonetic spelling; punctuation works with sentence structure; some sophisticated punctuation attempted; correct grammar; sound usage; paragraphing enhances organization; informalities in punctuation or usage handled well; attention to details (i.e., dotted i's, crossed t's); effective title; good margins; easy to read.
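Because an analytic rubric yields a profile of scores rather than a single number, it can help to record the result per trait. The short Python sketch below illustrates one way such a profile might be stored and summarized, using the example scores quoted above from the Alaska Statewide Direct Writing Assessment; the function and variable names are illustrative assumptions, not part of the Alaska rubric.

```python
# Minimal sketch: store and summarize a six-trait analytic score profile.
# Trait names and example scores come from the passage above; names of the
# function and variables are illustrative only.

TRAITS = ["Ideas and Content", "Organization", "Word Choice",
          "Voice", "Sentence Fluency", "Conventions"]

example_paper = {
    "Ideas and Content": 4,
    "Organization": 3,
    "Word Choice": 5,
    "Voice": 5,
    "Sentence Fluency": 4,
    "Conventions": 2,
}

def summarize(profile: dict[str, int]) -> str:
    """Report each trait score (1-5) plus the strongest and weakest traits."""
    lines = [f"{trait}: {profile[trait]}" for trait in TRAITS]
    strongest = max(TRAITS, key=profile.get)
    weakest = min(TRAITS, key=profile.get)
    lines.append(f"Strongest trait: {strongest} ({profile[strongest]})")
    lines.append(f"Weakest trait: {weakest} ({profile[weakest]})")
    return "\n".join(lines)

print(summarize(example_paper))
```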

PEER EVALUATION RUBRIC FOR ORAL PRESENTATION

Rate each item: Very Good = 3, Satisfactory = 2, Poor = 1

Gave an interesting introduction
Presented clear explanation of topic
Presented information in acceptable order
Used complete sentences
Offered a concluding summary
Spoke clearly, correctly, distinctly, and confidently
Maintained eye contact
Maintained acceptable posture
Presentation was interesting
Used visual/audio aids well
Handled questions and comments from the class very well

Total Score ___________ (out of 33)

Assessment Tools
Below are links to assessment tools and techniques along with specific geoscience examples and resources.
Concept Maps - A diagramming technique for assessing how well students see the "big picture".
ConcepTests - Conceptual multiple-choice questions that are useful in large classes.
Knowledge Survey - Students answer whether they could answer a survey of course content questions.
Exams - Find tips on how to make exams better assessment instruments.
Oral Presentations - Tips for evaluating student presentations.
Poster Presentations - Tips for evaluating poster presentations.
Peer Review - Having students assess themselves and each other.
Portfolios - A collection of evidence to demonstrate mastery of a given set of concepts.
Rubrics - A set of evaluation criteria based on learning goals and student performance.
Written Reports - Tips for assessing written reports.
Other Assessment Types - Includes concept sketches, case studies, seminar-style courses, mathematical thinking and performance assessments.

Student Reading Logs

Reading logs are an essential tool for tracking each student's reading volume and stamina as well as the student's reading interests (e.g., genre, author, theme and topic). Since reading volume and stamina, along with reading appropriate books, reading with fluency and reading with critical literacy, are directly linked to reading proficiency, it is important that students keep track of their daily reading (Allington, 2001). Reading logs vary in format and differ across grade levels. To assess a student's reading stamina and volume, it is important that the reading log document the number of pages the student read, the level of the text and the number of minutes the student read. Emergent readers can be assessed for reading stamina and volume as well. Kindergarten and first grade students, for example, can use a simplified reading log to keep track of their reading volume, such as a tally mark for each book read and reread.
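If a class keeps its logs digitally, the volume and stamina measures described above can be summarized automatically. The Python sketch below is a minimal illustration: the entry fields mirror the passage (pages, text level, minutes), while the class and function names and the sample entries are illustrative assumptions, not part of any particular reading-log format.

```python
# Minimal sketch of a digital reading log entry and a weekly volume/stamina
# summary. Field names follow the passage above (pages, text level, minutes);
# everything else is illustrative.

from dataclasses import dataclass

@dataclass
class ReadingLogEntry:
    date: str        # e.g. "2024-03-11"
    title: str
    text_level: str  # e.g. guided-reading level "R"
    pages: int
    minutes: int

def weekly_summary(entries: list[ReadingLogEntry]) -> dict[str, float]:
    """Summarize reading volume (pages) and stamina (average minutes per sitting)."""
    total_pages = sum(e.pages for e in entries)
    total_minutes = sum(e.minutes for e in entries)
    avg_minutes = total_minutes / len(entries) if entries else 0.0
    return {"total_pages": total_pages,
            "total_minutes": total_minutes,
            "avg_minutes_per_sitting": round(avg_minutes, 1)}

log = [
    ReadingLogEntry("2024-03-11", "Frindle", "R", 22, 25),
    ReadingLogEntry("2024-03-12", "Frindle", "R", 18, 20),
    ReadingLogEntry("2024-03-13", "Frindle", "R", 30, 35),
]
print(weekly_summary(log))  # {'total_pages': 70, 'total_minutes': 80, 'avg_minutes_per_sitting': 26.7}
```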

Individual Reading Conferences

Reading conferences are a time for teachers to meet with students individually and converse with them about their reading. A conference might include discussing book selection, talking about the storyline or topic, listening to the student read aloud a small portion of the text, extending fix-up strategies, or setting goals. Artifacts such as reading logs, reading notebooks, post-its and running records may be used during a conference to determine the strategies the student is demonstrating and to identify strategies for targeted instruction. Usually teachers meet with students individually once a week, in addition to working with them in guided reading, strategy groups, reading partnerships and book clubs. Teachers often meet more frequently with struggling students.

Written Test

Written tests are a classic form of assessment still used heavily in many elementary schools. Once students develop their rudimentary reading and writing skills, teachers can present them with written tests on which students are asked to respond to queries relating to the topic they are studying.

In-Class Assignment

Teachers often review their students' finished assignments to determine their understanding and make modifications to future lessons on the same topic. By reviewing worksheets or book work that students have completed, teachers can identify areas of weakness and strength and create plans for future instruction.

Alternative/Performance-Based Assessment

There are generally two kinds of data used in educational assessment or evaluation: quantitative and qualitative. A quantitative measurement uses values from an instrument based on a standardized system that intentionally limits data collection to a selected or predetermined set of possible responses. Qualitative measurement is more concerned with detailed descriptions of situations or performance; it can be much more subjective, but it can also be much more valuable in the hands of an experienced teacher.

Tasks used in performance-based assessment include essays, oral presentations, open-ended problems, hands-on problems, real-world simulations and other authentic tasks. Such tasks are concerned with problem solving and understanding. Just like standardized achievement tests, some performance-based assessments also have norms, but the approach and philosophy are much different from those of traditional standardized tests. The underlying concept is that the student should produce evidence of accomplishment of curriculum goals which can be maintained for later use as a collection of evidence to demonstrate achievement, and perhaps also the teacher's efforts to educate the child.

Performance-based assessment is sometimes characterized as assessing real life, with students assuming responsibility for self-evaluation. Testing is "done" to a student, while performance assessment is done by the student as a form of self-reflection and self-assessment. The overriding philosophy of performance-based assessment is that teachers should have access to information that can provide ways to improve achievement, demonstrate exactly what a student does or does not understand, relate learning experiences to instruction, and combine assessment with teaching.

In broad terms, there are three types of performance-based assessment: performances, portfolios, and projects. The distinctions among performances, portfolios, and projects can be rather loosely interpreted, but the differences are distinct enough to permit separate classification. Material can be collected as actual products or as video and computer archives. Examples of school tasks that may be included in performance-based assessment are:

Art work
Cartoons
Collections
Designs and drawings
Documentary reports
Experiments
Inventions
Internet transmissions
Journals
Letters
Maps
Model construction
Notebooks
Oral reports
Original plays, stories, dances
Pantomimes
Performance, musical instrument
Poetry recitations
Problems solved
Puppet shows
Reading selections
Recipes
Scale models
Story illustrations
Foreign language activities
Musical compositions
Photos
Games
Musical scores
Plans for inventions
Story boards
Performances

Assessment strategies
ConcepTests are conceptual multiple-choice questions used during class that provide immediate assessment of student understanding. More than 300 ConcepTest questions are available on the Starting Point website. Using an electronic response system gives the instructor immediate feedback about the distribution of answers in the class.

Minute papers are one type of classroom assessment technique that will give you an indication of student understanding of a particular topic. A one-minute paper can be used at the end of class by asking students to write on one of the following questions:

What was the most important thing you learned in today's class?
What question do you have about today's class?
What was the muddiest point of today's class?

Students write their answers on index cards or slips of paper that are turned in at the end of class, and the responses can be graded or not.
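Returning to the electronic response systems mentioned above for ConcepTests: the "distribution of answers" they report is a simple tally, easy to reproduce from raw responses. The Python sketch below is a minimal illustration with made-up responses; the variable names are illustrative assumptions, and real systems compute this automatically.

```python
# Minimal sketch: tally the distribution of ConcepTest answers collected by
# an electronic response system ("clickers"). Responses are made-up examples.

from collections import Counter

responses = ["A", "C", "C", "B", "C", "A", "C", "D", "C", "B"]  # one letter per student

distribution = Counter(responses)
total = len(responses)

for choice in sorted(distribution):
    count = distribution[choice]
    print(f"{choice}: {count:2d}  ({count / total:.0%})")
# A:  2  (20%)
# B:  2  (20%)
# C:  5  (50%)
# D:  1  (10%)
```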

Problem sets can be a useful way to give students practice in solving problems, doing quantitative work outside class time, and applying specific techniques. Problem sets are standard in many science courses and can be an effective assessment strategy in entry-level as well as upper-level courses.

Labs can provide another way to assess student learning. The type of assessment might be a lab report, completion of the lab handout, a research project write-up, or some other assignment.

Concept maps can also be used for assessment.

Exams and quizzes are commonly used to assess student learning. They also force students to process information and help prevent students from disengaging in a course. Students need to process information in one way or another to learn. In studying for exams, students read, memorize, organize information, test themselves with questions, and, with varying degrees of success, process the material for that particular section of the course. Processing information in a blitz of studying before each exam is not the ideal way to learn material, nor in many courses is it the only way students learn material. Studying before exams is, however, one of the most common ways in which students learn in a course. Exams can include multiple-choice questions, short answers, essay questions, questions about graphs or diagrams, and so forth. If you choose to use exams, it's a good idea to ask yourself how much of the exam requires students to use higher-order thinking skills, how much requires lower-order thinking skills, and whether you are satisfied with your answer in light of the goals of your course.

Authentic Assessment / Alternative Assessment

This approach attempts to connect assessment with the real world. It requires students to apply skills and knowledge to the creation of a product or performance that applies to situations outside the school environment. Biology teachers may assess students' understanding of the scientific process and collaboration by having students take part in an annual Audubon Society collection and analysis of local songbird populations.

What Is Alternative Assessment?

The term alternative assessment is broadly defined as any assessment method that is an alternative to traditional paper-and-pencil tests. Alternative assessment requires students to demonstrate skills and knowledge that cannot be assessed using a timed multiple-choice or true-false test. It seeks to reveal students' critical-thinking and evaluation skills by asking students to complete open-ended tasks that often take more than one class period to complete. While fact-based knowledge is still a component of the learning that is assessed, its measurement is not the sole purpose of the assessment. Alternative assessment is almost always teacher-created and is inextricably tied to the curriculum studied in class. The form of assessment is usually customized to the students and to the subject matter itself.

Bloom's Revised Taxonomy


Lorin Anderson, a former student of Bloom, revisited the cognitive domain of the learning taxonomy in the mid-nineties and made some changes, the two most prominent being (1) changing the names of the six categories from noun to verb forms, and (2) slightly rearranging them (Pohl, 2000). This new taxonomy reflects a more active form of thinking and is perhaps more accurate. For each category below, examples and key words (verbs) are given.

Remembering: Recall previously learned information.
Examples: Recite a policy. Quote prices from memory to a customer. Know the safety rules.
Key Words: defines, describes, identifies, knows, labels, lists, matches, names, outlines, recalls, recognizes, reproduces, selects, states.

Understanding: Comprehend the meaning, translation, interpolation, and interpretation of instructions and problems. State a problem in one's own words.
Examples: Rewrite the principles of test writing. Explain in one's own words the steps for performing a complex task. Translate an equation into a computer spreadsheet.
Key Words: comprehends, converts, defends, distinguishes, estimates, explains, extends, generalizes, gives an example, infers, interprets, paraphrases, predicts, rewrites, summarizes, translates.

Applying: Use a concept in a new situation or unprompted use of an abstraction. Apply what was learned in the classroom to novel situations in the workplace.
Examples: Use a manual to calculate an employee's vacation time. Apply laws of statistics to evaluate the reliability of a written test.
Key Words: applies, changes, computes, constructs, demonstrates, discovers, manipulates, modifies, operates, predicts, prepares, produces, relates, shows, solves, uses.

Analyzing: Separate material or concepts into component parts so that their organizational structure may be understood. Distinguish between facts and inferences.
Examples: Troubleshoot a piece of equipment by using logical deduction. Recognize logical fallacies in reasoning. Gather information from a department and select the required tasks for training.
Key Words: analyzes, breaks down, compares, contrasts, diagrams, deconstructs, differentiates, discriminates, distinguishes, identifies, illustrates, infers, outlines, relates, selects, separates.

Evaluating: Make judgments about the value of ideas or materials.
Examples: Select the most effective solution. Hire the most qualified candidate. Explain and justify a new budget.
Key Words: appraises, compares, concludes, contrasts, criticizes, critiques, defends, describes, discriminates, evaluates, explains, interprets, justifies, relates, summarizes, supports.

Creating: Build a structure or pattern from diverse elements. Put parts together to form a whole, with emphasis on creating a new meaning or structure.
Examples: Write a company operations or process manual. Design a machine to perform a specific task. Integrate training from several sources to solve a problem. Revise a process to improve the outcome.
Key Words: categorizes, combines, compiles, composes, creates, devises, designs, explains, generates, modifies, organizes, plans, rearranges, reconstructs, relates, reorganizes, revises, rewrites, summarizes, tells, writes.
