
Evaluation is the systematic collection and analysis of data needed to make decisions, a process in which most well-run programs engage from the outset. Here are just some of the evaluation activities that are already likely to be incorporated into many programs or that can be added easily:

Pinpointing the services needed: for example, finding out what knowledge, skills, attitudes, or behaviors a program should address

Establishing program objectives and deciding the particular evidence (such as the specific knowledge, attitudes, or behavior) that will demonstrate that the objectives have been met. A key to successful evaluation is a set of clear, measurable, and realistic program objectives. If objectives are unrealistically optimistic or are not measurable, the program may not be able to demonstrate that it has been successful even if it has done a good job.

Developing or selecting from among alternative program approaches: for example, trying different curricula or policies and determining which ones best achieve the goals

Tracking program objectives: for example, setting up a system that shows who gets services, how much service is delivered, how participants rate the services they receive, and which approaches are most readily adopted by staff

Trying out and assessing new program designs: determining the extent to which a particular approach is being implemented faithfully by school or agency personnel, or the extent to which it attracts or retains participants.
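The "tracking program objectives" activity above (who gets services, how much is delivered, how participants rate them) can be sketched as a tiny record-keeping routine. This is a minimal illustration, not a real system; all field and record names here are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ServiceRecord:
    """One delivered service contact (hypothetical fields)."""
    participant_id: str
    sessions_delivered: int
    satisfaction_rating: int  # e.g., 1 (poor) to 5 (excellent)

def summarize(records):
    """Report who was served, how much service was delivered, and how it was rated."""
    participants = {r.participant_id for r in records}
    total_sessions = sum(r.sessions_delivered for r in records)
    avg_rating = round(mean(r.satisfaction_rating for r in records), 1)
    return len(participants), total_sessions, avg_rating

records = [
    ServiceRecord("p01", 3, 4),
    ServiceRecord("p02", 5, 5),
    ServiceRecord("p01", 2, 5),
]
print(summarize(records))  # → (2, 10, 4.7)
```

Even a sketch this small shows the point of the activity: once service delivery is recorded consistently, questions like "who gets services?" and "how do participants rate them?" become simple summaries rather than guesses.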

Through these types of activities, those who provide or administer services determine what to offer and how well they are offering those services. In addition, evaluation in education can identify program effects, helping staff and others to find out whether their programs have an impact on participants' knowledge or attitudes. The different dimensions of evaluation have formal names: process, outcome, and impact evaluation.

Rossi and Freeman (1993) define evaluation as "the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of ... programs." There are many other similar definitions and explanations of "what evaluation is" in the literature. Our view is that, although each definition, and in fact each evaluation, is slightly different, several steps are usually followed in any evaluation. It is these steps that guide the questions organizing this handbook. An overview of the steps of a "typical" evaluation follows.

Process Evaluations

Process evaluations describe and assess program materials and activities. Examination of materials is likely to occur while programs are being developed, as a check on the appropriateness of the approach and procedures that will be used in the program. For example, program staff might systematically review the units in a curriculum to determine whether they adequately address all of the behaviors the program seeks to influence. A program administrator might observe teachers using the program and write a descriptive account of how students respond, then provide feedback to instructors. Examining the implementation of program activities is an important form of process evaluation. Implementation analysis documents what actually transpires in a program and how closely it resembles the program's goals. Establishing the extent and nature of program implementation is also an important first step in studying program outcomes; that is, it describes the interventions to which any findings about outcomes may be attributed. Outcome evaluation assesses program achievements and effects.

Outcome Evaluations

Outcome evaluations study the immediate or direct effects of the program on participants. For example, when a 10-session program aimed at teaching refusal skills is completed, can the participants demonstrate the skills successfully? This type of evaluation is not unlike what happens when a teacher administers a test before and after a unit to make sure the students have learned the material. The scope of an outcome evaluation can extend beyond knowledge or attitudes, however, to examine the immediate behavioral effects of programs.

Impact Evaluations

Impact evaluations look beyond the immediate results of policies, instruction, or services to identify longer-term as well as unintended program effects. An impact evaluation may also examine what happens when several programs operate in unison. For example, it might examine whether a program's immediate positive effects on behavior were sustained over time. Some school districts and community agencies may limit their inquiry to process evaluation. Others may have the interest and the resources to pursue an examination of whether their activities are affecting participants and others in a positive manner (outcome or impact evaluation). The choice should be made based upon local needs, resources, and requirements.

Regardless of the kind of evaluation, all evaluations use data collected in a systematic manner. These data may be quantitative, such as counts of program participants, amounts of counseling or other services received, or the incidence of a specific behavior. They may also be qualitative, such as descriptions of what transpired at a series of counseling sessions or an expert's best judgment of the age-appropriateness of a skills training curriculum. Successful evaluations often blend quantitative and qualitative data collection. The choice of which to use should be made with an understanding that there is usually more than one way to answer any given question.

Why Do Evaluation?

Evaluations serve many purposes. Before assessing a program, it is critical to consider who is most likely to need and use the information that will be obtained, and for what purposes. Listed below are some of the most common reasons to conduct evaluations. These reasons cut across the three types of evaluation just mentioned. The degree to which the perspectives of the most important potential users are incorporated into an evaluation design will determine the usefulness of the effort.

Evaluation for Project Management

Administrators are often most interested in keeping track of program activities and documenting the nature and extent of service delivery. The type of information they seek to collect might be called a "management information system" (MIS). An evaluation for project management monitors the routines of program operations. It can provide program staff or administrators with information on such items as participant characteristics, program activities, allocation of staff resources, or program costs. Analyzing information of this type (a kind of process evaluation) can help program staff to make short-term corrections, ensuring, for example, that planned program activities are conducted in a timely manner. This analysis can also help staff to plan future program direction, such as determining resource needs for the coming school year.
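The short-term-correction role of an MIS described above can be sketched as a small monitoring routine that compares planned activities against what actually happened. This is a hypothetical sketch (the activity names, fields, and dates are invented), not a description of any real MIS.

```python
from datetime import date

# Hypothetical MIS entries: each activity's planned date and, if finished,
# its actual completion date (None means not yet completed).
activities = [
    {"name": "parent workshop",    "planned": date(2024, 3, 1),  "completed": date(2024, 3, 1)},
    {"name": "staff training",     "planned": date(2024, 3, 15), "completed": date(2024, 4, 2)},
    {"name": "curriculum review",  "planned": date(2024, 4, 1),  "completed": None},
]

def behind_schedule(activities, today):
    """Flag activities that missed their planned date, so staff can correct course."""
    late = []
    for a in activities:
        done = a["completed"]
        missed_and_open = done is None and a["planned"] < today
        finished_late = done is not None and done > a["planned"]
        if missed_and_open or finished_late:
            late.append(a["name"])
    return late

print(behind_schedule(activities, date(2024, 4, 10)))
# → ['staff training', 'curriculum review']
```

The design point is modest: because the MIS records planned and actual dates in one place, "are planned activities happening on time?" becomes a routine query rather than a separate study.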

Operations data are important for responding to information requests from constituents, such as funding agencies, school boards, boards of directors, or community leaders. Also, descriptive program data are one of the bases upon which assessments of program outcome are built; it does not make sense to conduct an outcome study if results cannot be connected to specific program activities. An MIS can also keep track of students when the program ends to make future follow-up possible.

Evaluation for Staying on Track

Evaluation can help to ensure that project activities continue to reflect project plans and goals. Data collection for project management may be similar to data collection for staying on track, but more information might also be needed. An MIS could indicate how many students participated in a prevention club meeting, but additional information would be needed to reveal why participants attended, what occurred at the meeting, how useful participants found the session, or what changes the club leader would recommend. This type of evaluation can help to strengthen service delivery and to maintain the connection between program goals, objectives, and services.

Evaluation for Project Efficiency

Evaluation can help to streamline service delivery or to enhance coordination among various program components, lowering the cost of service. Increased efficiency can enable a program to serve more people, offer more services, or target services to those whose needs are greatest. Evaluation for program efficiency might focus on identifying the areas in which a program is most successful in order to capitalize upon them. It might also identify weaknesses or duplication in order to make improvements, eliminate some services, or refer participants to services elsewhere. Evaluations of both program process and program outcomes are used to determine efficiency.

Evaluation for Project Accountability

When it comes to evaluation for accountability, the users of the evaluation results will likely come from outside of program operations: parent groups, funding agencies, elected officials, or other policymakers. Be it a process or an outcome evaluation, the methods used in accountability evaluation must be scientifically defensible and able to stand up to greater scrutiny than methods used in evaluations intended primarily for "in-house" use. Yet even sophisticated evaluations must present results in ways that are understandable to lay audiences, because outside officials are not likely to be evaluation specialists.

Evaluation for Program Development and Dissemination

Evaluating new approaches is very important to program development in any field. Developers of new programs need to conduct methodical evaluations of their efforts before making claims to potential users. Rigorous evaluation of longer-term program outcomes is a prerequisite to asserting that a new model is effective. School districts or community agencies that seek to disseminate their approaches to other potential users may wish to consult an evaluation specialist, perhaps a professor from a local university, in conducting this kind of evaluation.

Three Levels of Evaluation[2]

Project-Level Evaluation

Project-level evaluation is the evaluation that project directors are responsible for locally. The project director, with appropriate staff and with input from board members and other relevant stakeholders, determines the critical evaluation questions, decides whether to use an internal evaluator or hire an external consultant, and conducts and guides the project-level evaluation. The Foundation provides assistance as needed. The primary goal of project-level evaluation is to improve and strengthen Kellogg-funded projects.

Ultimately, project-level evaluation can be defined as the consistent, ongoing collection and analysis of information for use in decision making.

Consistent Collection of Information

If the answers to your questions are to be reliable and believable to your project's stakeholders, the evaluation must collect information in a consistent and thoughtful way. This collection of information can involve individual interviews, written surveys, focus groups, observation, or numerical information such as the number of participants. While the methods used to collect information can and should vary from project to project, the consistent collection of information means having thought through what information you need, and having developed a system for collecting and analyzing this information.

The key to collecting data is to collect it from multiple sources and perspectives, and to use a variety of methods for collecting information. The best evaluations engage an evaluation team to analyze, interpret, and build consensus on the meaning of the data, and to reduce the likelihood of wrong or invalid interpretations.

Use in Decision Making

Since there is no single best approach to evaluation that can be used in all situations, it is important to decide the purpose of the evaluation, the questions you want to answer, and which methods will give you usable information that you can trust. Even if you decide to hire an external consultant to assist with the evaluation, you, your staff, and relevant stakeholders should play an active role in addressing these questions. You know the project best, and ultimately you know what you need. In addition, because you are one of the primary users of evaluation information, and because the quality of your decisions depends on good information, it is better to have negative information you can trust than positive information in which you have little faith. Again, the purpose of project-level evaluation is not just to prove, but also to improve.

People who manage innovative projects have enough to do without trying to collect information that cannot be used by someone with a stake in the project. By determining who will use the information you collect, what information they are likely to want, and how they are going to use it, you can decide what questions need to be answered through your evaluation.

Project-level evaluation should not be a stand-alone activity, nor should it occur only at the end of a program. Project staff should think about how evaluation can become an integrated part of the project, providing important information about program management and service delivery decisions. Evaluation should be ongoing and occur at every phase of a project's development, from preplanning to start-up to implementation, and even to expansion or replication phases. For each of these phases, the most relevant questions to ask and the evaluation activities may differ. What remains the same, however, is that evaluation helps project staff and community partners make effective decisions to continuously strengthen and improve the initiative.

How do YOU define assessment?

The term assessment may be defined in multiple ways by different individuals or institutions, perhaps with different goals. Here is a sampling of the definitions you will see:

Merriam-Webster Dictionary definition of assessment: the action or an instance of assessing; appraisal. Definition of assess: 1: to determine the rate or amount of (as a tax); 2a: to impose (as a tax) according to an established rate; 2b: to subject to a tax, charge, or levy; 3: to make an official valuation of (property) for the purposes of taxation; 4: to determine the importance, size, or value of <assess a problem>; 5: to charge (a player or team) with a foul or penalty.

Palomba, C.A. & Banta, T.W., Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass, 1999, p. 4: "Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving learning and development."

The Higher Learning Commission defines assessment of student learning in the following way: Assessment of student learning is a participatory, iterative process that: provides data/information you need on your students' learning; engages you and others in analyzing and using this data/information to confirm and improve teaching and learning; produces evidence that students are learning the outcomes you intended; guides you in making educational and institutional improvements; and evaluates whether changes made improve/impact student learning, and documents the learning and your efforts.

Formative and Summative Assessment

Assessment can be done at various times throughout a program, and a comprehensive assessment plan will include both formative and summative assessment. The point at which the assessment occurs in a program distinguishes these two categories of assessment.

Formative Assessment

Formative assessment is often done at the beginning of or during a program, thus providing the opportunity for immediate evidence of student learning in a particular course or at a particular point in a program. Classroom assessment is one of the most common formative assessment techniques. The purpose of this technique is to improve the quality of student learning; it should not be evaluative or involve grading students. It can also lead to curricular modifications when specific courses have not met the student learning outcomes. Classroom assessment can also provide important program information when multiple sections of a course are taught, because it enables programs to examine whether the learning goals and objectives are met in all sections of the course. It can also improve instructional quality by engaging the faculty in the design and practice of the course goals and objectives and the course's impact on the program.

Summative Assessment

Summative assessment is comprehensive in nature, provides accountability, and is used to check the level of learning at the end of a program. For example, if upon completion of a program students will have the knowledge to pass an accreditation test, taking the test would be summative in nature, since it is based on the cumulative learning experience. Program goals and objectives often reflect the cumulative nature of the learning that takes place in a program. Thus the program would conduct summative assessment at the end of the program to ensure students have met the program goals and objectives. Attention should be given to using various methods and measures in order to have a comprehensive plan. Ultimately, the foundation for an assessment plan is to collect summative assessment data, and this type of data can stand alone. Formative assessment data, however, can contribute to a comprehensive assessment plan by enabling faculty to identify particular points in a program to assess learning (i.e., entry into a program, before or after an internship experience, the impact of specific courses, etc.) and monitor the progress being made toward achieving learning outcomes.

Measurement refers to the process by which the attributes or dimensions of some physical object are determined. One exception seems to be in the use of the word measure in determining the IQ of a person. The phrase "this test measures IQ" is commonly used. Measuring such things as attitudes or preferences also applies. However, when we measure, we generally use some standard instrument to determine how big, tall, heavy, voluminous, hot, cold, fast, or straight something actually is. Standard instruments refer to instruments such as rulers, scales, thermometers, pressure gauges, etc. We measure to obtain information about what is. Such information may or may not be useful, depending on the accuracy of the instruments we use and our skill at using them. There are few such instruments in the social sciences that approach the validity and reliability of, say, a 12" ruler. We measure how big a classroom is in terms of square feet, we measure the temperature of the room by using a thermometer, and we use electrical meters to determine the voltage, amperage, and resistance in a circuit. In all of these examples, we are not assessing anything; we are simply collecting information relative to some established rule or standard. Assessment is therefore quite different from measurement, and has uses that suggest very different purposes. When used in a learning objective, the definition provided on the ADPRIMA site for the behavioral verb measure is: To apply a standard scale or measuring device to an object, series of objects, events, or conditions, according to practices accepted by those who are skilled in the use of the device or scale.

Assessment is a process by which information is obtained relative to some known objective or goal. Assessment is a broad term that includes testing. A test is a special form of assessment. Tests are assessments made under contrived circumstances especially so that they may be administered. In other words, all tests are assessments, but not all assessments are tests. We test at the end of a lesson or unit. We assess progress at the end of a school year through testing, and we assess verbal and quantitative skills through such instruments as the SAT and GRE. Whether implicit or explicit, assessment is most usefully connected to some goal or objective for which the assessment is designed. A test or assessment yields information relative to an objective or goal. In that sense, we test or assess to determine whether or not an objective or goal has been obtained. Assessment of skill attainment is rather straightforward. Either the skill exists at some acceptable level or it doesn't. Skills are readily demonstrable. Assessment of understanding is much more difficult and complex. Skills can be practiced; understandings cannot. We can assess a person's knowledge in a variety of ways, but there is always a leap, an inference that we make about what a person does in relation to what it signifies about what he knows. In the section on this site on behavioral verbs, to assess means: To stipulate the conditions by which the behavior specified in an objective may be ascertained. Such stipulations are usually in the form of written descriptions.

Evaluation is perhaps the most complex and least understood of the terms. Inherent in the idea of evaluation is "value." When we evaluate, what we are doing is engaging in some process that is designed to provide information that will help us make a judgment about a given situation. Generally, any evaluation process requires information about the situation in question. A situation is an umbrella term that takes into account such ideas as objectives, goals, standards, procedures, and so on. When we evaluate, we are saying that the process will yield information regarding the worthiness, appropriateness, goodness, validity, legality, etc., of something for which a reliable measurement or assessment has been made.

For example, I often tell my students that if they wanted to determine the temperature of the classroom, they would need to get a thermometer, take several readings at different spots, and perhaps average the readings. That is simple measuring. The average temperature tells us nothing about whether or not it is appropriate for learning. In order to do that, students would have to be polled in some reliable and valid way. That polling process is what evaluation is all about. A classroom average temperature of 75 degrees is simply information. It is the context of the temperature for a particular purpose that provides the criteria for evaluation. A temperature of 75 degrees may not be very good for some students, while for others it is ideal for learning. We evaluate every day. Teachers, in particular, are constantly evaluating students, and such evaluations are usually done in the context of comparisons between what was intended (learning, progress, behavior) and what was obtained. When used in a learning objective, the definition provided on the ADPRIMA site for the behavioral verb evaluate is: To classify objects, situations, people, conditions, etc., according to defined criteria of quality. Indication of quality must be given in the defined criteria of each class category. Evaluation differs from general classification only in this respect.
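The thermometer example distinguishing measurement from evaluation can be made concrete in a few lines. This is a sketch only: the readings and the "comfort range" standing in for a student poll are hypothetical.

```python
from statistics import mean

# Measurement: thermometer readings taken at different spots in the room.
readings_f = [73.0, 75.0, 76.0, 76.0]
avg = mean(readings_f)  # the average is simply information

# Evaluation: judging that information against a criterion of value.
# Here a hypothetical comfort range stands in for polling students
# about what temperature is appropriate for learning.
comfort_range = (70.0, 76.0)
appropriate = comfort_range[0] <= avg <= comfort_range[1]

print(avg, appropriate)  # → 75.0 True
```

The first half produces a number; only the second half, which supplies criteria, makes a judgment. Change the criteria and the same measurement yields a different evaluation.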
