
EVALUATION IN HEALTH PROMOTION

General Concepts


What is Evaluation?
A process that attempts to determine as systematically and objectively as possible the relevance, effectiveness, and impact of activities in light of their objectives.

What is Evaluation?
Effective programme evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate.

General Concept
PLANNING → IMPLEMENTATION → EVALUATION

General Concept
In the community: Input → Process → Output → Effects, with performance assessed at each stage.

Hierarchy of Objectives
GENERAL GOAL: IMPROVE HEALTH
- LONG-TERM OBJECTIVES (NCDs, etc.): medical/epidemiological framework
- INTERMEDIATE OBJECTIVES (risk factors, biological/behavioural): behavioural/social framework
- PRACTICAL PROGRAMME ACTIVITIES (Intervention): contextual/operational framework
EFFECT EVALUATION covers the long-term and intermediate objectives; PROCESS EVALUATION covers the programme activities.

Keys to Solid Evaluation
- Pre-post data
- Comparison groups
- Complete programme records
- Reliable and valid measures
- Proper analytic techniques

EVALUATION IN HEALTH PROMOTION


Does Health Promotion work?
Can we demonstrate the success of Health Promotion?
How can we measure success in Health Promotion?
What is evaluation in Health Promotion?

EVALUATION
Making a value judgement about something.
A critical assessment of the good and bad points of an intervention, and how it can be improved.

Answers the question: Have the programme objectives been achieved?

DOES HEALTH PROMOTION WORK?


The North Karelia Project, launched in 1971, was a heart disease prevention project located in an area of Finland which had the highest rate of premature death from coronary heart disease in Europe. The project used an integrated community-wide approach which included the mass media, the development of a schools programme, the use of volunteers to act as lay educators and role models in the community, and the production of low-fat foods.

Evaluation showed that risk behaviours, such as fat consumption and smoking, declined more dramatically in North Karelia than in the rest of Finland. This change in behaviour was matched by a reduction in risk factors for CHD, such as mean serum cholesterol and blood pressure, which again was greater than in the rest of Finland. The population reported improvements in their health and general well-being. There was a greater reduction in the death rate from CHD in North Karelia than for Finland as a whole.

SOME DEFINITIONS
Evaluation is the process of assessing what has been achieved (whether the specified goals, objectives and targets have been met) and how it has been achieved.

SOME DEFINITIONS
A process that attempts to determine as systematically and objectively as possible the relevance, effectiveness and impact of activities in the light of their objectives.

SOME TERMS
Effectiveness: what has been achieved.
Efficiency: how the outcome has been achieved, and how good the process is (value for money, use of time and other resources).

WHY EVALUATE?
1. To assess results and to determine if objectives have been met.
2. To justify the use of resources.
3. To demonstrate success in order to compete for scarce resources.
4. To assist future planning by providing a knowledge base.
5. To improve our own practice by building on our successes and learning from our mistakes.
6. To determine the effectiveness and efficiency of different methods of Health Promotion. This helps in deciding the best use of resources.
7. To win credibility and support for Health Promotion.
8. To inform other health promoters so that they don't have to reinvent the wheel. This helps others to improve their practice.

WHAT TO EVALUATE?
1. WHAT has been achieved - the outcome
2. HOW it has been achieved - the process


Types of Evaluation
- Process (performance measures): doing things right
- Impact and Outcome: doing the right things

Evaluation Framework
- Programme (process evaluation): facilitators? content? methods? time allotments? materials?
- Behaviour (impact evaluation): knowledge gain? attitude change? habit change? skill development?
- Health (outcome evaluation): mortality? morbidity? disability? quality of life?

TYPES OF EVALUATION
1. Process Evaluation
2. Impact Evaluation
3. Outcome Evaluation

1. PROCESS EVALUATION
The process refers to what happens between the input and the outcome.
Process evaluation is concerned with assessing the process of programme implementation and how the programme is performing as implementation takes place.

Process evaluation is ongoing, a method of quality control. It monitors the progress of the programme: whether the planned activities are carried out efficiently, cost-effectively and as scheduled.

Process Evaluation
Characteristics:
- Shorter-term feedback on programme implementation, content, methods, participant response, practitioner response
- What is working and what is not working
- Uses quantitative or qualitative data
- Data usually involve counts, not rates or ratios
- Also called formative (internal) evaluation

Process Evaluation
Considerations:
1. Sources of data: programme data
2. Limitations of data: completeness
3. Time frame
4. Costs
Example: number of low-income women screened through a mammography programme

2. IMPACT EVALUATION
Impact refers to the immediate effects of the intervention, or short-term outcomes. It is carried out at the end of the programme.

Impact Evaluation
Characteristics:
- Long- or short-term feedback on behaviours, knowledge, attitudes, beliefs
- Uses quantitative or qualitative data
- Also called summative (external) evaluation
- Addresses whether we are accomplishing our intermediate objectives

Impact Evaluation
Considerations:
1. Sources of data: surveillance or programme data
2. Limitations of data (validity and reliability)
3. Time frame
4. Costs
Example: mammography screening rates

3. OUTCOME EVALUATION
Outcomes are the long-term consequences; they are usually the ultimate goals of a programme.
Outcome evaluation involves an assessment of the long-term effects of a programme. It is more difficult and time-consuming to implement.

Outcome Evaluation
Characteristics:
- Long-term feedback on health status, morbidity, mortality, disability, quality of life
- Uses quantitative data
- Also called summative (external) evaluation
- Often used in strategic plans
- Are we making a difference in long-term health outcomes?

Outcome Evaluation
Considerations:
1. Sources of data: routine surveillance data
2. Limitations of data (validity and reliability)
3. Time frame
4. Costs
Example: cancer death rates

Evaluation Framework

Framework Development
Response to a problem:
Evaluation not applied consistently across programme areas
Evaluation not well-integrated into the day-to-day management of most programmes

Evaluation Questions Addressed
- What will be evaluated? What is the programme? In what context does it exist?
- What aspects of the programme will be considered when judging programme performance?
- What standards (type or level of performance) must be reached for the programme to be considered successful?

Evaluation Questions Addressed
- What evidence will be used to indicate how the programme has performed?
- What conclusions regarding programme performance are justified by comparing the available evidence to the selected standards?
- How will the lessons learned from the inquiry be used to improve public health effectiveness?

Framework for Program Evaluation in Health Promotion
Action steps (a cycle):
1. Describe the programme
2. Evaluation preview
3. Focus the evaluation design
4. Collect data
5. Analyse and interpret data
6. Disseminate lessons learnt
Standards applied throughout: utility, feasibility, propriety, accuracy.

Action steps: elements and considerations
- Use standards to form the basis for making judgements about programme performance
- Conduct analysis to detect patterns in evidence
- Make recommendations to take further action

Why Include Stakeholders?
- Need agreement on programme goals
- Need agreement on the purpose of the evaluation; one evaluation cannot meet all needs
- To build capacity to address health needs, and provide more control over factors affecting health
- To increase the credibility of the evaluation
- To increase the likelihood that evaluation recommendations will be implemented

Stage of Programme Development
- Planning: evaluate to refine plans and ensure the programme is focusing on the right things
- Implementation: characterize what needs to happen for the programme to be effective; is the programme being implemented as planned?
- Effects, after the programme has occurred: identify and account for intended and unintended programme consequences

Methods
- Experimental: random assignment to compare the effect of an intervention with otherwise equivalent groups (a minimal sketch follows this list)
- Quasi-experimental: compare nonequivalent groups, or use multiple waves of data to set up a comparison (e.g. time series)
- Observational: comparisons within a group to explain unique features of its members (e.g. comparative case studies, cross-sectional surveys)
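To make the experimental option concrete, here is a minimal sketch in Python of random assignment to two otherwise equivalent groups; the participant IDs and seed are hypothetical, and this is an illustration, not a full trial protocol:

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into intervention and control arms."""
    rng = random.Random(seed)                 # fixed seed so the allocation is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (intervention, control)

intervention, control = randomize([f"p{i:02d}" for i in range(1, 21)])
print(len(intervention), len(control))        # 10 10
```

Because assignment is random, the two arms should be equivalent on average, so a later difference in outcomes can more plausibly be attributed to the intervention.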

Ensure Use and Share Lessons Learned
- Deliberate effort is needed
- Rehearse the eventual use of findings
- Translate new knowledge into action (gets easier with practice)
- Communicate with all partners in an atmosphere of trust

Standards for Effective Evaluation
Standards for assessing the quality of evaluation activities are grouped into 4 categories:
- Utility Standards
- Feasibility Standards
- Propriety Standards
- Accuracy Standards

Standards
- Utility: ensures that an evaluation will serve the information needs of intended users
- Feasibility: ensures that the evaluation will be realistic, viable, and pragmatic
- Propriety: ensures that an evaluation is ethical, conducted with regard for the rights and interests of those involved and affected
- Accuracy: ensures that an evaluation produces findings that are considered correct

HOW TO EVALUATE?

Depending on its purpose, there may be a number of questions that an evaluation attempts to answer and it is important that these are made explicit in advance.
The choice of such questions will affect the methodology chosen. The questions essentially look at whether an activity achieved its purpose, and, if not, why not.

EVALUATION QUESTIONS
Evaluation tries to answer questions related to the following 4 Es:
- Efficiency: have the aims and objectives been attained?
- Effectiveness: to what extent have the objectives led to the desired outcome or outcomes?
- Efficacy: have the interventions been appropriate for the situation (place and time)?
- Economy: cost-benefit analysis and cost-effectiveness analysis

Evaluation Questions
- Questions related to INPUTS
- Questions related to OUTPUTS
- Questions related to OUTCOMES

Major programme components and their relationship:
Input (resources) → Process (activities) → Output (utilization of and access to services) → Outcome (population-based and programme-based results)

Failures to achieve the objectives can occur at any of the three levels:
INPUTS, OUTPUTS or OUTCOMES.

INPUTS (resources) may fail to be procured, or resources may be available but not be transformed into the services (OUTPUTS) required.
Services may be provided, but fail, for a variety of reasons, to result in the anticipated OUTCOMES.

INPUTS Questions:
- Did the planned inputs (resources) arrive?
- Were they sufficient to provide the health promotion planned?
- Were the resources transformed into the programme as planned (adequate management and adequate direction)?
Inputs = Resources (human, equipment and commitment)

OUTPUT Questions:
- Service quantity
- Service quality
- Efficiency of services
- Were the services provided appropriate, relevant, and adequate?
- Acceptability to the community and the provider
- Socio-economic effects
- Technical difficulties
It is difficult to evaluate OUTPUT without evaluating PROCESS.


Process evaluation involves the monitoring and audit of an intervention. It is mainly qualitative in design, or uses non-experimental approaches; e.g. it utilizes methods such as interviews, observations and content analysis. It relates to Quality Assurance.

PROCESS
It is about assessing the operating procedures and the effects of the implementation process. E.g. you might have a very successful IRS (indoor residual spraying) campaign, but if it comes at the expense of people being bullied into opening their houses and admitting the spraymen, then the procedure would be evaluated negatively.

Process evaluation provides useful insights into the implementation process, for example, how interventions are interpreted and responded to by different groups.
In general, process evaluation is most useful in conjunction with outcome evaluation rather than on its own.

Consider the following As and Cs while planning our INTERVENTIONS:
8 As: Availability, Adequacy, Appropriateness, Adaptability, Acceptability, Accessibility, Affordability, Accountability
3 Cs: Completeness, Comprehensiveness, Continuity

The 7 Ss (to evaluate the internal environment):
Strategy, Structure, Staff, Systems, Skills, Style, Shared values, norms and beliefs (Culture)

PEST (to evaluate and monitor the external environment):
Political factors, Environmental factors, Socio-economic factors, Technological factors

What are or were the objectives of the activity being evaluated? Were the objectives that were set achieved? Were any improvements observed the direct result of the activity?

OUTCOME Effects:
In some respects impact evaluation and outcome evaluation may seem synonymous.
Outcome evaluation assesses whether a health promotion strategy has influenced people's knowledge, behaviour or health in the desired direction.
Sometimes there is difficulty in evaluation; e.g. some effects take a long time to appear, so what is evaluated will be in the area of process rather than outcome or impact.

PROCESS EVALUATION
1. Measuring the programme inputs, i.e. the resources expended in implementing the programme, in order to determine whether the programme was worthwhile (efficient and cost-effective).
2. Using performance indicators (PIs) to measure activity. PIs provide a quantifiable measure of activity. Examples are:
- Number of health educational materials produced and distributed.
- Number of people attending educational activities.
- Screening uptake rates.
- Uptake of physical activities formed and number of people involved.
PIs need to be identified at the planning stage. Monitoring PIs helps you to determine how well your programme is progressing.
3. Obtaining feedback from other people, e.g. colleagues and other staff.
4. Obtaining feedback from the clients or participants of HP programmes: their reactions, perceptions and suggestions. Methods include observation, interviews or questionnaires.
5. Documentation, e.g. reports, checklists, diaries, video-taping, slides etc.

IMPACT EVALUATION
1. Measure changes in health awareness, knowledge and attitudes.
- Measure interest shown by target groups, e.g. uptake of health education materials, phone-ins, participation in activities etc.
- Observation, questionnaires, interviews, discussions etc. Use of attitude scales.
2. Evaluate behaviour change.
- Observing what clients do.
- Recording behaviour, e.g. number of people attending exercise sessions, health screening, stopping smoking etc.
- Interviews or questionnaires.
3. Evaluate policy changes: introduction of pro-health policies in schools, workplaces etc., such as safety policies, healthy food, exercise, No Smoking etc.
4. Changes in the environment:
- Cleaner air.
- Less/no littering.
- Creation of no-smoking zones/areas.
- Provision of public toilets.
- Provision of safe water supply and better housing.
- Increase in % of food premises with an acceptable hygiene rating.
- Reduction in Aedes breeding sites.
5. Changes in health status:
- Improvements in BMI, blood pressure, fitness levels, blood cholesterol levels etc.

OUTCOME EVALUATION
This is the preferred evaluation method because it measures sustained and significant changes which have stood the test of time. It uses hard evidence and quantitative methods.
1. Behaviour change, e.g. safe sexual practices, healthy habits and other healthier lifestyle practices.
2. Policy and legislation changes, e.g. lead-free petrol, ban on indirect tobacco advertising, compulsory use of bicycle helmets and rear seat belts, gazetting of No Smoking Areas, establishment of Safety and Health Committees in all workplaces etc.
3. Environmental changes, e.g. provision of jogging tracks and playgrounds in housing areas, improved public transportation systems, better housing facilities, clean air and water, provision of separate motorcycle lanes on all major roads and highways etc.
4. Changes in health status: reduction in morbidity, disability and mortality rates; improved life expectancy; reduced prevalence of risk factors.

MEASURING BEHAVIOUR CHANGE: ATTRIBUTION TO THE INTERVENTION
1. Compare the target group's health-related behaviour before and after the intervention. Limitations: some change will occur with time anyway, and confounding factors are difficult to eliminate.
2. Compare the target group's behaviour to that of another group with similar characteristics (demographic, socio-economic) who were not given the programme. The control group is necessary to avoid attributing all behaviour change to the HP programme and therefore overestimating its achievement; a minimal worked example follows.
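The point of item 2 can be shown with a small worked example; the prevalence figures below are hypothetical, purely for illustration:

```python
# Hypothetical smoking prevalence (%) before and after a programme.
target_before, target_after = 32.0, 26.0      # group that received the programme
control_before, control_after = 31.5, 29.5    # similar group, no programme

naive_change = target_before - target_after       # 6.0 points
secular_change = control_before - control_after   # 2.0 points (change that happens anyway)
programme_effect = naive_change - secular_change  # 4.0 points

print(f"Without a control group we would claim {naive_change:.1f} points; "
      f"the control group suggests only {programme_effect:.1f} are due to the programme.")
```

Ignoring the control group would credit the programme with the full 6-point drop, overstating its achievement by the 2 points that would have occurred anyway.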

CHALLENGES IN EVALUATION
1. Deciding what to measure
Some objectives are difficult to measure, e.g. attitudes and behaviours. Need to select appropriate evaluation criteria and performance indicators (specific, sensitive, relevant etc.).

2. Contamination of HP Outcome
HP is a long-term process and can be influenced by many extraneous situational factors. How can we adjust for these confounding factors? It is difficult to ensure that any change detected is due only to the programme input and not to any outside influence.

3. When to evaluate?
The timing of evaluation affects the assessment of the overall success or failure of a programme, due to time effects.

Delay of impact
The effects of a programme may not be immediate, e.g. behaviour change. Immediate evaluation might not yield positive results.
Decay of impact
Changes due to the programme are not sustained, and after some time the situation reverts to pre-programme levels. Late evaluation will not detect the earlier effects.
Adjusting for secular trends
Many factors are already changing in the desired direction even in the absence of a HP programme. Only those changes over and above the general trend may be attributed to the programme.
Backlash or boomerang effect
A backlash or unexpected result may occur at the end of the programme which was not present in the early stages. Depending on when evaluation is done, findings may be positive or negative.

4. Is evaluation worth the effort?
Evaluation requires and consumes scarce resources (routine work vs. new projects). Evaluation is worthwhile if it will make a difference.

Summary

Evaluation Aim
- Evaluation is not an aim in itself
- The aim is to learn from the experience and make programme improvements for effective national work
- Collaboration with national partners is key
- Communication is important

Organizational Issues
- Evaluation is often the last thing considered or funded
- Programme personnel may feel threatened by an evaluation
- Need to maintain objectivity: involvement may reduce resistance but may threaten objectivity
- Trade-off between a comprehensive evaluation and nothing at all; apply the 10% Rule as you design and implement programmes

Special Challenges in the Evaluation of Community-Based Programmes
- Varying degrees of intervention exposure
- Difficulty in controlling intervention exposure
- Running programmes in multiple locations
- Accounting for community-level variance: individuals in communities, neighborhoods, schools and worksites are correlated
- Intra-class correlation coefficient (ICC) / design effect: in practical terms, this means an inflated sample size (see the sketch below)
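The standard way to quantify this inflation is the design effect, DEFF = 1 + (m - 1) × ICC, where m is the average cluster size. A minimal sketch with hypothetical numbers:

```python
def adjusted_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sampling sample size for cluster sampling.

    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size.
    """
    deff = 1 + (cluster_size - 1) * icc
    return deff, n_srs * deff

# Hypothetical values: 400 people needed under simple random sampling,
# clusters (communities) of 20 people, ICC = 0.05.
deff, n_needed = adjusted_sample_size(400, 20, 0.05)
print(f"Design effect {deff:.2f}; required sample {n_needed:.0f}")  # 1.95; 780
```

Even a modest ICC nearly doubles the required sample here, which is why community-based evaluations budget for many more participants than an individually randomized study would need.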

Planning for Effective Health Promotion Evaluation

Evidence in Health Promotion
- Acting on the evidence: evidence-based practice
- Generating evidence: accountability
- Judgements: effectiveness, worth, value
- Improving practice: ours and theirs
- Making the case for health promotion: we are competing against illness treatment

Integrated Health Promotion Planning and Evaluation

Organisational HP Planning Template (columns):
- Organisational HP Goal (links with Priority Issue column in Switch reporting)
- Population Target Group/s (links with Population Group column in Switch reporting)
- Objective, with Estimated Impacts (qualitative and/or quantitative) per objective
- Health Promotion Interventions & Capacity Building strategies: screening, individual risk assessment and immunisation; social marketing and health information; health education and skill development; community action; settings and supportive environments; organisational development; workforce development
- Resources
- Estimated Reach
- Timelines & by whom
- Estimated Budget from C&WH program
- (Optional) Estimated Other Funding sources
- Total Budget per Objective; Total Budget per Program Goal

Planning Health Promotion Programs
- Health issue → Goal (long-term change)
- Determinants → Objectives (intermediate-term change)
- Actions on priority determinants of the issue → Strategies (short-term change)

Evaluating Health Promotion Programs
- Health issue / Goal → Outcome evaluation
- Determinants / Objectives → Impact evaluation
- Actions on priority determinants of the issue / Strategies → Process evaluation

When do I use each type?
- Process evaluation: use during the life of the program. Includes participant satisfaction, quality of materials, quality of delivery etc.
- Impact evaluation: use at the completion of specific project stages (i.e. after sessions, at monthly intervals and/or at program completion).
- Outcome evaluation: includes reductions in incidence/prevalence of health conditions, changes in mortality, improvements in quality of life, and long-term changes in behaviour (e.g. smoking rates).

Evaluation Planning Grid

Making objectives SMART
- Specific: clear and precise
- Measurable: amenable to evaluation
- Achievable: realistic
- Relevant: to the health issue, the population group and your organisation
- Time specific: time frame for achieving your objective
Objective: is it SMART?

Evaluation Planning Worksheet
- What are the key questions that the evaluation should answer? Be strategic, rather than doing reach evaluation on every single intervention within a program.
- Bigger-picture questions, e.g. sustainability, who is missing out.
- What information do we need in order to answer these questions?
- How will we get this information: who, when, how?
- Planning for analysis, reporting and dissemination.
- Budget, being realistic: what can we afford to do; what does the government/donor expect for its investment?

Overview of Program Evaluation

What is Program Evaluation?


The systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming.

Why Evaluate?
- Understand what is working and what isn't (program improvement)
- Make sound decisions (should the program be continued, scaled back, discontinued, enhanced?)
- Justify programs, explain accomplishments
- Promote programs, products, and services
- Gain funding
- Guide program development

Importance of Use
- The idea of using the evaluation data to improve a program or make decisions is central to evaluation
- Evaluations should be conducted with a specific use for, and user of, the evaluation in mind

Types of Evaluation based on Purpose (Intended Use)
- Front-End: to guide program development (Is this program needed? How should it be designed? What should the program outcomes be?). Used by those developing the program.
- Formative: to guide program improvement (What is working? What needs to be improved? How can it be improved?). Generally used internally; often occurs in the early stages of program development.
- Summative: to guide decisions about the program's future. Used internally and externally by key decision-makers (program staff, supervisors, funders); often occurs later in program development.
Formative vs Summative:
"When the cook tastes the soup, that's formative evaluation; when the guest tastes it, that's summative evaluation."

The Evaluation Process
1. Focus your evaluation
2. Develop your evaluation plan
3. Develop data collection tools
4. Collect data
5. Analyze data and interpret results
6. Communicate and use the results to improve the program or make decisions

Focusing Your Evaluation
A. Identify the purpose for your evaluation (clarify uses and users)

Focusing Your Evaluation
Questions to consider:
- Who are your program stakeholders?
- Why are you considering evaluating your program?
- Who are your evaluation stakeholders?
- Who is the primary intended user of your evaluation?
- Specifically, how will the results be used?

Focusing Your Evaluation
A. Identify the purpose for your evaluation (clarify uses and users)
B. Describe your program
Introducing the logic model!

What is a Logic Model?
A diagram that summarizes the key elements of a program in a way that shows the relationships among them: the relationship between what we put in, what we do, and what results. It describes the sequence of events thought to bring about benefits or change.

Everyday Logic Model
Hungry → Gather ingredients → Cook and eat → Feel satisfied

Logic Model
Situation → Inputs → Outputs → Outcomes

SITUATION
The conditions that give rise to the program:
- What needs to be done?
- What are our priorities?
- What do our stakeholders want done?

INPUTS
What we invest: resources and contributions, including staff, volunteers, time, money, materials, equipment, technology, partners, facilities.

OUTPUTS
What we do and who we reach: activities (training, recruitment, workshops etc.), products (activity guides, exhibits, curricula, posters etc.), and the people we reach (visitors, citizens, participants, students).

OUTCOMES
The results:
- Learning: awareness, knowledge, attitudes, skills, opinions, motivations
- Action: behavior, decision-making, social action, policies
- Ultimate impact: social, economic, and environmental conditions

Outputs vs. Outcomes
Output (activity) driven: "Teens volunteered an average of 10 hours over the summer in community service projects."
Outcome (impact) driven: "Teens learn how to identify and solve a community need. Teens feel more responsible for their community."
Outcomes answer: SO WHAT? What difference does the program make?

Logic Model: 2 other pieces
Situation → Inputs → Outputs → Outcomes, plus External Factors.

External Factors
The context in which the program is situated and the external conditions which influence its success, such as: politics, policies, demographics, economics, culture, the biophysical environment.

Logic Model: 2 other pieces
Situation → Inputs → Outputs → Outcomes, plus External Factors and Assumptions.

Underlying Assumptions
Beliefs we have about the program and the way we think it will work, concerning:
- the participants
- the way the program will operate
- how resources will be used
Faulty assumptions lead to poor results: are your assumptions realistic and sound?

Focusing Your Evaluation
A. Identify the purpose for your evaluation (clarify uses and users)
B. Describe your program
C. Consider logistics:
- Available staff for the evaluation
- Timeframe
- Money/other resources available
- Contextual or other external factors that may affect the evaluation process

Developing your Evaluation Plan

Evaluation Plan
- Evaluation Questions: What do you want to know?
- Indicators: How will you know it?
- Sources of Info: Who can provide the info?
- Data Collection Tools: What will you use to gather the info?
- Design/Sampling: From whom, and when, will the info be gathered?


Evaluation Questions: 2 Phases
1. Generate a list of potential evaluation questions.
2. Narrow your list. Would the evaluation question:
- Be of interest to the primary intended user?
- Provide information that addresses the intended use for the evaluation results?
- Contribute information that is not already known?
- Be of continuing interest?
- Be feasible, in terms of time, money, and skill?
Issues can emerge that require new or revised questions; be flexible, yet do not chase every new or interesting direction that emerges.


Indicators
Evidence or information that represents the phenomena of interest: What would indicate that this program objective/outcome has been achieved? What does success look like? Indicators help you know something; they are usually specific and measurable.
For each aspect you want to measure, ask: What would it look like? What kind of information is needed?


Determine Sources of Information (Who will provide the data?)
- Participants
- Non-participants
- Key informants (parents, teachers, previous participants)
- Program staff, administrators, or partners
- Program documents (logs, records, minutes of meetings)


Determine Data Collection Tools
Tool = what you use to collect data.
Choice of tool depends on:
- Intended users of, and use for, the evaluation
- The evaluation question, indicator, and source of information previously identified
- Amount of time and money
- Skill and philosophy of the evaluator
- Weighing of advantages and disadvantages


Types of Data (Information) that Can Be Collected
- Qualitative: descriptive, narrative, rich in explanation. Often collected from a smaller set of participants. Depth.
- Quantitative: numerical measurement. Often collected from a larger set of participants. In some cases, can be generalizable to a population. Breadth.


Design
When would you need to collect data, and from whom, if you want to show:
- a gain or change in participants' knowledge?
- participants outperform another group?
- participants have the desired characteristic, behavior, or knowledge after your program?
- changes in participants over time?
- results can be attributed to your program in a causal sense?


Sampling (Who/how many?)
- A sample is a subgroup of a larger group (the population).
- Sampling refers to the method used to select the people, classrooms, counties, etc. to study.
- Sampling decisions are based on population size, what you want to know, and the resources available.
- First questions to ask: What is the population of interest, and is sampling needed? If the population is small, you will likely include all its members (a minimal sketch follows).

Developing Data Collection Tools
Interviews or Surveys?
- Interview: open-ended questions designed to elicit thoughts, feelings, experiences, and stories from respondents; no response options provided. Qualitative.
- Survey: a list of stable and primarily closed questions. Quantitative.

Interviews
- Create an interview guide
- How many respondents? An iterative process

Conducting Interviews
- Be consistent with questions and cues!
- Reassure participants that you will protect their identity
- Begin and end by thanking participants
- Request permission to tape
- Hold in neutral, private territory where the respondent will be comfortable

Focus Groups
- Group interviews that encourage participants to build on each other's responses
- Guidelines for developing interviews hold for focus groups as well
- Last roughly 1-2 hours and involve 6-10 people

Surveys
- Process large amounts of data
- Best for generalizing results to a larger population
- Can be used for planning, formative, and summative types of evaluation
- Always pilot test your surveys!

Collect Data
- First, review your logic model, evaluation focus, and planning matrix to ensure that you are on target with the data you collect
- Look for existing data that may meet your needs before designing a new data collection protocol

Set a Data Collection Schedule
- What data collection tools are you using?
- What are the sources for your data?
- Who will collect the data? Who can confirm the schedules?
- When should data be collected?

Data Collection
- Maintain consistency!
- Data collection methods and instruments need to be pilot tested
- Standardize data collection

Ethics of Conducting an Evaluation
- Ask only for needed information
- Confidentiality and anonymity
- Informed consent

Implementing a Survey
How will you (or will you) entice people to complete your survey?
- Explain why the survey is important
- Thank people for their participation
- Send a postcard reminder after 3 weeks
- Train surveyors to be gracious, polite, consistent, and friendly

Focus Groups
- If respondent types vary on 2 or more important dimensions, hold separate meetings
- Have at least 2 sessions for each group
- Continue to schedule focus groups as long as you continue to get different types of information

Focus Group Planning
- Location: pleasant, neutral (e.g. library, extension office, school)
- Time: convenient for participants
- Incentives
- Thank participants and confirm location, time, and date with them in advance

Focus Group Implementation
- Moderator, note-taker, recording device, quiet snacks!
- If possible, seat dominant personalities near the moderator and shy participants across from the moderator
- Let participants know about the recorder in a nonthreatening way
- Transcribe tapes within 3-5 days

Analyze Data and Interpret Results

Surveys
When the surveys come back:
- Code them
- Enter the data on a spreadsheet
- Analyze the data: frequencies, means, correlating one variable with another, looking for significant differences (a minimal sketch follows)
- Reduce the data to summary statements and relate these back to your evaluation questions

Focus Groups
- Transcription is not always necessary; notes are often sufficient
- Identify similarities and patterns
- Group themes into meaningful categories
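For the survey side, here is a minimal analysis sketch in Python with pandas and SciPy; the variable names and scores are hypothetical, chosen only to show frequencies, means, a correlation, and a significance test:

```python
import pandas as pd
from scipy import stats

# Hypothetical coded survey data: one row per respondent.
df = pd.DataFrame({
    "group":     ["pre"] * 4 + ["post"] * 4,
    "knowledge": [4, 5, 6, 5, 7, 8, 6, 9],   # knowledge score, 0-10
    "sessions":  [0, 1, 2, 1, 3, 4, 2, 5],   # sessions attended
})

print(df["group"].value_counts())                # frequencies
print(df.groupby("group")["knowledge"].mean())   # means per group

r, p = stats.pearsonr(df["sessions"], df["knowledge"])   # correlate one variable to another
print(f"sessions vs knowledge: r = {r:.2f}, p = {p:.3f}")

pre = df.loc[df["group"] == "pre", "knowledge"]
post = df.loc[df["group"] == "post", "knowledge"]
t, p = stats.ttest_ind(post, pre)                # look for significant differences
print(f"pre vs post: t = {t:.2f}, p = {p:.3f}")
```

With a real survey the same steps apply, but remember the caution below: report such results as "the data suggest", since a pre/post difference alone does not establish causality.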

Communicate and Use Evaluation Results

Organize and Synthesize your data
- Tie specific results to your evaluation questions, using your coded responses
- Present your findings without interpretation; save that for the conclusions
- Use qualitative data to describe quantitative data, if available
- Explain results

Draw Conclusions
- Review open-ended questions for insights
- Don't ignore unexpected results
- Consider unsolicited comments
- Avoid claiming causality (use "the data suggest" or "the results indicate")

Make Recommendations
- Focus on the most important, data-based conclusions
- Be practical
- List specific action items
- Request stakeholder review

Sharing Your Findings
- Formal written report
- PowerPoint
- Press release
- Web site
Customize for the audience and occasion.

Pilot Testing
You may want to do a pilot test in order to evaluate the effect of your program. (A pilot test is a practice run using a small group who are similar to your target audience.)
In a pilot test you collect feedback about the program from this test group. You evaluate the pilot program and make any needed changes to your program before you carry it out with the wider audience.
It gives you a chance to see if there are any major problems before you commit yourself to the program, and it gives you an idea of possible evaluation results.
