
Guidelines on Health Promotion Evaluation




Health Promotion

Health promotion is the process of enabling people to increase control over, and to improve, their health. [1]

Health promotion represents a comprehensive social and political process: it embraces not only actions to improve the skills and capabilities of individuals, but also actions to change the social, environmental and economic conditions so as to improve public and individual health.

Health promotion interventions often involve different kinds of activities, a long time scale and several partners, each of whom may have their own objectives. Evaluating health promotion interventions is therefore not a straightforward task.


Health Promotion Evaluation

Evaluation implies judgment based on careful assessment and critical appraisal of given situations, which should lead to drawing sensible conclusions and making useful proposals for future action. [2]


In all evaluation, there are two fundamental elements: identifying and ranking the criteria (values and aims); and gathering the kind of information that will make it possible to assess the extent to which they are being met. [3]


Below are some criteria which can be used to judge the worth of a health promotion intervention: [4]

Effectiveness: the extent to which aims and objectives are met
Appropriateness: the relevance of the intervention to needs
Acceptability: whether it is carried out in a sensitive way
Efficiency: whether time, money and resources are well spent, given the benefits
Equity: equal provision for equal need

Outcome Hierarchy

Well-defined outcomes are important in evaluating health promotion interventions and facilitate better communication of what constitutes success in health promotion. Below is an outcome model for health promotion by Nutbeam (1996): [5]


Table 1: Outcome model for health promotion

Health Promotion Actions: Education | Facilitation | Advocacy

Health Promotion Outcomes: Health literacy | Social influence and action | Healthy public policy and organisational practice

Intermediate Health Outcomes: Healthy lifestyles | Effective services | Healthy environment

Health and Social Outcomes: Mortality, morbidity, disability, quality of life


In this model, there are three levels of outcomes. Health promotion outcomes reflect changes in personal, social and environmental factors which may improve people's control over health and thereby change the determinants of health (intermediate health outcomes). The goal of health promotion actions is to reduce mortality, morbidity and disability in the population (health and social outcomes).
Evaluation Cycle

Figure 1: Evaluation cycle

Needs assessment → Programme planning & formative evaluation → Process evaluation → Impact evaluation → Outcome evaluation

Figure 1 is a simplified version of the evaluation cycle, outlining all the important stages in the cycle. [6]



Needs Assessment

Needs assessments are conducted in order to get a comprehensive picture of the
health problems in the community and guide the choices about the type of health interventions
required.

Needs assessment can be divided into two main stages:

Stage 1: Identifying the priority health problem. The purpose is to collect data and canvass a range of opinions to determine the priority health problem. The magnitude of the problem should be clearly specified, along with details about the target group having the problem.

Stage 2: Analysis of the health problem. The purpose is to collect additional data about the factors that contribute to the health problem.

For details of how to perform a needs assessment, you may refer to Hawe P, Degeling D and Hall J (1990). [7]



Evaluability Assessment

Evaluability assessment is a diagnostic and prescriptive tool for improving programmes and making evaluation more useful. It is a systematic process for describing the structure of a programme (i.e. the rationale, objectives, activities and indicators of successful performance), and for analysing the plausibility and feasibility of achieving its objectives, its suitability for in-depth evaluation, and its acceptability to programme managers, policy makers and programme operators. [8]


In evaluability assessment, you check whether or not a programme satisfies a number of preconditions for evaluation. To make sure that the programme is ready to be evaluated, you should be able to answer the following questions: [9]


1. Why will you evaluate?

This sets the evaluation design in motion. You need to identify the primary users of the evaluation information and find out what type of information they require. Success may mean different things to different groups of people or stakeholders, who have their own agendas and interests. For example, funders of a project may be looking for efficiency, or results which can be interpreted as cost-effective. Practitioners may be looking for evidence that their way of working is acceptable to clients and that the objectives set have been achieved. It is therefore important to be clear at the outset about whose perspectives are being addressed in any evaluation.

2. Whom will you evaluate?

This refers to the target group of the programme: individuals, groups or community;
and the setting: school-based or home-based. For example, in a school-based drug education
programme, students, parents, teachers, administrators and community leaders all might be
evaluated.

3. What will you evaluate?

This relates to the targets of evaluation. For example, in the drug education
programme, you might appraise knowledge, attitudes, and behaviours of the students; ask the
teachers if the materials were easy to use; evaluate the willingness of the teachers to
implement the programme; and assess the cost effectiveness of the programme. You might
also ask the parents and community leaders about how the programme has helped the students
and community as well as areas that are in need of improvement.

Performance indicators will usually be developed to help you evaluate the programme. These indicators should be developed in line with the following rules (see the sketch after this list):

a. Identify appropriate standards (as a basis for comparison).
i. How did I perform this time compared with last time?
ii. How did I perform compared with other people?
iii. How well did I perform out of a hundred? (consistency)
b. Develop quantifiable indicators.
c. Establish their relationship(s) with the relevant objective(s).
d. Ensure that it is relatively easy to collect data for measurement.
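As a simple illustration of rules (a) and (b), the sketch below (in Python) expresses a screening-uptake indicator and compares it against the three kinds of standard listed under (a). All names and figures are hypothetical, not drawn from any particular programme.

```python
def coverage_indicator(numerator: int, denominator: int) -> float:
    """Return a coverage-style performance indicator per 100 of the target group."""
    return 100.0 * numerator / denominator

# Hypothetical figures for a screening-uptake objective.
this_time = coverage_indicator(340, 1000)  # current programme round
last_time = coverage_indicator(280, 1000)  # standard (i): own past performance
peer_site = coverage_indicator(300, 950)   # standard (ii): other people's performance

# Standard (iii): performance expressed out of a hundred.
print(f"Uptake this round: {this_time:.1f} per 100")
print(f"Change versus last round: {this_time - last_time:+.1f}")
print(f"Difference versus peer site: {this_time - peer_site:+.1f}")
```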

4. Where will you evaluate?

This means the place where evaluation is carried out. We should choose a site that
is comfortable, both physically and mentally, for the participants being evaluated. For
example, a participant may feel more comfortable at a place which is convenient and familiar
to him/her. The participants may prefer completing questionnaires by themselves, in the
absence of an investigator. It may be more convenient for students to complete evaluation
questionnaires in school.

5. When will you evaluate?

The timing of the evaluation is important. The outcome of a programme will vary at different time periods after the intervention. Some effects are immediate whilst others are slow to emerge. Some effects are transient and others long-lasting. Green (1977) [10] has highlighted some of the ways in which the evaluation of outcomes of health promotion programmes may be influenced by timing:


a. The sleeper effect: if the effects of the programme become apparent only after a considerable period of time, evaluation carried out upon completion of the programme will not capture them. For example, behaviour change takes time to develop; if we evaluate too early, no effect will be observed.

b. The backsliding effect: the intervention has a more or less immediate effect which decreases over time. If we evaluate too late we will not measure the immediate impact; and even if we do observe the early effect, we cannot assume it to be permanent.

c. The trigger effect: the programme sparks off a change which would have occurred spontaneously at a later date. This may, of course, have real benefits, but we have to be careful not to overestimate the effects of the intervention.

d. The historical effect: some or all of the changes could be due to causes other than the programme. For example, if the objective of the intervention is to increase the prevalence of a variable that is on the increase anyway, we shall overestimate the benefits of the intervention.

e. The contrast effect: this may occur when the programme is terminated prematurely, or when the subjects have expectations which are not fulfilled. A consequently embittered group of clients may act in defiance of advice on behaviour, producing a backlash effect. Evaluation during, or soon after, the intervention would measure the benefits but not the contrasting backlash occurring after termination of the activity.

6. How will you evaluate?

This is about the evaluation designs that will be used. This will be discussed later.

7. Who will do the evaluation?

If the programme is evaluated by the health promotion specialists who are involved in the programme, the evaluation may be biased. It is best for the evaluation to be carried out by an external health promotion specialist. However, this may increase the cost of the programme and is not always feasible.

Below are the steps in evaluability assessment: [7]


Step 1: Identify the primary users of the evaluation information and find out what they need to know.

Step 2: Define the programme. Define the boundaries of the programme and distinguish the background activities from the programme itself.

Step 3: Specify goals and expected effects. The goals should be realistic and clearly defined. Both the intended effects (i.e. the goals) and the unintended effects (i.e. unexpected or side effects) should be considered when planning the programme. For example, if you are planning a health promotion programme to promote cervical screening and the programme is successful, many women may attend the Women's Clinic requesting a Pap smear, which may be beyond the capacity of the clinic.

Step 4: Ensure that the programme is plausible. Make sure that the intervention is effective by clearly defining the problem first and searching the literature for effective interventions.

Step 5: Decide on measurable and testable programme activities and goals. Not all programme activities are worth measuring or monitoring, and not all goals are measurable. One has to decide what should be measured and monitored.

Step 6: Decide on what is sufficient in the evaluation. One needs to make sure that there are enough data to supply the users with the information they need.

Step 7: Make sure that the programme is being implemented as intended. This is the same as process evaluation.

Types and Levels of Evaluation in Health Promotion [11]


There are five types and levels of evaluation:

1. Formative evaluation

Formative evaluation is also called pre-testing and many people would group it
under process evaluation.

The objective of formative evaluation is to examine how well the intervention is
developed to achieve the planned change. In other words, it is used to ensure that health
promotion interventions are tailor-made to their particular, defined target group(s) and that the
intervention is in fact effective in achieving its aim.

Formative evaluation is the testing of the intervention with a sample of the target group. Very often, qualitative methods such as focus groups and interviews are carried out at this stage. It is important to pay attention to the characteristics of the target group as well as the language, design and communication channels of the proposed project. Other important issues include the relevance of the message, imagery and communication media at personal and group levels, recall and comprehension of the message, and the credibility, appeal and quality of the messages or imagery. Marketing and mass media theories would need to be applied at this stage. For example, if you are producing a pamphlet on senile dementia targeted at the elderly, you may need to hold a focus group with elderly participants to solicit their opinions about the pamphlet (whether they understand it, like the graphics, etc.) so that you can refine it to suit their needs.

2. Process evaluation

Process evaluation examines the extent to which the programme is delivered as designed. [12] It is an essential component of any health promotion programme and a prerequisite of impact and outcome evaluation. One cannot assess the effectiveness of any programme unless the programme has been implemented as desired.

In general, process evaluation employs a wide range of qualitative methods, for example interviews, diaries, observations and content analysis of documents. These methods tell us a great deal about a particular programme and the factors leading to its success or failure, but they are unable to predict what would happen if the programme were replicated in other areas. More information about qualitative evaluation can be found in Table 2.

Process evaluation should be able to address the following questions:

How well was the programme implemented?
Did the intervention reach the intended target recipients?
What proportion of the target recipients actually received the intervention?
Was the intervention acceptable to the recipients?
What was the satisfaction level of the recipients?
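Two of these questions, reach and satisfaction, lend themselves to simple quantification. The following sketch uses entirely hypothetical figures purely for illustration:

```python
# All figures below are assumed for illustration only.
intended_recipients = 5000                      # size of the intended target group
actual_recipients = 3200                        # recipients who received the intervention
satisfaction_scores = [4, 5, 3, 4, 4, 5, 2, 4]  # 1-5 ratings from a sample of recipients

reach = actual_recipients / intended_recipients
mean_satisfaction = sum(satisfaction_scores) / len(satisfaction_scores)

print(f"Reach: {reach:.1%} of the intended target group")
print(f"Mean satisfaction: {mean_satisfaction:.1f} out of 5")
```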

3. Impact evaluation

Both impact and outcome evaluation assess the effects of an intervention. Impact evaluation assesses the immediate achievements of an intervention which will bring about health outcomes (corresponding with the measurement of the programme's objectives). These achievements can be classified broadly into behavioural and non-behavioural dimensions. Achievements in the behavioural dimension are usually changes in awareness, attitudes, knowledge, skills and behaviour among project recipients. Non-behavioural achievements centre on organisational and policy changes.

Knowledge measures

Knowledge measures aim to assess whether or not the transmission of factual information to the programme recipients is effective, and whether the information can be understood or recalled. This is usually assessed by quasi-experimental methods such as pre- and post-intervention knowledge tests.
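As an illustration, the sketch below analyses pre- and post-test scores with a paired t-test. The scores are invented and SciPy is assumed to be available:

```python
from scipy import stats

# Hypothetical knowledge-test scores for the same ten recipients.
pre_scores = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]    # before the programme
post_scores = [16, 18, 13, 17, 14, 17, 12, 19, 15, 18]  # after the programme

# Paired t-test: each recipient serves as his/her own control.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean knowledge gain: {mean_gain:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```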

Skill measures

Skill measures aim to assess the extent to which the programme recipients can
master certain skills or perform actions to promote health. This can be assessed through
observations and demonstrations of skills in settings approximating those encountered in real
life.

Attitude measures

Attitude measures aim to assess changes in the values and beliefs that influence individuals to act in a particular manner. These can be assessed through self-report inventories.

Behavioural measures

Behavioural measures aim to assess the changes in behaviour under normal
circumstances in real life as a result of the intervention. This can be assessed by observations
or self-report inventories.

Environmental and policy measures

Environmental and policy measures aim to assess the changes in policy (e.g.
statements, guidelines, rules and regulations) and infrastructure (e.g. participation, networks,
committees and facilities) at both organisational and community levels.

Organisational support can be measured along several dimensions: [13,14] purpose, structure, leadership, relationships, helpful mechanisms, rewards, expertise and attitude change.

Community capacity can be measured in terms of a number of dimensions, [15-17] namely participation, commitment, self-other awareness, articulateness, conflict containment, management of relationships and social support.

Impact evaluation tends to be the more popular choice because it is easier to do, less costly and less time-consuming than outcome evaluation.

Below are examples of some measures for impact evaluation of an anti-smoking programme:

Increased knowledge, e.g. about the effects of passive smoking
Changed attitudes, e.g. less willingness to be a passive smoker
Acquisition of new skills, e.g. learning relaxation methods to reduce stress instead of smoking
Introduction of health policies, e.g. funding to enable GPs to prescribe nicotine replacement aids for poor people

4. Outcome evaluation [18]


Outcome evaluation assesses the long-term effects of an intervention and usually corresponds with measurement of the goal of the programme. It focuses on the physiological and social aspects of an individual's health, assessing whether changes in risk factors, morbidity, mortality, disability, functional independence, equity and quality of life occur as a result of the intervention. Functional independence, equity and quality of life can be examined by either a single-item measure or a composite score developed from a number of measures. Outcome evaluation is usually more complex, costly and time-consuming, and requires more resources, than impact evaluation. However, it is needed because it measures sustained changes which have stood the test of time.
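As an illustration of a composite score, the sketch below combines quality-of-life sub-measures, rescaled onto a common range and weighted. The domains, scores and weights are all assumptions, not part of any standard instrument:

```python
def rescale(raw: float, minimum: float, maximum: float) -> float:
    """Rescale a raw domain score onto a common 0-100 range."""
    return 100.0 * (raw - minimum) / (maximum - minimum)

# Hypothetical raw scores for one respondent; each domain has its own scale.
domains = {
    "physical":  (rescale(22, 0, 30), 0.4),   # (rescaled score, assumed weight)
    "social":    (rescale(14, 0, 20), 0.3),
    "emotional": (rescale(35, 0, 50), 0.3),
}

# A weighted sum of the rescaled domain scores gives the composite.
composite = sum(score * weight for score, weight in domains.values())
print(f"Composite quality-of-life score: {composite:.1f} / 100")
```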

Below are examples of some measures for outcome evaluation of an anti-smoking programme:

Reduction in risk factors, e.g. a reduction in the number of smokers and the amount of tobacco consumed per person
Reduced morbidity, e.g. reduced hospital admission rates for respiratory illness and coronary heart disease
Reduced mortality, e.g. reduced mortality from lung cancer

5. Economic evaluation [18]

Health promotion practitioners will often carry out the above four types of evaluation, but administrators and managers would like to know whether the desired results have been achieved in the most economical way and whether allocating resources to health promotion can be justified. Cost-effectiveness analysis (CEA) and cost-benefit analysis (CBA) (and sometimes cost-utility analysis (CUA)) are then carried out to see if the spending on health promotion is justifiable. It is often assumed that prevention is cheaper than cure and that health promotion saves money, but this is not necessarily the case.
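The sketch below illustrates the arithmetic of a simple cost-effectiveness comparison between two hypothetical anti-smoking interventions, including the incremental cost-effectiveness ratio (ICER). All figures are invented:

```python
def cost_per_effect(cost: float, effect: float) -> float:
    """Average cost per unit of health effect (here, per successful quitter)."""
    return cost / effect

counselling = {"cost": 200_000.0, "quitters": 400}   # assumed totals for programme A
media_drive = {"cost": 350_000.0, "quitters": 500}   # assumed totals for programme B

print(f"Counselling: ${cost_per_effect(counselling['cost'], counselling['quitters']):,.0f} per quitter")
print(f"Media drive: ${cost_per_effect(media_drive['cost'], media_drive['quitters']):,.0f} per quitter")

# Incremental cost-effectiveness ratio (ICER): extra cost per extra quitter
# when moving from the cheaper to the more effective programme.
icer = (media_drive["cost"] - counselling["cost"]) / (media_drive["quitters"] - counselling["quitters"])
print(f"ICER of the media drive versus counselling: ${icer:,.0f} per additional quitter")
```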


Evaluation Methods

There are two approaches to evaluation: quantitative and qualitative. Each approach has advantages for answering certain kinds of questions. Table 2 shows the differences between the two approaches.
Table 2: Differences between quantitative and qualitative approaches [19]

Quantitative: larger number of subjects; generalisable to a broader population.
Qualitative: smaller number of subjects/cases.

Quantitative: deductive; generalizations; objectivity; strength of the scientific method; experimental/quasi-experimental designs; statistical analysis.
Qualitative: inductive process; phenomenological inquiry; naturalistic, holistic understanding of the experience in context; content or case analysis.

Quantitative: valid, reliable instrument used to gather data; specific administration protocol.
Qualitative: the researcher is the instrument; less rigid protocol.

Quantitative: use of standardized measures; predetermined response categories.
Qualitative: able to study selected issues in depth and in detail.

Quantitative: rigor.
Qualitative: flexibility, insight.

Quantitative: results easily aggregated for analysis and easily presented.
Qualitative: understanding of what individual variation means; deepening understanding, insights.

Quantitative: can be perceived as biased, predictable, or rigged to obtain certain results.
Qualitative: offers the credibility of an outsider making the assessment.

Quantitative: results easily aggregated for analysis and easily presented.
Qualitative: results are longer, detailed and variable in content; difficult to analyse.

Quantitative: data include actual numbers, frequencies/counts of people, events, systems changes, passage of policy/legislation, and trends.
Qualitative: data include group or individual opinions or perceptions, relationships, anecdotal comments, assessments of quality, descriptions, case studies and unanticipated outcomes.

Quantitative: experimental conditions and designs to control or reduce variation in extraneous variables; focus on a limited number of predetermined measures.
Qualitative: openness to variation and multiple directions.




To determine which evaluation approach to use, one has to identify the stakeholders (those who determine what questions they want answered, and what evidence will convince them that the programme is working) and be clear about what type of information is desired by, and acceptable to, the stakeholders. In general, a qualitative approach is used in formative evaluation, and mixed qualitative and quantitative approaches are used in process, impact and outcome evaluation.


Ethical Issues

Whenever evaluation is conducted, ethical standards should be observed:

Informed consent must be obtained from the respondents to the evaluation study
All data collected must be kept in strict confidence
Respondents have the right to withdraw from the evaluation study
There must be no collection of unnecessary information from the respondents
The respondents of the evaluation study must be free from coercion
The researcher/evaluator must be value-free and must have no conflicts of interest
The researcher/evaluator must not withhold findings of the evaluation study


Is Evaluation Worth the Effort?

Ongoing routine work which is based on previously demonstrated effectiveness or efficiency is probably not worth evaluating in depth. However, new or pilot interventions do warrant more thorough evaluation, because without evidence of their effectiveness or efficiency it is difficult to argue that such interventions should become established work practices.

Evaluation is worthwhile only if it makes a difference. This means that the results of the evaluation need to be interpreted and fed back to the relevant audiences in an accessible form so that improvements can be made to the programme.

Evaluation consumes resources. As a general guide, evaluation should cost approximately 10 to 20% of total programme resources.











References

1. World Health Organization. Ottawa Charter for Health Promotion. Geneva: World
Health Organization, 1986.

2. World Health Organization. Health Programme Evaluation: Guiding Principles for Its Application in the Managerial Process for National Health Development. Geneva: World Health Organization, 1981.

3. Peberdy A. Evaluation Design. In: Katz J, Peberdy A (editors). Promoting Health:
Knowledge and Practice. Basingstoke, Hampshire: Macmillan/ Open University Press,
1997.

4. Naidoo J, Wills J. Health Promotion: Foundations for Practice. 2nd ed. Edinburgh: Baillière Tindall, 2000.

5. Nutbeam D. Health Outcomes and Health Promotion: Defining Success in Health
Promotion. Health Promotion Journal of Australia 1996;6(2): 58-60.

6. Bauman A. Qualitative Research Methods in Health Promotion. Sydney: Australian
Centre for Health Promotion Research Unit, University of NSW and University of
Sydney, 2000.

7. Hawe P, Degeling D, Hall J. Evaluating Health Promotion: A Health Worker's Guide. Sydney: Maclennan and Petty, 1990.

8. Smith MF. Evaluability Assessment: A Practical Approach. Norwell, Mass.: Kluwer Academic Publishers, 1989.

9. McDermott RJ, Sarvela PD. Health Education Evaluation and Measurement: A Practitioner's Perspective. 2nd ed. Boston: WCB/McGraw-Hill, 1999.

10. Green LW. Evaluation and Measurement: Some Dilemmas for Health Education.
American Journal of Public Health 1977;67(2): 155-61.

11. Tang KC. Health Promotion Evaluation Component Evaluation Protocol. Health Promotion Project Management Lecture, 1 August 2000.

12. Moskowitz J. Preliminary Guidelines for Reporting Outcome Evaluation Studies of Health Promotion and Disease Prevention Programs. In: Braverman MT (editor). Evaluating Health Promotion Programs. San Francisco, Calif.: Jossey-Bass, 1989.

13. Goodman RM, McLeroy KR, Steckler AB, Hoyle RH. Development of Level of Institutionalization Scales for Health Promotion Programs. Health Education Quarterly 1993;20(2): 161-78.

14. Pfeiffer JW, Jones JE. Weisbord's Organisation Assessment Questionnaire. The Annual Handbook for Group Facilitators. La Jolla, Calif.: University Associates Publishers, 1980.

15. Eng E, Parker E. Measuring Community Competence in the Mississippi Delta: The
Interface between Programme Evaluation and Empowerment. Health Education
Quarterly 1994;21(2): 199-220.

16. Dixon J. Community Stories and Indicators for Evaluating Community Development. Community Development Journal 1995;30(4): 327-36.

17. Goodman R, Speers M, McLeroy K, Fawcett S, Kegler M, Parker E, et al. Identifying
and Defining the Dimensions of Community Capacity to Provide a Basis for
Measurement. Health Education and Behaviour 1998;25(3): 258-78.

18. Thorogood M, Coombes Y (editors). Evaluating Health Promotion: Practice and
Methods. Oxford: Oxford University Press, 2000.

19. Capwell E, Butterfoss FD, Francisco V. Choosing Effective Evaluation Methods. Health Promotion Practice 2000;1(4): 307-13.



Prepared by Central Health Education Unit
Department of Health
Hong Kong SAR Government

13 June 2005
Evaluation Questionnaire

We hope that the Guidelines on Health Promotion Evaluation have provided you and your organisation with useful information on health promotion evaluation. Your feedback will enable us to improve our future production of other guidelines for health promotion practitioners. Please let us have your opinion on the following:

1. Do you find these Guidelines useful?

Very useful Somewhat useful Not useful

2. What information do you find most useful in these Guidelines?


3. What additional information would you like to include in future?


4. Do you have any further suggestion or comment on these Guidelines?


5. Any other comment?


6. Your information:

Name: Post/Job Nature:

Organisation:

Tel No.: Email:

Thank you for your suggestion/comment.

Please return by fax (Fax No.: 2591 6127) or mail to
Central Health Education Unit
Centre for Health Protection
Department of Health
7/F, Southorn Centre
130 Hennessy Road
Wanchai, Hong Kong
