

Creating Practical Results Indicators for Development Projects and Programmes

An Indicator Assessment Guide

Document 6

Prepared by Greg Armstrong

This document is intended to be used in the context of group analysis of indicators during the Results-Based Management workshop. Updated, with links, November 2009.

For more information on RBM workshops and training, see

www.rbmtraining.com

© Greg Armstrong, 2007-2009


Finding Indicators that really work

We will be discussing indicator development and assessment for your projects and programmes tomorrow. What we will be looking for are indicators that will really work - not just indicators that look impressive, but practical indicators you can use to confirm results or redirect activities as you monitor progress in your own projects or programmes.

In preparation, some of the ideas in this indicator assessment guide might help us work through the process. You might want to think about them before tomorrow’s session.

Where Indicators fit in the Results-Based Management Process

Clearly, there is no point in trying to select indicators if results have not been
properly identified, at least in a preliminary way. As the earlier discussions in
this RBM workshop have made clear, before we can develop useful indicators
for results, we need to clarify both the development problem we are trying to
deal with, and our assumptions about what works, what does not work in
development programming, and why.

These initial steps lay the foundation for developing any workable indicator,
and include answering a series of inter-related questions, which we have
already explored in the first part of the workshop:

1. Problem identification: Why do we need to do anything? What is the development problem we are trying to deal with?

2. Clarifying underlying assumptions:

 What do we assume about the nature, causes, and effects of the problem?

 What are we assuming needs to change, if we are going to solve the problem?

 What are our assumptions about links between the problem and the development activities being planned - our implicit and often unacknowledged “theories of action”?

 What assumptions are we making about the particular context or situation we are working in and its relation to the problem and to a solution?

3. Identifying possible results chains: What do we think are the relationships between available resources (human, financial, etc.); the activities we plan to undertake; and the short-, medium- and long-term results we hope to see?

The earlier discussions in this RBM workshop have reviewed a number of examples, to illustrate relatively simple ways of making sure that:


 We understand the problem before we define results;

 We have an understanding of our own and other stakeholders’ assumptions about what is and is not likely to work in solving the problem;

 We have identified the four basic types of results we can expect from these (and all) development activities, making sure we express them in simple, jargon-free language that everybody can understand.

The next logical step is to explore and agree on indicators that will tell us something about the results.

Identifying Indicators is a Group Process

Identifying, agreeing upon, and testing the validity of indicators is most effective as a group process. If it is done alone by anyone -- senior policy maker, project manager or project implementer -- the result is almost always a set of indicators that are theoretically attractive, but ultimately unusable.

Indicator development is, in a sense, a further process of making clear our understanding of the problem, our assumptions as to its resolution and what we think the changed situation (the result) will look like. By trying to define indicators, we are exploring the real, operational meanings of the results we identify.

Often during the discussion of indicators, policy makers, project implementers and stakeholders will find that their perceptions of what makes a useful indicator will differ because there are underlying disagreements about the nature of the originally defined problem, different assumptions about what works, and different perceptions of what even the simplest of results statements mean.

The only viable means of developing workable indicators, then, is to take the
time to work through the range of potential indicators together, with as many
of the stakeholders -- policy makers, managers, implementers and
beneficiaries -- as we can.

What is an indicator?

An indicator is just information -- data, evidence, descriptors -- that can tell us whether a result has been achieved, or whether we are making progress toward it, what type of result has been achieved, and what has not been achieved. A good indicator is key in programme, project and activity planning because it clarifies what we mean by results, and it keeps us focused on what we need to achieve. That, essentially, is what results-based management is about.
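
To make the idea concrete, here is a purely hypothetical illustration (the result, target group and indicators below are invented for this guide, not drawn from any particular project):

Result: Farmers in the target district apply improved irrigation practices on their own fields.

Possible indicators:

 The number and proportion of trained farmers observed using the improved practices one season after training;

 Farmers’ own descriptions, in follow-up interviews, of what they have changed in the way they irrigate, and why.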


Indicators are not proof

What is important to remember is that a results indicator is not necessarily definitive. If it were, it would be appropriate to call it a “proof”, as DNA (we are told - although even this is now questionable) might be called proof of someone’s presence at a crime site. But an indicator is usually not as reliable as that. Most indicators, to continue the analogy, are imperfect but reasonable suggestions that something has occurred - hopefully something that shows progress toward an expected result.

Because most indicators are imperfect, making a convincing case that progress is being made toward the expected result almost always requires more than one indicator.

Indicator Checklist:
8 steps to developing practical indicators for development results

In determining what indicators we are going to use in monitoring and reporting on results, we need to consider both their technical utility and the feasibility of collecting the indicator data. A technically beautiful indicator will be worthless if the data cannot be collected, or cannot be collected in a timely, cost-effective, consistent way and on a regular basis.

Field experience in working through the indicator development process for a wide range of projects -- parliamentary development, justice reform, education, rural development, health, conflict resolution, public service reform and environmental management -- has confirmed eight steps as necessary for assessing the viability of indicators.

Skipping any one of them, even if the answers might seem obvious, can cause problems in data collection and reporting in the long run. It is important to keep in mind that just because one member of a group thinks the answer to a question raised about an indicator is obvious, this does not mean that everybody will agree. Working as a group, and letting the group work, are key to establishing realistic indicators with data that are collectable, useful and, ultimately, convincing.

1. Technical validity of the indicator: Agree on the technical validity of the proposed indicators - will the suggested data actually measure or accurately describe the completed activities or results?

 While it is appropriate to collect data to determine whether and how activities have been completed in order to learn how variations in process affect results, there should nevertheless be a clear distinction between indicators designed to describe completed activities, and those designed to describe or measure changes produced by these activities -- the results.


 Many quantitative indicators are likely to be accepted as technically valid without much dissent, primarily because many people:

a) Tend to see numbers as necessarily reflecting reality;

b) Simply do not understand them, and do not want to lose face by asking for clarification; or

c) Feel more comfortable with the idea of “measuring” or counting products as results than they do with “describing results”, interpreting what the numbers mean, and investigating the heart of results - what has changed.

 But quantitative indicators should be very carefully examined. As we will discuss during the assessment process, much purported quantitative data is in effect just aggregated qualitative information, often hiding its own internal biases.

 Qualitative indicators, although they may initially appear more problematic from a technical point of view, are necessary to demonstrate how process affects results.

 If we remember that no single indicator is likely, on its own, to provide enough evidence to confirm achievement of a result, then we will be looking for multiple sources of information, including qualitative data, which can often provide a much richer understanding of progress towards results than simple quantitative data can.

2. Availability of a baseline for the indicator: A result is a change - and if we have indicators to tell us what the initial problem was, then we have a baseline to tell us whether we have made progress in working toward results. We need to ensure that information for each indicator is currently available, or could be collected in a cost-effective way within a relatively short period of time, to serve as the baseline for the indicator.

 If data cannot be collected for the baseline, the indicator’s utility for performance assessment purposes will be reduced. There are ways to mitigate this, particularly through qualitative research, but this requires additional time and resources. It is always better to have actual baseline data than to try to “reconstruct” it (as many development projects do).

 Difficulties in collecting data on indicators at the baseline stage may show that data collection will never be feasible.

    If we can’t collect the data now, why do we think we will be able to collect it later?


 And, if we cannot collect baseline data, then the indicator is relatively meaningless, and we should look for one we can actually use in practice.

 This is true for qualitative data as much as it is for quantitative data. It is risky to assume that we will actually be able to have useful interviews with stakeholders (that is, interviews that actually give us useful data) a year from now or five years from now.

    We need to test whether people will talk to us, and assess the clarity, reliability and validity of what they say, as part of the baseline data collection process.

 We should not assume that documentary data will actually give us a baseline, unless we know when the documents were published, how and by whom data were collected, where they were collected, and when.

3. Accessibility of the indicator data: This is related to the issue of baseline indicator data. We need to ensure that the data are accessible, that they can, in fact, be collected in a cost-effective, timely and regular way. Information may exist, but if it is not accessible in a way that allows consistent collection, it should not be used as a results indicator. There are any number of reasons why the data may not really be accessible, including:

 People we think will give interviews or allow themselves to be observed may, for their own quite legitimate reasons (e.g. culture, gender, fear), not do so.

 Documents may not exist, may be restricted in circulation for privacy or security reasons, or may be in an inaccessible format or language.

 There may be problems of weather or geography that will make it difficult to collect indicator data on a meaningful and relevant schedule.

4. Relevance of the indicator: We need to ensure that the indicators relate to activities and results for the appropriate target group, in the time when the development activity is occurring, and in the geographic area we are targeting. Indicators should tell us something about the changes in understanding or actions which we expect, or hope for, in the specific groups affected by training or other interventions - over the short term, or over a longer period. Indicator data should also, where this is the point of the intervention, have specific relevance to the sub-groups within the wider community where we hope to see results -- women and children, or minorities, for example, as distinct from men within a broader “farming” community.

 Superficial analysis can often miss the fact that information selected just because it is available often does not tell us much about the groups we are interested in, the physical area we are targeting, or the specific time in which our activities are happening and results are expected to occur.

 Documentary data -- apparently easy to collect -- can often be misleading if we do not examine it carefully. We will talk about examples of this in the workshop.

5. Research methods: It is easy to skip this step, but it is essential. We must identify the research methods that will be used to collect the indicator data. This is an important step in determining the feasibility of data collection.

 Will the indicator data be collected through surveys, documentary analysis, observation, or interviews?

 Will more than one method be used to collect data for an indicator and, if so, will the methods differ in the importance of the data, its interpretation, sources of data, or frequency of collection?

6. Frequency: To understand whether the indicator data will be useful, we need to understand who will use it, when they will use it, and whether we can provide the data in a timely way. We should therefore specify the frequency of indicator data collection, analysis and reporting. This will depend on both the complexity of the research methods and the availability of the people or groups identified to collect the information, interpret it and then report it in ways able to guide action.

 Some data can and should be easily collected quarterly or semiannually.

 Other data will be difficult to collect - remembering the threats to accessibility - and data collection and reporting may only be practical on a yearly basis. Some data may be available only in a summative way, at the beginning and end of the project or programme period. Such data may tell us something about the beginning and end of a project, but will not help much if we are trying to learn whether we are making progress, incrementally, towards a result.

 It is important to match the availability of people or groups with appropriate research skills to the frequency of data collection.

7. Responsibility: Who will collect the indicator data, who will interpret it, who will manage the data collection process, and who will use the indicator data for policy or management decisions? Following from #5 (research methods) and #6 (frequency of data collection) above, it is key to identify the specific people or groups who will be responsible for indicator data collection, analysis and reporting, and to determine whether they have the research skills appropriate for the agreed research methods.


 It is a common but dangerous error to make assumptions about the availability of research skills relevant to the collection of indicator data. It is important to verify, not simply assume, the availability of people who know how to conduct and interpret interviews; people who know how to intelligently and thoroughly analyse documents and check the data trail in those documents; and people who know how to conduct surveys and provide usable data from them.

 If the people identified as responsible for collecting indicator data do not have the appropriate research skills, we can:

1. Provide training on research methods;
2. Find other research methods;
3. Find other (external) researchers; or
4. Choose other indicators that require simpler research methods for which we know we have available expertise.

8. Cost: Determine how much it will cost to collect the indicator data.
Money is not the only important resource: time and opportunity costs
are vitally important.

 Cost of indicator data collection includes:

 Our organization’s staff time for doing the actual data collection;

 Time for data analysis and reporting;

 Money, for the logistics of the collection and for any external
expertise required for either doing the research or training
staff to do it.

 Even if we can find appropriate people to collect data, the cost - in time, money, or interference with other work - may be too high to make this practical, given the realities of our organization’s capacity and budget. If this is the case, we may need to find another indicator. A rough, hypothetical illustration of this kind of cost calculation is sketched below.
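
As a purely hypothetical illustration (all figures below are invented for this guide, simply to show the arithmetic involved):

 Staff time: if two field staff each spend three days per quarter collecting and analysing interview data, that is 24 staff days per year;

 Money: if a staff day is costed at, say, 100 dollars, data collection for this indicator costs roughly 2,400 dollars per year, before travel and other logistics;

 If an external researcher must be hired instead at, say, 300 dollars per day, the same work could cost around 7,200 dollars per year - a figure that may or may not be reasonable against the project’s total monitoring budget.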

Attached is an indicator assessment worksheet which may help in assessing the viability of proposed indicators.

Next steps

When we have finished this analysis, the next session will introduce simple ways of
organizing and reporting indicator data.


Indicator Assessment Worksheet

The worksheet has nine columns. For each result and proposed indicator, it asks the following questions under each column heading:

Results and proposed indicator
 Result: What will change because of the activity or programme?
 Indicator: What is the proposed information that will tell us if the change is occurring?

1. Performance indicators - technical validity
 Will the data be useful to judge performance?

2. Baseline indicators
 Can information be collected soon for the baseline? Test this by collecting it.

3. Information accessibility
 Do you know where the information will come from?
 Is the information really accessible? What are the potential barriers to data collection (language, culture, security, geography, physical availability, format)?
 Test accessibility by collecting data for the baseline.

4. Relevance & beneficiaries
 Who will be reached by the development activity directly? Who will benefit indirectly?
 Do the indicator data relate to these people, or to a broader population?

5. Methods
 What research methods will be used to collect the information?

6. Frequency of data collection
 Who will use the indicator data?
 How often will the data be collected and reported? Is this schedule feasible?

7. Responsibility
 Who will collect the data? Do these people have the skills and the time to do the job?
 Who will analyse and report on the information?

8. Cost of data collection and analysis
 How much will it cost, in time or money, for your own staff to collect and analyse indicator data?
 How much will it cost if you have to train your staff?
 How much will it cost if you have to hire external experts to do either data collection or training?
 Are total costs for collecting and analysing data on this indicator reasonable, given your total budget and the limitations on your staff’s time?


Blank worksheet rows, with the same nine columns, are provided to be completed for each of your own results:

Result 1:

Result 2:

Result 3:




Resources:

 The Governance Assessment Portal of UNDP’s Oslo Governance Centre, which has a number of papers on indicator development for democratic governance, human rights-based programming, local governance and public administration reform, among others.

 Other planning and management tools relevant to achieving and assessing development results,
from the Overseas Development Institute.

 The Monitoring and Evaluation News website, which has a wide range of tools and discussion of issues relevant to the evaluation of development results, including indicators and RBM.

 Examples of evaluations on a range of development topics by different donor agencies from the
UNESCO Internal Oversight portal.

 Four trusted professional resources for practical monitoring and evaluation, field management of
governance, conflict resolution and rural development projects, all incorporating a results-based
management approach.

 Common definitions of results-based management terms.

 A discussion of the compatibility of results-based management and participatory evaluations.

 From SIDA, a publication on how Appreciative Inquiry could be used with the logical framework approach (which is commonly used in results-based management), and another on the basics of the logical framework method.

 More detail about the results-based management training from which this indicator development
guide is derived.

 Greg Armstrong’s profile and curriculum vitae.
