ORIGINAL PAPER

Evaluation Logics in the Third Sector


Matthew Hall
Published online: 8 November 2012
© International Society for Third-Sector Research and The Johns Hopkins University 2012
Abstract In this paper I provide a preliminary sketch of the types of logics of evaluation in the third sector. I begin by tracing the ideals that are evident in three well-articulated yet quite different third sector evaluation practices: the logical framework, most significant change stories, and social return on investment. Drawing on this analysis, I then tentatively outline three logics of evaluation: a scientific evaluation logic (systematic observation, observable and measurable evidence, objective and robust experimental procedures), a bureaucratic evaluation logic (complex, step-by-step procedures, analysis of intended objectives), and a learning evaluation logic (openness to change, wide range of perspectives, lay rather than professional expertise). These logics draw attention to differing conceptions of knowledge and expertise and their resource implications, and have important consequences for the professional status of the practitioners, consultants, and policy makers that contribute to and/or are involved in evaluations in third sector organizations.
M. Hall (&)
London School of Economics and Political Science, Houghton St, London WC2A 2AE, UK
e-mail: m.r.hall@lse.ac.uk
Keywords Evaluation · Logics · Performance measurement · Accountability · Expertise
Introduction
Performance measurement and evaluation is an important and increasingly
demanding practice in the third sector (Reed and Morariu 2010; Benjamin 2008;
Carman 2007; Eckerd and Moulton 2011; Charities Evaluation Service 2008;
Bagnoli and Megali 2011). There is a focus on how to improve the measurement
and evaluation of third sector organizations through multidimensional frameworks
(Bagnoli and Megali 2011) and balanced scorecards (Kaplan 2001) and through
linkages to strategic decision making (LeRoux and Wright 2011). Performance
measurement and evaluation is also used for a variety of purposes, such as
demonstrating accountability to donors and beneficiaries, analyzing areas of good
and bad performance, and promoting the work of third sector organizations to
potential funders and a wider public audience (Ebrahim 2005; Barman 2007; Roche
1999; Fine et al. 2000; Hoefer 2000; Campos et al. 2011). This wide range of
purposes can result in a field that is often cluttered with new ideas, novel approaches
and the latest toolkits (Jacobs et al. 2010).
A diversity of approaches can generate debate about the design of particular
performance measurement and evaluation techniques and the relative merits of the
different types of information that they produce (Charities Evaluation Service 2008;
Reed and Morariu 2010; Waysman and Savaya 1997; Eckerd and Moulton 2011). A
common disagreement in such debates concerns the claim that case studies and stories
can be subjective whereas performance indicators and statistics are more objective
(e.g., see discussion in Jacobs et al. 2010; Wallace et al. 2007; Abma 1997; Porter and
Kramer 1999). Of course, in the context of a particular third sector organization
with particular stakeholder demands, it is likely that some methods can produce
information that is considered more reliable and valid than others. What is of interest
here, however, is that such disagreements are likely to reflect (at least in part) a
normative belief in the superiority of particular approaches to performance
measurement and evaluation, rather than the inherent strengths and weaknesses of
any particular technique. Within the literature, however, there is little explicit analysis
of the normative ideals that underpin performance measurement and evaluation
practices in third sector organizations (Bouchard 2009a, b; Eme 2009). As such, the
purpose of this paper is to use an analysis of the ideals evident in different evaluation
approaches to develop a tentative sketch of the different logics of evaluation in the
third sector. Such an approach can advance understanding of performance measurement and evaluation practice and theory in the third sector in three ways.
First, it directs attention to the normative properties of performance measurement
and evaluation approaches by focusing on logics, that is, the broad cultural beliefs
and rules that structure cognition and guide decision making in a field (Friedland
and Alford 1991; Lounsbury 2008; Marquis and Lounsbury 2007). Under this
approach, multiple evaluation logics can create diversity in practice and an
argumentative battlefield regarding which evaluation practices are most appropriate
(Eme 2009). Such an approach indicates that seemingly technical debates, such as
the relative merits of indicators and narratives, can be manifestations of deeper
disagreements about what constitutes the ideal evaluation process. Developing an
understanding of evaluation logics is important because the literature on third sector
performance measurement and evaluation is under-theorized and lacking conceptual
framing (Ebrahim and Rangan 2011).
Second, it focuses attention on how different evaluation logics can privilege
different kinds of knowledge and methods of knowledge generation (Bouchard
Voluntas (2014) 25:307336 309
1 3
2009a; Eme 2009). This is critical because evaluations provide an important basis
from which third sector organizations seek to establish and maintain their legitimacy
in the eyes of different stakeholders (Ebrahim 2002, 2005; Enjolras 2009). As such,
claims of illegitimacy regarding particular evaluation approaches may be explained,
at least in part, by an analysis of conflicting logics of evaluation. For example,
stakeholders are increasingly demanding that evaluation information be quantitative
in nature and directed towards demonstrating the impact of third sector organizations
(e.g., McCarthy 2007; LeRoux and Wright 2011; Benjamin 2008), an approach that
can conflict with techniques focused on dialogue and story-telling. It is here that an
analysis of evaluation logics can make explicit the ideals that generate preferences for
particular forms of knowledge and information in the evaluation process. This may
help stakeholders to negotiate and potentially reconcile differences by balancing or
blending different types of information and methods of knowledge generation (e.g.,
Nicholls 2009; Waysman and Savaya 1997), whilst also illuminating the ways in
which particular approaches may be fundamentally incompatible.
The third and final implication concerns the role of expertise in the evaluation of
third sector organizations. This can have important consequences for the role of the
evaluator in the evaluation process, and thus the professional status, knowledge and
skills of the practitioners, consultants, and policy makers that are involved in
evaluations in third sector organizations. It is also critical to issues of power, because
particular conceptions of expertise and valid information can serve to elevate the
interests of certain actors in third sector organizations whilst disenfranchising others
(Ebrahim 2002; Greene 1999; Enjolras 2009). This can be through the creation of
evaluation techniques that have a particular exclusionary mystique attached to them
(Crewe and Harrison 1998), for example, the use of words and concepts that are
difficult to translate across languages and cultures (Wallace et al. 2007). Differing
levels and types of expertise also have resource implications, which is a critical issue
for third sector organizations that are increasingly required to use more sophisticated
evaluation approaches but with limited (or no) funding for such purposes. In this way,
greater understanding of the level and types of expertise advanced by particular
evaluation logics is important in illuminating how performance measurement and
evaluation can affect whose knowledge and interests are considered more legitimate in
third sector organizations.
The remainder of the paper contains three sections. In the next section I outline
the analysis of three evaluation techniques, the logical framework, most significant
change technique, and social return on investment. Following this, section three
draws on this analysis to provide a preliminary sketch of the different types of
evaluation logics in the third sector, namely, the scientific, bureaucratic and learning evaluation logics. The fourth and final section discusses the implications of the
analysis and concludes the paper.
Analysis of Evaluation Techniques
I selected three different evaluation techniques using two criteria. The first criterion
was that the technique was well articulated and there was evidence of its use
310 Voluntas (2014) 25:307336
1 3
(although to varying degrees) within third sector organizations. The second criterion
was that the techniques exhibit clear differences in evaluation approaches to create
variation in the analysis of evaluation ideals. As such, I use an approach that is akin
to purposive sampling, that is, selecting evaluation techniques to maximize
variation, rather than seeking to obtain a representative sample from the wider
population of evaluation approaches. This approach is consistent with the aim of
developing a tentative sketch of different types of evaluation logics in the third
sector (rather than an exhaustive catalog).
Using these two criteria, I chose three techniques for analysis: (1) the Logical
Framework approach (LFA), (2) the Most Significant Change (MSC) technique, and
(3) Social Return on Investment (SROI). All three techniques are well articulated
through an assortment of 'how to' guides (detailed below) and there is evidence that
the techniques (or those that are very similar) are used within third sector
organizations (e.g., see Charities Evaluation Service 2008; Wallace et al. 2007;
Campos et al. 2011; Reed and Morariu 2010; Carman 2007; Eckerd and Moulton
2011; Jacobs et al. 2010; Fine et al. 2000; Hoefer 2000). Whilst the three techniques
share some common features, they present quite different approaches to evaluation,
particularly concerning the focus on quantitative and qualitative types of data, the
level of technical sophistication, the role for consultants and external experts, and
the level of participation by different stakeholders.
The empirical focus is the texts that originally described the techniques, typically
in the form of 'how to' guides. The focus on 'how to' guides is important in
examining the techniques in their normative form, for understanding the ways in
which the techniques themselves were developed, and for analyzing the core
assumptions and epistemologies that underlie the different techniques.
For the LFA, I analyzed the text that first explicated the technique, that is, Rosenberg and Posner's The Logical Framework: A Manager's Guide to a Scientific Approach to Design and Evaluation (hereafter RP 1979). To analyze the MSC I examined two texts by Davies and Dart, A Dialogical, Story-Based Evaluation Tool: The Most Significant Change Technique (hereafter DD 2003a) and The Most Significant Change (MSC) Technique: A Guide to its Use (hereafter, DD 2005). Finally, for the
SROI, I analyzed two texts: the first by the originators of the technique, the Roberts
Enterprise Development Fund, entitled SROI Methodology: Analyzing the Value of
Social Purpose Enterprise Within a Social Return on Investment Framework
(hereafter, REDF 2001) and a second text by the New Economics Foundation (who
helped to introduce the SROI technique to the United Kingdom) entitled Measuring
Real Value: a DIY guide to Social Return on Investment (hereafter, NEF 2007).
1 Whilst the analysis is focused upon these texts, I also examined a variety of other guide books and texts
for each of the techniques. For the LFA: DFID (2009) Guidance on using the revised logical framework;
SIDA (2004) The Logical framework approach; BOND (2003) Logical framework analysis; W.K.
Kellogg Foundation (2004) Logic model development guide; World Bank (undated) The Logframe
handbook. For the MSC: Dart and Davies (2003b) MSC Quick Start Guide; Clear Horizons (2009) Quick
start guide MSC design; Davies (1998) An evolutionary approach to organizational learning. For the
SROI: Olsen and Nicholls (2005) A framework for approaches to SROI analysis; NEF (2008) Measuring
value: a guide to social return on investment (SROI) 2nd edition; NPC (2010) Social return on investment
position paper; Office of the Third Sector UK (2009) A guide to social return on investment.
I frame the analysis around a set of factors, such as the material outputs of the
technique, its origins, the preferred methods of producing knowledge, and the role
envisioned for outside experts. Table 1 provides a full list of the factors, and the
corresponding analysis of each evaluation technique.
Logical Framework
The LFA was developed throughout the 1970s as a planning and evaluation tool
primarily for use by large bilateral and multi-lateral donor organizations, such as
USAID, Department for International Development (UK), the United Nations
Development Program and the European Commission. The technique spread to
many third sector organizations as they began increasingly to receive funds from
these and other donor organizations that used the LFA. At the heart of the LFA is
the 4 × 4 matrix. On the vertical, the project is translated into a series of categories,
namely, inputs, outputs, purpose, and goal. On the horizontal, each category is
described using a narrative summary, objectively verifiable indicators, the means of verification, and the listing of important assumptions.
The principal designers and proponents of the LFA were Rosenberg and Posner
of the consulting firm Practical Concepts Incorporated in the United States. The title
of their text is revealing as the concepts of logic and science are foregrounded,
corresponding to the origins of the LFA, which is derived from the management of
complex space age programs, such as the early satellite launchings and the
development of the Polaris submarine (RP 1979, p. 2).
The LFA was developed in response to two perceived problems with existing
evaluation approaches. First, they were viewed as unclear and subjective, where
planning was 'too vague…evaluators could not compare, in an objective manner, what was planned with what actually happened' (RP 1979, p. 2). Second,
evaluations were sites for disagreement that was considered unproductive:
evaluation was 'an adversarial process…evaluators ended up using their own judgement as to what they thought were good things and bad things' (RP 1979, p. 2).
Given the origins of the LFA, it draws directly on what is labeled the 'basic scientific method', which involves viewing projects as a set of interlocking hypotheses: 'if inputs, then outputs; if outputs, then purpose' (RP 1979, p. 7). It is these
interlocking hypotheses that provide the LFA with some of its most recognizable
features. First, the translation of project activities and effects into the categories of
inputs, outputs, and purposes (or variations thereof). Second, the explicit outlining of a
linear chain of causality amongst these categories. The importance of the chain of
causality is highlighted visually in the text with a graphic representation, reproduced in Fig. 1.
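To make this structure concrete, the following is a minimal sketch (in Python, and not taken from RP 1979) of how a logframe's four vertical levels and four horizontal columns might be represented, and of how the vertical hierarchy reads as a chain of 'if, then' hypotheses. The class name, level labels, and the example project entries are invented for illustration.

```python
# Illustrative sketch of an LFA-style logframe: four levels (inputs, outputs,
# purpose, goal), each described by a narrative summary, objectively
# verifiable indicators, means of verification, and important assumptions.
# All example content below is hypothetical, not drawn from RP (1979).

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    level: str                                              # "inputs", "outputs", "purpose" or "goal"
    narrative_summary: str                                   # what the project does/achieves at this level
    indicators: List[str] = field(default_factory=list)      # objectively verifiable indicators
    means_of_verification: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)

logframe = [
    LogframeRow("inputs", "Training materials and field staff time",
                ["100 staff-days delivered"], ["timesheets"], ["funding released on schedule"]),
    LogframeRow("outputs", "300 farmers trained in irrigation methods",
                ["number of farmers completing training"], ["attendance registers"],
                ["farmers willing to attend"]),
    LogframeRow("purpose", "Trained farmers adopt improved irrigation",
                ["% of trained farmers using new methods after 12 months"],
                ["follow-up survey"], ["water access remains stable"]),
    LogframeRow("goal", "Increased household income in the district",
                ["average crop income per household"], ["district statistics"], []),
]

# The vertical hierarchy is read as a chain of linked hypotheses.
for lower, higher in zip(logframe, logframe[1:]):
    print(f"if {lower.level}, then {higher.level}")
```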
Another focus of the LFA is its concern that evaluation be based on evidence
(RP 1979, p. 3), which takes the form of Objectively Verifiable Indicators that are
the means for establishing what conditions will signal successful achievement of
the project objectives (RP 1979, p. 19). RP (1979) envisage the role for indicators
as follows:
Table 1 Analysis of evaluation approaches

Material output(s)
Logical framework: A 4 × 4 matrix. Arrayed vertically are inputs, outputs, purpose and goal. Arrayed horizontally are narrative summary, objectively verifiable indicators, means of verification, and important assumptions.
Most significant change: Most significant change stories.
Social return on investment: Social return on investment report, which features SROI metrics along with business data, descriptions of the organization/project, and case studies of participants' experiences.

Stated problem approach is addressing
Logical framework: Evaluators could not compare what was planned with what actually happened, and evaluation was an adversarial process.
Most significant change: Existing methods focused on intended outcomes using pre-defined indicators.
Social return on investment: Evaluation of social enterprises based on extent of grant making and fundraising and not on social value generated.

Origins
Logical framework: Complex space age and military programs.
Most significant change: Complex social change programs.
Social return on investment: Private sector investment analysis.

Purpose of evaluation
Logical framework: To compare the outcome of the project against pre-set standards of success.
Most significant change: To identify unexpected changes and uncover prevailing values. Avoid use of pre-set objectives and focus on making sense of events after they have happened.
Social return on investment: To use the estimated social return of an enterprise/project in evaluations of performance and/or funding decisions.

Preferred form of knowledge
Logical framework: Indicators that are objective and verifiable.
Most significant change: Stories and descriptions. Explicit aversion to indicators.
Social return on investment: Quantifiable, financial.

Preferred methods of producing knowledge
Logical framework: Scientific methods, specification of interlinked cause-effect relations in the form of 'if, then' statements, collection of pre-set indicators, usually by office staff.
Most significant change: Story-telling and thick description by those closest to where changes are occurring. Purposive sampling.
Social return on investment: Conduct a study with independent and objective researchers. Use statistically robust sampling procedures.

Abstraction from underlying activities
Logical framework: Project activities are translated into a hierarchy of objectives, namely inputs, outputs, purpose and goal.
Most significant change: Activities are translated into stories that are told by those closest to the activities.
Social return on investment: Activities translated into financial terms and then converted into an index.

Role of external advisers
Logical framework: None stated.
Most significant change: Judges of 'best' stories drawn from within the organization. No explicit role for external experts.
Social return on investment: External persons can act as researchers, conduct analysis and provide analytical support.

Expertise
Logical framework: Completion of 4 × 4 matrix. Skills in project design and specification of objectives and indicators for each of the four levels of the hierarchy. Evaluators need minimal skill beyond checking how actual results compare to pre-set standards.
Most significant change: Technique requires no special professional skills.
Social return on investment: Calculation of SROI metrics and indices. Knowledge of concepts such as time value of money, discounted cash flows, attribution, deadweight and displacement.

Conflict
Logical framework: To be reduced/avoided through specification of clear objectives and accountabilities.
Most significant change: To be addressed explicitly by providing a forum for surfacing and debating prevailing values.
Social return on investment: Not explicitly addressed.

Method of learning
Logical framework: Deviations from pre-set standards provide opportunity to adjust programs.
Most significant change: Focus on unexpected and exceptional events provides scope for learning.
Social return on investment: Use SROI reports to analyze how social value is generated.
Indicators demonstrate results…we can use indicators to clarify exactly what
we mean by our narrative statement of objectives at each of the project levels
(RP 1979, p. 19).
This quote is revealing as the emphasis on indicators as demonstrating results is
viewed as establishing a performance specification such that 'even skeptics would agree that our intended result has been achieved' (RP 1979, p. 21). Here, the role of
indicators is effectively to prove to outsiders that results have indeed been achieved.
In addition, the emphasis on clarifying objectives relates directly to the desire to
avoid 'misunderstanding or different interpretations' by those involved in the
project (RP 1979, p. 18). This is closely linked to what RP (1979) viewed as one of
the key problems with extant evaluation practice: disagreements caused by
evaluators using their own judgements. In this sense, indicators are imbued with
an objective quality that evaluators do not possess, and hence are viewed as not
subject to disagreement.
This approach corresponds to wider discussion in RP (1979) on avoiding conflict,
where, in contrast to other evaluation approaches, the LFA:
creates a task-oriented atmosphere in which opportunities, progress and
problems that may impede that progress can be discussed constructively.
Because the manager knows he is not being held accountable for unrealistic objectives…He does not need to worry that he will be blamed for factors outside his control (RP 1979, pp. 29–30).
Here, the detailed specification of objectives and indicators has two effects. First, it is seen to remove any possibility for conflict and discussions are thus constructive. Second, it provides such clarity that being blamed for issues apparently outside one's control is no longer possible. This corresponds to a broader view on the
evaluation process, where the evaluation task is simply to collect the data for those
key indicators and evaluate the project against its own pre-set standards of success (RP 1979, p. 40).

Fig. 1 Graphic representation of linked hypotheses from The Logical Framework: A Manager's Guide to a Scientific Approach to Design and Evaluation (Rosenberg and Posner 1979)

The use of quotation marks for the word 'evaluation' is
revealing, implying that there is actually very little evaluation required; rather, a
simple comparison of actual results to pre-set standards. This reduces the role of the
evaluator to somewhat of a fact-checker, and one could easily imagine this role
being performed by very junior staff or even a computer. More fundamentally, it
suggests that real expertise relates to designing projects, not their evaluation. The
expert is the one who conceives of projects, translates activities into the hierarchy of
objectives, clarifies cause-and-effect relations, and establishes objectively verifiable indicators. Here, the role for the evaluator is secondary, where they may be called in to advise on the feasibility of data collection and to reduce its cost (RP 1979,
p. 40).
Most Significant Change
The MSC technique was developed in the 1990s as an approach to evaluating
complex social development programs. At the center of the MSC is the regular
collection and interpretation of stories about important changes in the program that
are typically prepared by those most directly involved, such as beneficiaries, clients, and field staff. The MSC technique has been used to evaluate programs in both
developing and developed economies.
The principal designers and proponents of the MSC technique were Davies and
Dart. Both completed PhD projects concerning the MSC; Davies in the UK with
field work focused on a rural development program in Bangladesh, and Dart in Australia with field work focused on the agricultural sector in the state of Victoria.
The MSC technique was developed in response to several perceived deficiencies
in existing evaluation practice. First, that evaluations were focused on indicators
that are abstract (DD 2003a, p. 140) and do not provide a rich picture of what is
happening [but an] overly simplified picture where organizational, social and
economic developments are reduced to a single number (DD 2005, p. 12). The
most vivid distinction between approaches is made on the cover page of DD (2005),
where a picture shows a man standing opposite a woman and child: the man says 'we have this indicator that measures…', to which the woman replies 'let me tell you a story' (see Fig. 2).
The contrast is further reinforced through an analogy, which states that a
newspaper does not summarise yesterday's important events via pages and pages of indicators…but by using news stories about interesting events (DD 2005, p. 16).
Finally, the alternative names for the MSC also reveal its anti-indicator
foundations, which include labels such as 'Monitoring-without-indicators' and 'The story approach' (DD 2005, p. 8).
A further perceived problem with existing evaluation approaches is that they
focus on examining intended rather than unintended changes. Here, the MSC does
not make use of pre-defined indicators (DD 2005, p. 8), and the criteria for
2 For further information about the MSC, see http://mande.co.uk/special-issues/most-significant-change-msc/, and www.clearhorizon.com.au/flagship-techniques/most-significant-change/. More information, including example reports and applications, can be obtained by joining the MSC Yahoo group: http://groups.yahoo.com/group/MostSignificantChanges/.
selecting stories of significant change should not be decided in advance but should
emerge through discussion of the reported changes (DD 2005, p. 32). This change
in approach is most clearly specied in the contrast made between deductive and
inductive approaches:
Indicators are often derived from some prior conception, or theory, of what is
supposed to happen (deductive). In contrast, MSC uses an inductive approach,
through participants making sense of events after they have happened (DD
2005, p. 59).
Here the focus of evaluation is quite different to techniques like the LFA, in that
there is no comparison to a pre-defined set of objectives. The focus is on identifying,
selecting, and interpreting stories after events have taken place. The evaluation
process itself becomes much more open-ended, particularly as expected outcomes
do not frame (at least explicitly) the evaluation process. This is reinforced by
guidance that only steps four, five and six (of 10) are essential to the MSC approach,
with the possibility of excluding other steps if not deemed necessary by the
organizational context and/or reasons for using MSC.
The nature of expertise is also quite different, in that it concerns the development
and interpretation of significant change stories after the fact. As such, the evaluator's task in the MSC is to encourage people to write stories, to help with their
selection, and to motivate and inspire others during the evaluation process. This is
likely to require skills in narrative writing, facilitation of groups, and interpreting
ambiguous events, whereas developing pre-defined indicators is likely to require skills in project design, performance measurement, verification methods, and
quantitative data collection.
Fig. 2 Image from cover page of The Most Significant Change (MSC) Technique: A Guide To Its Use
(Davies and Dart 2005)
The shift in the nature of expertise in the MSC technique is also one that
requires 'no special professional skills' (DD 2005, p. 12) and is designed to
encourage non-evaluation experts to participate (DD 2003a, p. 140). This is
reinforced through the emphasis given to the gathering of stories by field staff and beneficiaries rather than evaluation experts, where the MSC gives those closest to the events being monitored (e.g., the field staff and beneficiaries) the right to
identify a variety of stories that they think are relevant (DD 2005, p. 60).
The MSC is also overtly political, in the sense that disagreement and conflict
are encouraged. For example, the story selection process:
involves considerable dialogue about what criteria should be used to select
winning stories…The choice of one story over another reinforces the
importance of a particular combination of values. At the very least, the
process of discussion involved in story selection helps participants become
aware of and understand each other's values (DD 2005, p. 63).
This quote reveals how the surfacing of and discussion about different values is of
critical importance in the MSC. In fact, DD (2003a, p. 138) go so far as to argue that
the deliberation and dialogue surrounding the selection of stories is the most
important part of the MSC technique. Finally, the methods of producing knowledge
under the MSC are qualitative in nature, relying on interviews, group discussions,
and narrative.
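As an illustration only, the sketch below caricatures the selection flow that the MSC guidance describes: stories are written by those closest to the activities and passed up through selection panels, each of which chooses the story it regards as most significant. The two-level panel structure, the story names, and the reduction of selection to a simple vote are simplifying assumptions of this sketch; in practice selection happens through facilitated discussion and the reasons for each choice are recorded and fed back.

```python
# Hypothetical sketch of MSC-style story selection; not taken from DD (2005).
from collections import Counter

field_stories = {
    "site A": ["story A1", "story A2", "story A3"],
    "site B": ["story B1", "story B2"],
}

def select_most_significant(stories, panel_votes):
    """Return the story the panel selects, here reduced to a majority vote."""
    winner, _ = Counter(panel_votes).most_common(1)[0]
    assert winner in stories
    return winner

# First-level panels select one story per site (the votes are made-up examples).
site_selections = {
    "site A": select_most_significant(field_stories["site A"],
                                      ["story A2", "story A2", "story A1"]),
    "site B": select_most_significant(field_stories["site B"],
                                      ["story B1", "story B1"]),
}

# A higher-level panel then selects among the site winners; in MSC the reasons
# for the choice would be documented and discussed back down the hierarchy.
organization_selection = select_most_significant(
    list(site_selections.values()),
    ["story A2", "story B1", "story A2"])

print(site_selections, organization_selection)
```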
Social Return on Investment
SROI was developed in the 1990s as an approach to analyzing the value created by
social enterprises. The principal designer and proponent of SROI was the Roberts
Enterprise Development Fund in the USA. They developed the approach to provide
an estimate of the social value generated by social enterprises that they had funded.
At the center of the SROI technique is the production of an SROI report, which is
envisioned to include a set of SROI metrics, along with organizational data, project
descriptions, and case studies of participant experiences. SROI has been used
predominately in developed economies such as the USA, but has also been
introduced to the UK through, for example, the New Economics Foundation.
SROI was developed in response to concerns over existing approaches to
philanthropy and their associated techniques of evaluation. In particular, REDF
(2001, pp. 10–11) distinguish 'Transactive Philanthropy' from 'Investment Philanthropy', where the problem with transactive philanthropy is that:
success is defined as the amount of one's perceived value created in the sector…the number of grants given and by the size of one's assets. There is
often no real connection made between the dollars one provides third sector
organizations and the social value generated from that support (REDF 2001,
p. 10).
This quote highlights concerns over the definition of success used by transactive
philanthropists, and contrasts this to an investment philanthropy approach that
makes much stronger connections between the provision of funds and social value
generated. It is here that SROI becomes linked to investment philanthropy and its
ideals of long-term value creation. The title of the NEF (2007) text is revealing in
this respect, as it refers to measuring 'real' value, giving the impression that it is
only through the use of SROI that the true effects of projects are captured.
The origins of the SROI are rooted in private sector evaluation approaches and
their associated techniques, where special attention is given to the application of
traditional, for-profit financial metrics to non-traditional, nonprofit, social purpose enterprises (REDF 2001, p. 7). These traditional financial metrics include 'standard investment analysis tools…discounted cash flow…net present value analysis' (REDF 2001, p. 14). The private sector linkages also extend to the presentation of the analysis, where SROI reports are similar to for-profit company
stock reports (REDF 2001, p. 13).
Given the application of financial metrics, a key focus of SROI is quantifying, in financial terms, the outcomes of the organization and/or its projects. In particular,
the original formulation of the SROI required the calculation of six SROI metrics:
These first three metrics measure what a social purpose enterprise is
returning to the community. The next three metrics compare these returns
against the philanthropic investments required to generate them. This
comparison of returns generated to investments is articulated in the Index of Return…An Index greater than one shows that excess value is generated. If the
Index is less than one, value is lost (REDF 2001, p. 18).
Here, the SROI metrics make two connected translations. First, the effects of the
activities of a social enterprise are expressed in a financial return. Second, this
measure of return, along with a measure of investment, is used to make a further
translation into an index. It is at this stage that the index is used to make an
evaluation of the enterprise and/or project, that is, whether it lost or created value, as
illustrated in Fig. 3 below:

Fig. 3 Description of the SROI ratio from Measuring Real Value: a DIY guide to Social Return on Investment (NEF 2007)

Whilst both the REDF (2001) and NEF (2007) guides show the mechanics of these translations in some detail, actually conducting such analysis does require certain skills, such as numeracy, financial literacy, an affinity with Excel, and an understanding of concepts such as the time value of money and discounted cash flows. Skills in planning are also required, with the SROI approach following a sequence of steps. For example, NEF (2007) present the '10 stages of an SROI analysis', with the end of most stages requiring the completion of checklists to ensure that the stage is properly completed before proceeding to the next stage in the process.
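The stage-gated character of this guidance can be sketched in a few lines; the stage names and checklist items below are invented placeholders rather than NEF's actual ten stages, and the sketch only illustrates the idea that each stage must be signed off before the next begins.

```python
# Hypothetical sketch of a stage-gated, checklist-driven process; the stages
# and items are illustrative, not NEF's.
stages = [
    ("establish scope", ["stakeholders mapped", "boundaries agreed"]),
    ("map outcomes", ["outcomes listed", "indicators chosen"]),
    ("value outcomes", ["financial proxies assigned", "deadweight estimated"]),
]

def run_analysis(stages, confirm):
    for name, checklist in stages:
        unmet = [item for item in checklist if not confirm(name, item)]
        if unmet:
            raise RuntimeError(f"stage '{name}' incomplete: {unmet}")
        print(f"stage '{name}' complete, proceeding")

# Example run, treating every checklist item as confirmed.
run_analysis(stages, lambda stage, item: True)
```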
In the SROI approach there is also a concern with providing contextual
information to support interpretation of numerical data, highlighted in the guidance
that your final report should comprise much more than the social returns calculated…[you should use] supplemental information such as participant surveys
and other data that help to convey the story behind the results (NEF 2007, p. 53).
This quote is revealing, however, in that such data is viewed as supplemental and
being behind the results, and thus gives the clear impression that it is secondary to
the primary focus on metrics.
The SROI also seeks to adjust outcomes to reflect the influence of outside factors.
For example, outcomes of projects should account for attribution (NEF 2007, p. 27)
and be adjusted further for deadweight (NEF 2007, p. 27), that is, the extent to
which the outcomes would have happened anyway (NEF 2007, p. 27). This resonates
with the use of a control group to ensure that the observed effects were indeed caused
by the project (i.e., the treatment condition) and not exogenous factors.
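A hypothetical worked example may help to show how these adjustments and the Index of Return fit together arithmetically: projected social value flows are reduced for deadweight and attribution, discounted to present value, and compared with the investment. All figures, the single discount rate, and the simplified adjustment formula below are invented for illustration and are far cruder than the set of metrics laid out in REDF (2001) and NEF (2007).

```python
# Illustrative SROI-style arithmetic with made-up numbers; not REDF/NEF's method.
def present_value(cash_flows, discount_rate):
    """Discount a list of yearly flows (year 1, 2, ...) to today's value."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

gross_social_value = [40_000, 50_000, 50_000]   # projected yearly outcomes, in currency units
deadweight = 0.30        # share of outcomes that would have happened anyway
attribution = 0.20       # share of outcomes attributable to other actors
discount_rate = 0.05
investment = 75_000      # philanthropic investment being compared against

adjusted_flows = [cf * (1 - deadweight) * (1 - attribution) for cf in gross_social_value]
sroi_index = present_value(adjusted_flows, discount_rate) / investment

# An index above one indicates that adjusted, discounted social value exceeds
# the investment; below one, that value is 'lost' in REDF's terms.
print(round(sroi_index, 2))
```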
The SROI also employs specific terminology to describe the approach itself. In
particular, NEF (2007, p. 10) state, 'we use the word study to describe the process of preparing an SROI report' and 'we refer to the person doing the work as the SROI researcher'. Such a researcher is also imbued with particular characteristics, such
as independence and objectivity (NEF 2007, p. 36). In general, such terminology
shows a concern with presenting the SROI in a particular light, specifically that of
an independent and objective research study.
Outline of Evaluation Logics
Drawing on the above analysis, in this section I provide a preliminary sketch of the
types of logics of evaluation in the third sector, i.e., the broad cultural beliefs and
rules that structure cognition and shape evaluation practice in third sector
organizations (c.f., Friedland and Alford 1991; Lounsbury 2008; Marquis and
Lounsbury 2007). The evaluation logics were developed in three steps. In the first step I used the above analysis of the LFA, MSC, and SROI approaches to identify evaluation ideals. Steps two and three sought to broaden the analysis beyond these specific evaluation approaches. In the second step I analyzed a broader range of evaluation practices, such as scorecards (e.g., Kaplan 2001), outcome frameworks (e.g., Urban Institute 2006), participatory methods (e.g., Keystone Accountability, undated) and expected return methods (Acumen Fund 2007). In the third and final step I examined writings on evaluation practice in the third sector, drawn from the third sector, evaluation and social development literatures, to locate discussion of and/or reference to these evaluation ideals. Overall, this analysis resulted in the development of three logics of evaluation: a scientific evaluation logic, a bureaucratic evaluation logic and a learning evaluation logic.

3 The academic journals examined included Nonprofit and Voluntary Sector Quarterly, Voluntas, Nonprofit Management and Leadership, American Journal of Evaluation, Evaluation, Public Administration and Development, and the Journal of International Development.
In the
analysis that follows, I develop a set of ideals that are characteristic of each
evaluation logic, provide linkages to prior literature, as well as examples from
various evaluation techniques that illustrate the ideals. To help compare and contrast
each evaluation logic, the ideals are grouped according to one of four questions:
what makes a 'quality' evaluation, what characterizes a 'good' evaluation process,
what is the focus of evaluation, and what is the role of the evaluator. A summary of
each evaluation logic, its ideals and associated examples is provided in Table 2.
Scientific Evaluation Logic
The scientific evaluation logic echoes the scientific method, with a strong focus on
systematic observation, gathering of observable and measurable evidence, and a
concern with objective and robust experimental procedures. Its ideals are those of
proof, objectivity, anti-conflict and reduction, and the evaluator's role is that of a
scientist.
Of fundamental concern to the scientific evaluation logic is that the evaluation
process is focused on establishing proof, that is, that the claims made about the
effects of projects must be demonstrated by the use of evidence, and that alternative
explanations for those effects have been considered and ruled out. Surveys show
that third sector organizations are increasingly being asked to provide proof of
causality and attribute outcomes to specific interventions (Charities Evaluation
Service 2008), and to gather data that shows the concrete, tangible changes that
have resulted from the support of foundations (Easterling 2000). These concerns are
evident in the LFA, where verifiable indicators provide evidence of project effects.
In the SROI, there is explicit concern with taking account of external influences via
the concept of attribution. Furthermore, echoing the use of a control group in
randomized trials, the concept of deadweight demands consideration of what
effects would have transpired in the event that the project was not conducted.
Similarly, Acumen's Best Available Charitable Option (BACO) ratio requires discounting social impact to that which can be credited specifically to Acumen's financing (Acumen Fund 2007, p. 3).
Along with proof, evaluative judgements and data collection processes must be
objective under a scientific evaluation logic. That is, they are not influenced by the
personal feelings or preferences of the evaluator or other participants in the
evaluation process. This resonates with Fowler (2002), who argues that under a
hard science approach, attempts to evaluate NGO performance are characterized
as objective, in the sense that knowledge generated is independent of the persons
doing the observing. Similarly, Blalock (1999, p. 139) states that scientific
evaluations are those that are carried out by researchers independent of the
program being studied. Here, evaluators embody a professional expertise
4 Here the term scientific functions as a descriptive label to characterize a particular approach to evaluation. The use of this term makes no claims about the scientific merits or value of this approach relative to other evaluation approaches.
Table 2 Evaluation logics in the third sector

Scientific evaluation logic: evaluation echoes the scientific method, with a strong focus on systematic observation, gathering of observable and measurable evidence, and a concern with objective and robust experimental procedures.
Bureaucratic evaluation logic: evaluation is rooted in ideals of rational planning, with a strong focus on complex, step-by-step procedures, the limiting of deviations from such procedures, and analysis of the achievement of intended objectives.
Learning evaluation logic: evaluation privileges an openness to change and the unexpected, the incorporation and consideration of a wide range of views and perspectives, and a focus on lay rather than professional expertise.

1. What makes a 'quality' evaluation?
Scientific: Proof. Claims made about effects of projects must be proved through the use of evidence; alternative explanations must be accounted for. Examples: 'Indicators demonstrate results' (LFA); account for 'attribution' and 'deadweight' (SROI); discounting social impact (BACO).
Bureaucratic: Categorization. Activities and/or effects of projects must be translated into pre-defined categories. Examples: project activities translated into four categories, 'inputs', 'outputs', 'purpose' and 'goal' (LFA); value generated by projects translated into three categories, 'economic value', 'socio-economic value' and 'social value' (SROI); financial, customer, internal processes and learning perspectives (BSC); 'multidimensional framework for measuring and managing nonprofit effectiveness' (BSC); common outcome indicators (COF); 'Total Output' and 'Social impact' (BACO).
Learning: Richness. Evaluation should focus on analyzing the fullest possible range and variation of project effects. Examples: 'thick description' (MSC); impact of nonprofits can be 'intended or unintended, positive or negative' (IPAL); case studies of participants' experiences (SROI).

2. What characterizes a 'good' evaluation process?
Scientific: Objective. Evaluative judgements and data collection processes are not influenced by personal feelings or preferences. Examples: 'objectively verifiable indicators' (LFA); 'independent and objective researchers' (SROI). Anti-conflict. Avoidance of disagreements and conflicts during the evaluation process. Example: move away from evaluation as an 'adversarial process' (LFA).
Bureaucratic: Sequential. The evaluation proceeds according to a step-by-step process; each stage must be completed before moving to the next stage. Examples: the '10 stages of a NEF SROI analysis', use of 'checklists' in order to proceed to the next stage (SROI).
Learning: Egalitarian. The evaluation approach uses techniques and concepts that are readily understood by participants without need for training. Examples: 'no special professional skills' (MSC); use of 'story-telling' (MSC); use of 'change journals in which staff record the informal feedback and changes that they observe in their daily work' (IPAL); avoid 'lengthy academic evaluations and complex, meaningless statistical analyses' (COF).

3. What is the focus of evaluation?
Scientific: Reductive. Evaluation process aimed at representing outcomes of projects in a simplified form. Examples: 4 × 4 matrix (LFA); financial metrics and indices (SROI); scorecard (balanced scorecard); BACO ratio (BACO); outcome sequence chart (COF).
Bureaucratic: Intended effects. Evaluation focused on analyzing whether intended effects eventuated. Example: comparison of results against project objectives (LFA). Hierarchy. Focus on ordering aspects of the evaluation process (e.g., activities, outcomes and/or data) such that some are considered higher than and/or more important than others. Examples: 'hierarchy of project objectives' (LFA); 'supplemental information such as participant surveys' (SROI); overall 'mission' at the 'top' of the scorecard (BSC); 'pyramid of indicators' (IPAL).
Learning: Belief revision. Evaluation focused on revising beliefs about the process and outcomes of projects and the evaluation technique itself. Examples: 'identify unexpected changes' (MSC); focus on 'what is working, why it is working, what must be sustained and what must be changed' (IPAL); use of outcome data 'to identify where results are going well and where not so well' (COF); 'learn more about how…input…contributes to social value creation' (SROI); step 10 of MSC is 'revising the system'.

4. What is the role of the evaluator?
Scientific: Evaluator as 'scientist'. The evaluator conducts research and reports study findings. Examples: evaluator uses a 'scientific approach' (LFA); evaluator as 'researcher' (SROI).
Bureaucratic: Evaluator as 'implementer'. The evaluator ensures that the specified evaluation process is adhered to. Examples: ensure activities are correctly categorized and indicators are verified (LFA); ensure stages are properly completed through use of checklists (SROI).
Learning: Evaluator as 'facilitator'. The evaluator helps others to participate in the evaluation process. Examples: 'champion' who 'facilitates selection of SC stories', 'encourage people', 'motivate people', 'answer questions' (MSC); organizations encouraged to use the method in a way that 'suits their needs and context' (IPAL).

BACO best available charitable option, BSC balanced scorecard, COF common outcome framework, IPAL impact planning, assessment and learning, LFA logical framework, MSC most significant change, SROI social return on investment
characterized by detachment and scientific rigor (Marsden and Oakley 1991). In the
context of specific evaluation approaches, the LFA stresses that indicators must be
verified objectively, and SROI states that evaluators should be independent and
objective researchers.
Under a scientific evaluation logic an ideal evaluation process is also anti-
conflict. Here, evaluations should be designed and conducted so as to avoid, as far
as possible, any conflict amongst evaluators and others involved in the process.
There is a belief that evaluation methods can be value-neutral and that they should
de-emphasize or ignore the political processes involved in evaluation (Marsden and
Oakley 1991; Jacobs et al. 2010). This links strongly to ideals of objectivity, in that
one way to avoid conflict is to ensure the objectivity of data and of evaluators. The
LFA exemplifies the ideal of anti-conflict, with its explicit desire to move away
from evaluation as an adversarial process.
A further ideal of the scientific evaluation logic is to be reductive. That is, the
tendency of an evaluation approach to represent project outcomes in a simplified
form. Being reductive involves the use of maps, models, matrices, and numerical
calculations to represent objects in social development projects (Crewe and
Harrison 1998; Scott 1998). A fundamental feature of the LFA is the construction of
a 4 × 4 matrix to represent the project and its outcomes, which provides a short and
convenient summary of a project, simplifying complex social situations and making
them relatively easy to understand (Jacobs et al. 2010). In SROI, whilst there is
concern with presenting case studies and background data, the core of the approach
is the calculation of metrics and indices to represent the social return of the project.
The use of monetization in SROI can reduce the complex information about third
sector organizations into data that can easily be compared and valued (Lingane and
Olsen 2004). The focus on reduction is also evident in several other techniques, such
as the production of a scorecard using the balanced scorecard (Kaplan 2001), the
calculation of the BACO ratio (Acumen Fund 2007) and the development of an
outcome sequence chart in the Common Outcomes Framework (Urban Institute
2006).
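To make the reductive ideal more concrete, the minimal sketch below (in Python) shows how monetization collapses a set of heterogeneous project outcomes into a single comparable figure. The outcome names, beneficiary counts, and monetary proxies are invented for illustration and do not reproduce any published SROI guidance.

```python
# Minimal sketch of the reductive ideal: heterogeneous outcomes are monetized
# and collapsed into a single figure. All names, counts and proxies are
# invented for illustration; this is not the published SROI methodology.

investment = 50_000.0  # total funds invested in a hypothetical project

# Each outcome is reduced to (number of beneficiaries, monetary proxy per person).
monetized_outcomes = {
    "participants moving into employment": (12, 8_000.0),
    "reduced use of crisis services": (30, 1_200.0),
    "improved self-reported wellbeing": (75, 400.0),
}

total_social_value = sum(n * proxy for n, proxy in monetized_outcomes.values())
ratio = total_social_value / investment

print(f"Monetized social value: {total_social_value:,.0f}")
print(f"Social value per unit invested: {ratio:.2f}")
```

The point of the sketch is simply that the texture of each outcome disappears into one number that is easy to compare across projects, which is precisely what the reductive ideal prizes and what critics of monetization object to.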
Finally, under a scientific evaluation logic the evaluator is conceived of as a
scientist, that is, an actor who conducts experiments, engages in research and
reports findings. This is exemplified in the LFA, where evaluators use a scientific
approach, and in SROI, where evaluators take on the role and label of researcher.
Bureaucratic Evaluation Logic
The bureaucratic evaluation logic is rooted in ideals of rational planning, with a
strong focus on complex, step-by-step procedures, the limiting of deviations from
such procedures, and analysis of the achievement of intended objectives. Its ideals
are those of categorization, sequential, intended effects, and hierarchy, and the
evaluator's role is that of an implementer. An ideal of the bureaucratic evaluation
logic is that of categorization. The most important feature of categorization is not
so much the use of categories per se, but that those categories are pre-defined and
thus standardized across all evaluations. In effect, categorization means that the
project is mapped to the demands of the evaluation approach, rather than each
project creating a (at least somewhat) unique set of categories. This resonates with
the way in which categorization is considered a key social process of bureaucracy
(Stark 2009). The LFA exemplifies the ideal of categorization, with project
activities translated into four pre-defined categories: inputs, outputs, purpose and
goal. The SROI also resonates with the categorization ideal, as the value generated
by projects is translated into three pre-existing forms: economic value, socio-
economic value and social value. Categorization is also a feature of several other
techniques, such as the Balanced Scorecard with its use of four pre-defined
perspectives, the development of a set of common outcome indicators in the
Common Outcome Framework (Urban Institute 2006), and the translation of social
value into the categories of total output and social impact in determining the BACO
ratio (Acumen Fund 2007).
With its roots in rational planning, the bureaucratic evaluation logic emphasizes a
sequential evaluation process. That is, evaluations should proceed according to a
step-by-step process, and, more importantly, it is necessary to complete each stage
before proceeding to the next, which corresponds to a perspective on social
development as a linear process (Crewe and Harrison 1998; Wallace et al. 2007;
Howes 1992). In the context of SROI, NEF (2007) presents the 10 stages of an SROI
analysis, with each stage following from the completion of all prior stages, and
evaluators are provided with checklists to ensure that stages are complete before
moving to the next stage. In contrast, the MSC approach states that only steps four,
five, and six (of 10) are essential, with the possibility of excluding other steps if not
deemed necessary by the organizational context and/or reasons for using MSC.
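The contrast can be sketched in code. The fragment below (Python) enforces the sequential ideal by refusing to run a stage until every earlier stage has been checked off; the stage names are abbreviated and hypothetical rather than the wording of any specific guidance.

```python
# Sketch of the sequential ideal: a stage may run only when every earlier stage
# has been checked off. Stage names are abbreviated and illustrative only.

STAGES = [
    "establish scope and stakeholders",
    "map inputs, outputs and outcomes",
    "evidence and value outcomes",
    "calculate and report",
]

def run_stage(target, checklist):
    """Run `target` only if all preceding stages are marked complete."""
    for stage in STAGES:
        if stage == target:
            print(f"Running stage: {stage}")
            return
        if not checklist.get(stage, False):
            raise RuntimeError(f"Cannot start '{target}': '{stage}' is incomplete")

checklist = {"establish scope and stakeholders": True,
             "map inputs, outputs and outcomes": False}

try:
    # Attempting to jump ahead is a deviation from the prescribed procedure.
    run_stage("calculate and report", checklist)
except RuntimeError as err:
    print(err)
```

An MSC-style relaxation would amount to dropping the completeness check for stages deemed non-essential.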
A bureaucratic evaluation logic also privileges the analysis of intended effects,
i.e., whether or not the effects of the project that were envisioned prior to its
completion did in fact eventuate. Here, evaluation is focused on the attainment of
pre-determined goals (Howes 1992) and is often associated with a downgrading of
the achievement of unintended effects, whether good or bad (Gasper 2000). The
focus on a limited set of outcomes can mean that the true complexity of a program is
frequently ignored in the information production process (Blalock 1999). The LFA
exemplifies this ideal, with its establishment of objectives and indicators at the
planning stage of projects in order to compare actual outcomes with those plans.
Hierarchy is a further ideal of the bureaucratic evaluation logic. This ideal
involves the creation of a ranking amongst aspects of the evaluation process, such as
project activities, results of the project and/or the types of information to be used in the
evaluation, such that some features are considered higher than and/or more important
than others. In the context of specific evaluation approaches, hierarchy can be
explicit or implicit. For example, in the LFA, there is an explicit hierarchy of project
objectives, where achieving what is intended at one level leads to the next one higher
up and so on until the final and ultimate goal is reached (Gasper 2000). In the SROI,
there is an implicit hierarchy of forms of data, whereby participant surveys and other
qualitative data are supplemental to other more quantitative forms of data. The
Balanced Scorecard also creates a hierarchy, with the overall mission at the top of
the scorecard (Kaplan 2001), and the Impact Planning, Assessment and Learning
method creates a pyramid of indicators with high-level outcome indicators at the
top and local programme level indicators at the bottom (Keystone, undated).
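A minimal sketch may help to illustrate this explicit ordering. The objective texts below are invented; only the ranking of levels and the reading of each level as feeding the one above it follow the description of the LFA given here.

```python
# Sketch of the hierarchy ideal: objectives are ranked, and achievement at one
# level is read as feeding the level above it. Objective texts are invented.

HIERARCHY = [  # ordered from lowest to highest level
    ("inputs/activities", "deliver 20 training workshops"),
    ("outputs", "150 participants complete the training"),
    ("purpose", "participants move into stable employment"),
    ("goal", "reduced poverty in the target community"),
]

for rank, (level, objective) in enumerate(HIERARCHY, start=1):
    print(f"Level {rank} ({level}): {objective}")
    if rank < len(HIERARCHY):
        print("  -> achieving this is read as leading to the next level up")
```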
Finally, the ideal evaluator under a bureaucratic logic is that of the implementer.
The evaluator's role is to ensure that the evaluation proceeds, as far as possible,
according to the specified methodology. The evaluator is limited to the collection of
data, providing a technically competent and politically neutral expert in order that
appropriate information is provided to the decision makers (Abma 1997). The
evaluator in an LFA is responsible for ensuring that activities are correctly
categorized and that indicators are both objective and verifiable. In the SROI, the
evaluator ensures that the stages of analysis are adhered to and that all necessary
activities have been undertaken via the use of checklists.
Learning Evaluation Logic
The learning evaluation logic privileges an openness to change and the unexpected,
the incorporation and consideration of a wide range of views and perspectives, and
a focus on lay rather than professional expertise. Its ideals are those of richness,
belief revision, and egalitarianism, and the evaluator's role is that of a facilitator.
An important ideal of the learning evaluation logic is that of richness, where the
evaluation process privileges analysis of the fullest possible range of and variation
in project effects. Here, evaluation and the social development process recognize the
importance of narratives, which are never final products but are always in a state of
becoming (Conlin and Stirrat 2008). Similarly, richness invites a rejection of
performance measures that can capture but only a small fraction of what is
important and invites a focus on the overall story, textures and nuances that can
reveal the multiple levels of human experience (Greene 1999). Such an approach
helps to guard against the so-called context-stripping, that is, evaluation as though
context did not exist but only under carefully controlled conditions (Guba and
Lincoln 1989). The central feature of the MSC approach exemplifies the ideal of
richness, with its focus on telling stories of events and providing what are called
thick descriptions. Richness is also clearly evident in the Impact Planning,
Assessment and Learning method, where it recognizes that change can be uncertain
and unpredictable and that the impact of third sector organizations can be intended
or unintended, positive or negative, and often both together (Keystone undated,
p. 3). To some extent, the SROI approach also attempts to convey a rich picture
through the production of an SROI report and case studies of participants'
experiences, and the Balanced Scorecard focuses on providing a multidimensional
framework for measuring and managing nonprofit effectiveness (Kaplan 2001,
p. 357).
Belief revision is a further ideal of the learning evaluation logic, where
evaluation is focused on uncovering the unexpected and on searching for deviations.
The object of belief revision can take several forms, such as how the project was
conducted and looking for ways to improve, examining what didn't work, what was
not achieved, and why, and analysis of the evaluation process itself and how it could
be refined. In this way, the ideal of belief revision has an outlook that is orientated
towards the future, where analysis of what has already happened is premised on
making changes to future plans and activities. Here, the evaluation process can
construct self-reflective moments that allow individuals to examine the realities they
confront, which creates an atmosphere of continual learning (Campos et al. 2011).
Similarly, evaluation can involve generating knowledge to learn and change
behavior, whereby evaluation systems help to improve programs by examining and
sharing successes as well as failures through engagement with stakeholders at all
levels (Ebrahim 2005). Aspects of the MSC approach are focused squarely on belief
revision, such as the emphasis on identifying unexpected changes that result from
projects, and an explicit concern with analyzing how the MSC process itself could
be revised. The Impact Planning, Assessment and Learning method also embodies
belief revision, with its focus on developing learning relationships and the use of
reflection to examine what is working, why it is working, what must be sustained
and what must be changed (Keystone undated, pp. 3, 10). Similarly, the Common
Outcome Framework suggests that outcome data should be used to identify where
results are going well and where not so well; this process is what leads to
continuous program learning (Urban Institute 2006, p. 15). The SROI approach
also embodies elements of this ideal, with a concern on using SROI analysis as a
means to learn more about how projects contribute to social value creation.
The learning evaluation logic is also egalitarian in nature, with an explicit
concern in ensuring that the evaluation process and its associated techniques are
readily understandable to a wide range of stakeholders. There should be minimal (if
any) need for training in specialized techniques or abstract concepts, and experts are
not required. The MSC approach is, to a large degree, built on the egalitarian ideal,
with its use of story-telling, a widely used and even intrinsically human practice,
and the insistence that the approach itself should require no special professional
skills. The Impact Planning, Assessment and Learning method also focuses on the
use of dialogue techniques and change journals in which staff record the informal
feedback and changes that they observe in their daily work (Keystone undated,
p. 9). In contrast, the language and approach of techniques like the LFA are often
experienced by field staff as alienating, confusing and culturally inappropriate
(Wallace et al. 2007). In addition, the use of techniques like random assignment and
matched comparison groups is exceedingly difficult to implement within most
third sector settings, particularly given the level of resources usually available for
evaluation (Easterling 2000). This is evident in the Common Outcome Framework
whereby there is criticism of classic program evaluation and its focus on lengthy
academic evaluations and complex, meaningless statistical analyses (Urban
Institute 2006, p. 3).
The ideal evaluator under a learning evaluation logic is that of the facilitator.
The evaluator's role is to help others to participate in the evaluation process, and to
make their tasks as easy as possible. This role has strong links to wider participation
and empowerment discourses, whereby outsiders with expert technical skills
should relinquish control and serve rather to facilitate a process of learning and
development (Howes 1992). Under this perspective, the evaluator works actively to
develop a transactional relationship with the respondents on the basis of
participation (Abma 1997) and can prepare an agenda for negotiation between
stakeholders, taking on the role of a moderator (Guba and Lincoln 1989). In the
MSC approach, the evaluator's task is to encourage people to write stories, to help
with their selection, and to motivate and inspire others during the evaluation
process. In the Impact Planning, Assessment and Learning method there is also a
focus on facilitating the involvement of constituents, and flexibility is emphasized
with organizations encouraged to use the method in a way that suits their needs and
context (Keystone undated, p. 4).
Discussion
In this paper I have provided a preliminary sketch of the types of logics of
evaluation in the third sector, namely, the scientic, bureaucratic, and learning
evaluation logics. The evaluation logics are akin to ideal-types (Weber 1904/1949)
in that they serve to highlight the types of beliefs and rules that structure the practice
of evaluation in the third sector. In this way, although the evaluation logics were
developed from analysis of particular evaluation practices, they will not correspond
to all the features of any particular evaluation approach. Indeed, a specific
evaluation practice can align with ideals from different evaluation logics, as is
shown in Table 2, where characteristics of SROI align with ideals from each of the
three evaluation logics.
Analysis of different evaluation logics indicates that many debates, such as
conflicts over the use of different forms of data (quantitative vs. qualitative data
being a common one) are, at least to some extent, manifestations of disagreements
about what constitutes an ideal evaluation process. This is important because many
ideals of the three evaluation logics sketched here may be potentially incompatible,
and it is these situations that are ripe for contestation over which evaluation practices
are most appropriate.
The scope for conflict becomes particularly evident through a comparison of how
the ideals of the different evaluation logics address the four evaluation questions
(see Table 2). In relation to what makes a quality evaluation, the logics differ
considerably in their responses. A scientific evaluation logic values proof and
accounting for alternative explanations, a bureaucratic logic values mapping projects
to pre-defined categories, and a learning evaluation logic values richness and
analysis of variation in project effects. Here, the ideals proof and richness may be
compatible, whereby an evaluation approach employs both indicators and case
studies, such as SROI's attempt to combine a focus on metrics with case studies of
participants' experiences. However, SROI has been criticized for overly focusing on
financial value at the expense of a fuller and more rounded understanding of project
effects (Durie et al. 2007). Categorization appears fundamentally incompatible with
richness, as it is the translation of activities and effects into pre-defined categories
that tends to preclude analysis of their nuances and variations, a criticism that has
been made of the LFA (Howes 1992).
The ideals that characterize a good evaluation process also differ, where a
scientific evaluation logic values an objective process that is free of conflict, a
bureaucratic evaluation logic values a sequential process, whereas a learning
evaluation logic is concerned that the process be egalitarian. The use of dialogue
and story-telling in particular is likely to generate conflict, because although its
requirement of little expertise resonates with the egalitarian ideal, such a practice is
likely to clash with the ideal of objectivity because stories are subjective
experiences. This is evident in criticisms directed at the MSC regarding the bias
towards good-news stories (Dart and Davies 2003a, b). There have been attempts to
combine a focus on sequential processes with an egalitarian framework, such as
linking elements of the logical framework with greater stakeholder involvement in
project planning and selection of project objectives (e.g., SIDA 2006). However,
this seems difficult in practice, for example, where the LFA is typically criticized
for placing the evaluation expert in a privileged position (Howes 1992).
Regarding the focus of evaluation, a scientific evaluation logic is aimed at
reducing and simplifying outcomes, a bureaucratic evaluation logic values both an
analysis of intended effects coupled with the creation of hierarchies, and a learning
evaluation logic is concerned with belief revision. In some respects, these ideals are
compatible in that being reductive need not preclude belief revision, as illustrated in
the SROI approach whereby SROI ratios can be used to learn more about social
value creation. Conversely, techniques like MSC have been criticized for not being
able to produce summary information to judge the overall performance of a
programme (Dart and Davies 2003a, b). Reconciling the ideals of intended effects
and belief revision may also be difficult, for example, as the LFA has been criticized
for creating a lock-frame that blocks opportunities for learning and adaptation
(Gasper 2000).
An understanding of these different evaluation logics thus reveals that they
privilege different kinds of knowledge and the desired process for knowledge
generation. This highlights the role of different epistemologies in debates about the
merits of different evaluation practices, and thus recognition of the way in which
different evaluation logics can have important implications for whose knowledge
and interests are considered more legitimate in third sector organizations (cf.
Ebrahim 2002; Greene 1999). This is critical in an arena where evaluations provide
an important basis from which third sector organizations seek to establish and
maintain their legitimacy in the eyes of different stakeholders (Ebrahim 2002, 2005;
Enjolras 2009).
Implications and Conclusion
This study has shown that developing an understanding of evaluation logics is
important given the lack of theorization and conceptual framing in research on
performance measurement and evaluation in the third sector (Ebrahim and Rangan
2011). An examination of evaluation logics helps to go beyond a first-degree level
understanding of evaluation techniques by highlighting the normative properties of
different evaluation approaches. To this end, the paper contributes to the literature
by providing a framework that can be used to dissect both existing and proposed
evaluation techniques. As shown in the paper, Table 1 can be used to understand the
characteristics of different evaluation techniques according to a set of common
factors, such as the material outputs, the stated purpose of the evaluation technique,
and the type and extent of expertise. The three evaluation logics and their associated
ideals, as outlined in Table 2, can be used to analyze the normative properties of
specific evaluation techniques. Collectively, this framework can help to develop a
more conceptual analysis of evaluation practices in the third sector.
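As a rough illustration of how such a dissection might be recorded, the sketch below (Python) codes a handful of SROI characteristics against the three logics, following the alignments discussed above; the particular coding is illustrative only.

```python
# Illustrative use of the framework: tag each observed feature of a technique
# with the evaluation logic it most closely aligns with, then summarize.
# The SROI feature list follows the alignments discussed in the text; the
# coding itself is illustrative rather than definitive.

from collections import Counter

sroi_features = {
    "independent, objective researcher role": "scientific",
    "monetized metrics and indices": "scientific",
    "ten sequential stages with checklists": "bureaucratic",
    "pre-existing categories of value": "bureaucratic",
    "ratios used to learn about social value creation": "learning",
    "case studies of participants' experiences": "learning",
}

alignment = Counter(sroi_features.values())
print("SROI features by evaluation logic:", dict(alignment))
```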
The framework also has implications for disputes that can arise over different
evaluation approaches. In particular, the evaluation logics can help to determine
whether debates over the merits of particular evaluation approaches (e.g., use of
qualitative and quantitative data or subjective/objective procedures) are more
methodological in nature (e.g., disagreements about the validity with which
particular data was collected and analyzed) or driven more by a particular
evaluation ideal or position (e.g., a belief in the superiority of particular forms of
data or methods of data collection). The ability to differentiate better between
methodological and ideological critiques may go some way towards exposing the
nature of the viewpoints advanced by particular evaluation techniques and/or
experts, and thus whether such disagreements can be resolved (I thank an anonymous reviewer for suggesting this line of reasoning).
For example, a
methodological critique could be addressed by changing the evaluation technique to
improve its validity, whereas an ideological critique is more likely to prove
intractable even in the face of adaptations to the evaluation methodology.
The analysis of different evaluation logics also reveals that they can have
important practical implications. In particular, generating evaluations that can
accommodate the ideals of different stakeholders may be very difficult, and require
careful design such that, even within a single evaluation, the ideals of different
evaluation logics can be accommodated as far as possible. Alternatively, recognition
of different logics of evaluation can also provide scope for third sector organizations
to push back against demands for specic types of evaluation, particularly those
from funders, by highlighting that the evaluation logics of such demands are
potentially fundamentally inconsistent with evaluation logics embodied in estab-
lished evaluation practices.
A related practical consideration is that of cost, particularly pertinent when
funding for evaluation activities is limited or non-existent. Although often not
explicitly addressed, the resource implications that stem from different evaluation
logics are likely to be of critical importance to third sector managers, and,
indirectly, for funders themselves. For example, conducting evaluations to satisfy
the ideal of proof under the scientific evaluation logic has been recognized as
complex and expensive, as reflected in the World Bank's development of guidance
to conduct impact evaluations in the presence of budget, time and data constraints
(World Bank 2006). In contrast, more egalitarian approaches are likely to require
less expertise and thus, from a cost perspective, can have distinct advantages. These
differing cost implications of the evaluation logics can, at least in part, be traced to
the origins of specific evaluation techniques. For example, elements of the scientific
and bureaucratic evaluation logics are located in techniques typically developed by
funders and donors (e.g., USAID and the LFA, REDF and SROI), a setting in which
the required money and expertise is perhaps more readily available and forthcom-
ing. In contrast, techniques that map very closely to the learning evaluation logic,
like the MSC, were developed in and for third sector organizations themselves (e.g.,
see Davies 1998), and, perhaps not surprisingly, appear more carefully attuned to
the demands that evaluation techniques can place on resources and expertise. In
general, an analysis of evaluation logics can help to reveal the differing resource
implications that can follow from advocating (or even requiring) particular
approaches to evaluation in third sector organizations.
A final practical implication concerns the knowledge and skills required of
evaluators. An evaluator as scientist is likely to require knowledge of the scientific
method and skills in quantitative data collection, experimental procedures and the
writing up of research findings, whereas an evaluator as facilitator needs knowledge of
forms of qualitative inquiry and skills in communication, interpersonal interactions and
mediation between groups with different interests/values. This can have important
consequences for the professional status of the practitioners, consultants, and policy
makers that contribute to and/or are involved in evaluations in third sector organizations.
Using an analysis of the LFA, SROI, and MSC evaluation techniques, this study
developed three different logics of evaluation in the third sector. This raises several
questions for future research. The first question concerns the way in which these
evaluation logics relate to evaluation practice, that is, to what extent are the three
evaluation logics illustrative of other evaluation techniques in the third sector? A
second and related question is: are there other logics of evaluation in the third
sector? A third and more broad-ranging question is: how do evaluation logics in the
third sector relate to evaluation logics more generally? In this regard, the three
evaluation logics appear to relate to more general evaluation approaches. For
example, the scientific and bureaucratic evaluation logics have affinity with what
Guba and Lincoln (1989) characterize as first (measurement) and second
(description) generation evaluation approaches, whereas the learning evaluation
logic appears to resonate with what they term fourth generation evaluation.
Future research could fruitfully explore these connections.
Through an analysis of specific evaluation techniques, this study has sought to
highlight the normative ideals of different evaluation approaches, and to provide a
preliminary sketch of the types of evaluation logics in the third sector. There is much
scope for further research to refine the conceptualization of the logics and/or develop
additional ones, to examine the ways in which other evaluation techniques reflect and
seek to reconcile potentially conflicting evaluation ideals, and to consider the
important role that different evaluation logics can play in promoting or discrediting
particular types of evaluation information and expertise in third sector organizations.
Acknowledgments The author thanks the anonymous reviewers, Emily Barman, Alnoor Ebrahim,
David Lewis, and participants at the 2011 Impact of evaluations in the third sector: values, structures and
relations conference and the Rationalization and professionalization of the nonprofit sector track at
EGOS 2012.
References
Abma, T. A. (1997). Playing with/in plurality: revitalizing realities and relationships in Rotterdam.
Evaluation, 3, 25–48.
Acumen Fund (2007). The Best Available Charitable Option. Retrieved September 19, 2011 from
http://www.acumenfund.org/uploads/assets/documents/BACO%20Concept%20Paper%20nal_
B1cNOVEM.pdf.
Bagnoli, L., & Megali, C. (2011). Measuring performance in social enterprises. Nonprofit and Voluntary
Sector Quarterly, 40, 149–165.
Barman, E. (2007). What is the bottom line for third sector organizations? A history of measurement in
the British voluntary sector. VOLUNTAS International Journal of Voluntary and Nonprofit
Organizations, 18, 101–115.
Benjamin, L. M. (2008). Account space: How accountability requirements shape nonprofit practice.
Nonprofit and Voluntary Sector Quarterly, 37, 201–223.
Blalock, A. N. (1999). Evaluation research and the performance management movement: from
estrangement to useful integration? Evaluation, 5, 117–149.
BOND. (2003). Logical framework analysis. Guidance notes No.4.
Bouchard, J. M. (2009a). The worth of the social economy. In J. M. Bouchard (Ed.), The worth of the
social economy: An international perspective (pp. 11–18). Brussels: P.I.E. Peter Lang.
Bouchard, J. M. (2009b). The evaluation of the social economy in Quebec, with regards to stakeholders,
mission and organizational identity. In J. M. Bouchard (Ed.), The worth of the social economy: An
international perspective (pp. 111–132). Brussels: P.I.E. Peter Lang.
Campos, J. G., Andion, C., Serva, M., Rossetto, A., & Assumpe, J. (2011). Performance evaluation in
non-governmental organizations (NGOs): An analysis of evaluation models and their applications in
Brazil. VOLUNTAS International Journal of Voluntary and Nonprofit Organizations, 22, 238–258.
Carman, J. G. (2007). Evaluation practice among community-based organizations: research into the
reality. American Journal of Evaluation, 28, 60–75.
Charities Evaluation Service. (2008). Accountability and learning: Developing monitoring and evaluation
in the third sector. London: Charities Evaluation Service.
Clear Horizons. (2009). Quick start guide: MSC design. Retrieved November 1, 2012 from http://www.
clearhorizon.com.au/wp-content/uploads/2009/02/quick-start_feb09.pdf.
Conlin, S., & Stirrat, R. L. (2008). Current challenges in development evaluation. Evaluation, 14,
193–208.
Crewe, E., & Harrison, E. (1998). Whose development? An ethnography of aid. London: Zed Books.
Dart, J., & Davies, R. (2003a). A dialogical, story-based evaluation tool: The most significant change
technique. American Journal of Evaluation, 24, 137–155.
Dart, J., & Davies, R. (2003b). Quick-start guide: A self-help guide for implementing the most significant
change technique (MSC). Retrieved November 1, 2012 from http://www.clearhorizon.com.au/wp-
content/uploads/2008/12/dd-2003-msc_quickstart.pdf.
Davies, R. (1998). An evolutionary approach to organisational learning: an experiment by an NGO in
Bangladesh. In D. Mosse, J. Farrington & A. Few (Eds.), Development as process: Concepts and
methods for working with complexity (pp. 68–83). London: Routledge.
Davies, R., & Dart, J. (2005). The most significant change (MSC) technique: A guide to its use.
Retrieved November 1, 2012 from http://www.mande.co.uk/docs/MSCGuide.pdf.
Department for International Development (DFID). (2009). Guidance on using the revised logical
framework. DFID Practice Paper.
Durie, S., Hutton, E., & Robbie, K. (2007). Investing in Impact: Developing Social Return on Investment.
Retrieved September 14, 2012 from http://www.socialimpactscotland.org.uk/media/1200/SROI-%
20investing%20in%20impact.pdf.
Easterling, D. (2000). Using outcome evaluation to guide grantmaking: Theory, reality, and possibilities.
Nonprofit and Voluntary Sector Quarterly, 29, 482–486.
Ebrahim, A. (2002). Information struggles: The role of information in the reproduction of NGO-funder
relationships. Nonprofit and Voluntary Sector Quarterly, 31, 84–114.
Ebrahim, A. (2005). Accountability myopia: Losing sight of organizational learning. Nonprofit and
Voluntary Sector Quarterly, 34, 56–87.
Ebrahim, A., & Rangan, V. K. (2011). Performance measurement in the social sector: a contingency
framework. Social Enterprise Initiative, Harvard Business School, working paper.
Eckerd, A., & Moulton, S. (2011). Heterogeneous roles and heterogeneous practices: Understanding the
adoption and uses of nonprofit performance evaluations. American Journal of Evaluation, 32, 98–117.
Eme, B. (2009). Miseries and worth of the evaluation of the social and solidarity-based economy: For a
paradigm of communicational evaluation. In J. M. Bouchard (Ed.), The worth of the social economy:
An international perspective (pp. 63–86). Brussels: P.I.E. Peter Lang.
Enjolras, B. (2009). The public policy paradox. Normative foundations of social economy and public
policies: Which consequences for evaluation strategies? In J. M. Bouchard (Ed.), The worth of the
social economy: An international perspective (pp. 43–62). Brussels: P.I.E. Peter Lang.
Fine, A. H., Thayer, C. E., & Coghlan, A. T. (2000). Program evaluation practice in the nonprofit sector.
Nonprofit Management and Leadership, 10(3), 331–339.
Fowler, A. (2002). Assessing NGO performance: Difficulties, dilemmas and a way ahead. In M. Edwards
& A. Fowler (Eds.), The Earthscan reader on NGO management (pp. 293–307). London: Earthscan.
Friedland, R., and Alford, R. R. (1991). Bringing society back in: Symbols, practices, and institutional
contradictions. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational
analysis (pp. 232–266). Chicago: University of Chicago Press.
Gasper, D. (2000). Evaluating the logical framework approach: Towards learning-oriented development
evaluation. Public Administration and Development, 20, 17–28.
Greene, J. C. (1999). The inequality of performance measurements. Evaluation, 5, 160–172.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park: Sage Publications.
Hoefer, R. (2000). Accountability in action? Program evaluation in nonprofit human service agencies.
Nonprofit Management and Leadership, 11(2), 167–177.
Howes, M. (1992). Linking paradigms and practice: Key issues in the appraisal, monitoring and
evaluation of British NGO projects. Journal of International Development, 4, 375–396.
Keystone (undated). Impact Planning, Assessment and Learning. Retrieved September 19, 2011 from
http://www.keystoneaccountability.org/sites/default/les/1%20IPAL%20overview%20and%20service%
20offering_0.pdf.
Jacobs, A., Barnett, C., & Ponsford, R. (2010). Three approaches to monitoring: Feedback systems,
participatory monitoring and evaluation and logical frameworks. IDS Bulletin, 41, 36–44.
Kaplan, R. S. (2001). Strategic performance measurement and management in third sector organizations.
Nonprofit Management and Leadership, 11(3), 353–371.
LeRoux, K., & Wright, N. S. (2011). Does performance measurement improve strategic decision making?
Findings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector
Quarterly (in press).
Lingane, A., & Olsen, S. (2004). Guidelines for social return on investment. California Management
Review, 46, 116–135.
Lounsbury, M. (2008). Institutional rationality and practice variation: new directions in the institutional
analysis of practice. Accounting, Organizations and Society, 33, 349–361.
Marquis, C., & Lounsbury, M. (2007). Vive la resistance: Competing logics and the consolidation of U.S.
community banking. Academy of Management Journal, 50, 799–820.
Marsden, D., & Oakley, P. (1991). Future issues and perspectives in the evaluation of social development.
Community Development Journal, 26, 315–328.
McCarthy, J. (2007). The ingredients of financial transparency. Nonprofit and Voluntary Sector
Quarterly, 36, 156–164.
New Economics Foundation (NEF) (2007). Measuring real value: A DIY guide to social return on
investment. London: New Economics Foundation.
New Economics Foundation (NEF) (2008). Measuring value: A guide to social return on investment
(SROI) (2nd ed.). London: New Economics Foundation.
New Philanthropy Capital (NPC). (2010). Social return on investment. Position paper. Retrieved
November 1, 2012 from http://www.thinknpc.org/publications/social-return-on-investment-position
-paper/.
Nicholls, A. (2009). We do good things, don't we? Blended value accounting in social entrepreneurship.
Accounting, Organizations and Society, 34, 755–769.
Office of the Third Sector (UK Cabinet Office). (2009). A guide to social return on investment. London:
Office of the Third Sector.
Olsen, S., & Nicholls, J. (2005). A framework for approaches to SROI analysis. Retrieved November 1,
2012 from http://ccednet-rcdec.ca/sites/ccednet-rcdec.ca/les/ccednet/pdfs/2005-050624_SROI_
Framework.pdf.
Porter, M. E., & Kramer, M. R. (1999). Philanthropy's new agenda: creating value. Harvard Business
Review, November/December, 121–130.
Reed, E., & Morariu, J. (2010). State of evaluation 2010: Evaluation practice and capacity in the
nonprofit sector. Innovation Network. Retrieved November 1, 2012 from http://www.innonet.org/
client_docs/innonet-state-of-evaluation-2010.pdf.
Roberts Enterprise Development Fund (REDF) (2001). Social return on investment methodology:
Analyzing the value of social purpose enterprise within a social return on investment framework.
Retrieved November 1, 2012 from http://www.redf.org/learn-from-redf/publications/119.
Roche, C. (1999). Impact assessment for development agencies: learning to value change. Oxford: Oxfam
GB.
Rosenberg, L. J., & Posner, L. D. (1979). The logical framework: a manager's guide to a scientific
approach to design and evaluation. Washington DC: Practical Concepts.
Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed.
New Haven: Yale University Press.
SIDA. (2004). The logical framework approach: A summary of the theory behind the LFA method.
Stockholm: SIDA.
SIDA. (2006). Logical framework approach, with an appreciative approach. Stockholm: SIDA.
Stark, D. (2009). The sense of dissonance: Accounts of worth in economic life. Princeton: Princeton
University Press.
Urban Institute. (2006). Building a Common Outcome Framework to Measure Nonprofit Performance.
Retrieved September 19, 2011 from http://www.urban.org/UploadedPDF/411404_Nonprot_Per
formance.pdf.
W. K. Kellogg Foundation. (2004). Logic model development guide: Using logic models to bring together
planning, evaluation and action. Battle Creek: W. K. Kellogg Foundation.
Wallace, T., Bornstein, L., & Chapman, J. (2007). The aid chain: Coercion and commitment in
development NGOs. Rugby, UK: Practical Action Publishing.
Waysman, M., & Savaya, R. (1997). Mixed method evaluation: A case study. American Journal of
Evaluation, 18, 227–237.
Weber, M. (1904/1949). The methodology of the social sciences (E. A. Shils & H. A. Finch, Eds. and
Trans.). New York: Free Press.
World Bank. (2006). Conducting quality impact evaluations under budget, time and data constraints.
Washington, DC: Independent Evaluation Group.
World Bank. (undated). The LogFrame handbook: A logical framework approach to project cycle
management. Retrieved November 1, 2012 from http://www.wau.boku.ac.at/leadmin/_/H81/H811/
Skripten/811332/811332_G3_log-framehandbook.pdf.