
New concepts, new challenges, new solutions

ABSTRACT BOOK
www.europeanevaluation.org

The 10th EES Biennial Conference, 3-5 October 2012, Helsinki, Finland

Strand 1: Evaluation governance, networks and information
Strand 2: Evaluation research, methods and practices
Strand 3: Evaluation ethics, capabilities and professionalism
Strand 4: Evaluation of regional, social and development programs and policies
Strand 5: Evaluation in government and in organizations


Contents
Oral Presentations

S5-03  Paper session: The use of evaluation in public policy
S2-26  Paper session: New evaluation tools and technologies
S2-41  Panel: New steps with Contribution Analysis: strengthening the theoretical base and widening the practice
S5-11  Paper session: The role of evaluation in the civil society I
S2-04  Paper session: Defining outcomes
S3-18  Panel: Assuring evaluation quality standards
S4-03  Paper session: Evaluating climate change and energy efficiency
S5-21  Panel: Does Performance Management Have a Future? Issues and Challenges
S3-01  Paper session: Gender and Evaluation: Approaches and Practices I
S4-17  Paper session: Evaluation of humanitarian aid
S3-33  Panel: Equity and Ethics
S3-23  Panel: Building evaluation capacity through university programmes: where are the evaluators of the future?
S5-22  Panel: Identifying and assessing capacity development outcomes: perspectives from the EU, UN, OSCE and the Council of Europe
S3-32  Panel: Equality and Equity: Improving the Evaluation of Social Programmes
S3-10  Paper session: Evaluation use and useability I
S5-02  Paper session: Auditing and evaluation
S3-19  Panel: Managing multiple perspectives in judging value in a networked evaluation world
S2-03  Paper session: Comparing and combining evaluative approaches
S1-13  Paper session: Network effects on evaluation and organization I
S4-15  Paper session: Evaluation of health systems and interventions I
S2-32  Panel: Complexity and systems thinking for evaluators
S1-06  Paper session: M&E systems and real time evaluation I
S2-39  Panel: Holding the state to account: using evaluation to challenge the theories, understandings and myths underpinning policies and programs
S1-04  Paper session: ICT systems for evaluation quality and use
S3-20  Panel: The impact of ethics: are code of conducts in evaluation networks necessary?
S2-01  Paper session: Approaches to evaluating research
S5-04  Paper session: Evaluation and governance I
S1-26  Panel: International Organization for Collaborative Outcome Management (IOCOM): The value and contribution to evaluation in the networked society
S2-11  Paper session: Evaluation in government and organizations
S4-24  Paper session: Monitoring, ongoing and ex-post evaluation
S1-20  Panel: The international evaluation partnership initiative
S5-26  Panel: Payment by Results: What results? How should future evaluations of such approaches be undertaken?
S1-08  Paper session: Evaluation for improved governance and management I
S4-21  Paper session: Evaluation of local, regional and cross border programs I
S2-13  Paper session: Evaluation use
S3-24  Panel: Building the capacity of beneficiary countries in monitoring and evaluation: contrasting methods and experience
S2-40  Panel: Tools and methods for evaluating the efficiency of development interventions
S5-19  Paper session: Evaluation Power, Power of Evaluation and Speaking Truth to Power
S2-35  Panel: Innovative Approaches to Impact Evaluation: Session 1
S4-01  Paper session: Evaluation and employment
S4-10  Paper session: Evaluability of public policies


S2-46  Panel: Agency And Evaluative Culture: Contributions Of Feminist Evaluation
S1-11  Paper session: Evaluation of international partnerships and collaborative networks
S4-25  Paper session: Evaluation of income support, credit and insurance interventions I
S5-27  Panel: Joint Evaluation of Dutch Development NGOs
S1-14  Paper session: Network effects on evaluation and organization II
S5-14  Paper session: The interaction of evaluation, research and innovation I
S2-15  Paper session: Improving evaluation practice
S2-36  Panel: Innovative Approaches to Impact Evaluation: Session 2
S2-37  Panel: What is excellent? The challenge of evaluating research
S4-04  Paper session: The impact of values and dispositions on evaluation approaches
S2-18  Paper session: Integrating ethics in evaluation
S3-26  Panel: Credentialing in Canada: two years later
S2-31  Panel: Performance management and evaluation: love at first sight or marriage of (in)convenience?
S5-16  Paper session: The role of evaluation in civil society II
S2-16  Paper session: Innovative methodologies in development evaluation
S2-22  Paper session: New or improved evaluation approaches I
S2-20  Paper session: Meta evaluation
S3-02  Paper session: Gender and Evaluation: Approaches and Practices II
S1-07  Paper session: M&E systems and real time evaluation II
S2-21  Paper session: Multinational evaluation
S2-34  Panel: Theories in evaluation
S3-22  Panel: Evaluation for equitable development
S4-12  Paper session: Evaluation of innovation policies and innovative programmes
S2-12  Paper session: Evaluation of competencies
S2-43  Panel: Risk Assessment, Monitoring and Evaluation in Food Safety: the case of the Codex Alimentarius
S3-11  Paper session: Evaluation use and useability II
S2-06  Paper session: Improving EU evaluation practice
S5-23  Panel: Monitoring and evaluating organisational culture: searching for the right questions
S2-45  Panel: The Roles and Complementarity between Monitoring and Evaluation Functions
S2-07  Paper session: Evaluating innovation
S3-27  Panel: Evaluating Research Excellence for Evidence-based Policy: The Important Role of Organizational Context
S5-05  Paper session: Evaluation and governance II
S2-17  Paper session: Innovative methods in evaluation
S2-05  Paper session: Probing the logic of evaluation logics
S2-44  Panel: A European Evaluation Theory Tree
S1-17  Paper session: Evaluation networks and knowledge sharing I
S5-28  Panel: The Role of Philanthropic Foundations in Development Evaluation
S5-08  Paper session: Evaluation in a European context I
S1-22  Panel: Sharing information in the networked society: options and requirements for the publication of (evaluation) results
S3-05  Paper session: Equity, Empowerment and Ethics
S2-14  Paper session: Evidence based policy and programs
S4-27  Paper session: Evaluation of research programmes
S4-29  Panel: Using research to rethink programme implementation: The Health Systems Strengthening Experience in Nigeria
S3-17  Paper session: Evaluation and gender mainstreaming
S5-20  Panel: Valuing Monitoring and Evaluation (M&E) Readiness Evidence for Evaluation
S3-14  Paper session: Evaluation data and performance assessment I
S2-02  Paper session: Capturing the contribution of programs in complex environments
S4-28  Panel: Addressing the micro-macro disconnect in the evaluation of climate change resilience


S3-25  Panel: Evaluation Capacity Development for International Development: Lessons Learned across Multiple Initiatives
S2-47  Panel: Meet the Authors
S5-09  Paper session: Evaluation in a European context II
S2-28  Paper session: Performance monitoring and evaluation tools
S1-25  Panel: Using theories of change and evaluation to strengthen networks: the case of evaluation associations
S4-23  Paper session: Evaluation of Health Systems and Interventions II
S4-19  Paper session: Real time evaluation for decision-making
S2-24  Paper session: New methods for impact evaluation
S3-06  Paper session: Evaluation capacity building and regional development
S3-07  Paper session: Capacity Development: Learning from experience I
S2-09  Paper session: Evaluation in complex environments I
S3-28  Panel: The Future of Evaluation (and What We Should Do About It)
S1-16  Paper session: Social networking, network associations and evaluation I
S2-29  Paper session: Predicting outcomes
S4-31  Panel: Meta-Evaluations: aggregated analysis to enhance learning in development cooperation
S4-09  Paper session: Evaluating environmental and social impacts
S3-15  Paper session: Evaluation data and performance assessment II
S5-10  Paper session: Evaluation in an educational Context
S3-03  Paper session: Gender-sensitive policies, human rights and development evaluation
S5-15  Paper session: The interaction of evaluation, research and innovation II
S3-13  Paper session: Evaluation credibility and learning II
S1-23  Panel: Information and communication technology for development
S1-21  Panel: Networking in development evaluation: Experiences from DAC, ECG, UNEG
S5-18  Panel: Jordan's Evaluation and Impact Assessment Unit: lessons in evaluation capacity building
S5-12  Paper session: Evaluation in the Health Care Sector
S4-30  Panel: Evaluating the Paris Declaration on aid effectiveness
S3-31  Panel: Evaluating empowerment: integrating theories of change, theoretical frameworks and M&E
S1-09  Paper session: Evaluation for improved governance and management II
S1-01  Paper session: Open source, data exchange and evaluation
S2-33  Panel: Joint evaluations: Advancing theory and practice
S2-10  Paper session: Evaluation in complex environments II
S3-08  Paper session: Capacity Development: Learning from experience II
S1-15  Paper session: Social networking, network associations and evaluation II
S4-14  Paper session: Food security and livelihood protection evaluation
S2-38  Panel: Evaluating conferences and events: new approaches and practice
S4-32  Panel: Comprehensive evaluation
S1-28  Panel: Evaluation in Turbulent Time
S5-13  Paper session: The influence of (New) Public Management Theory on Evaluation
S3-09  Paper session: Capacity Development: Learning from experience III
S3-21  Panel: Reframing the debate: what is ethical practice in international development evaluation?
S2-30  Paper session: The use (and abuse) of evaluation
S5-24  Panel: Environmental evaluation in the EU: a simple idea and a hard practice in a complex context
S4-22  Paper session: Evaluation of local, regional and cross border programs II
S2-23  Paper session: New or improved evaluation approaches II
S4-02  Paper session: Ex-ante evaluation through cost benefit and systems analysis
S4-26  Paper session: Evaluation of income support, credit and insurance interventions II
S1-18  Paper session: Evaluation networks and knowledge sharing II

Poster session
List of Speakers
List of Keywords



Oral Presentations
S5-03 Strand 5 Paper session

The use of evaluation in public policy


Wednesday, 3 October 2012, 9:30 - 11:00

O 001

The use of evaluation in public policy: An analysis of evaluations in the Norwegian government 2005-2011
J. Askim 1, E. Doeving 2, A. Johnsen 3
1 University of Oslo, Department of Political Science, Oslo, Norway
2 Oslo and Akershus University College of Applied Sciences, School of Business, Oslo, Norway
3 Oslo and Akershus University College of Applied Sciences, Department of Public Management, Oslo, Norway

The ideal of using evaluation as a policy tool is strong in public management. The policy of using evaluation has gained strength over the last thirty years, and much public money is spent on evaluations in many countries. However, according to Murray Saunders, evaluation practice as an object of research is completely underdeveloped. Earlier research indicates that the quality of evaluations affects their propensity to be used in political decision-making processes. This paper aims to address this gap in evaluation theory by empirically analysing evaluation practice in the Norwegian government. The governmental public management regulations demand that all ministries and agencies evaluate their activities, for example through cost-benefit analyses and process and impact evaluations. The Norwegian Government Agency for Financial Management has recently developed a database documenting all evaluations conducted for the ministries and agencies in Norway since the mid-2000s. In this paper we analyse these data with an emphasis on describing traits of the evaluations and the development over time in how the government has used evaluations as a policy tool. Preliminary analyses show that more than 700 evaluations have been conducted since 2005. This paper categorises these evaluations by, for example, policy area, type of evaluation, methods employed, providers of evaluations and size of the evaluations. The paper then analyses the data and explores hypotheses regarding how different governmental bodies design evaluation as a policy tool and how different political and administrative factors affect the use of evaluation in government. Keywords: Evaluation database; Design of evaluations; Use of evaluations
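As a purely illustrative sketch of the kind of descriptive categorisation described above (the records and column names are invented, not the schema of the Norwegian evaluation database):

```python
import pandas as pd

# Hypothetical extract of an evaluation database; columns and values are illustrative only.
evaluations = pd.DataFrame([
    {"year": 2006, "policy_area": "Health",    "type": "Impact evaluation",     "provider": "Consultancy"},
    {"year": 2009, "policy_area": "Education", "type": "Process evaluation",    "provider": "Research institute"},
    {"year": 2010, "policy_area": "Transport", "type": "Cost-benefit analysis", "provider": "Consultancy"},
])

# Describe traits of the evaluations: counts by type of evaluation,
# and how the volume develops over time and across policy areas.
print(evaluations.groupby("type").size())
print(evaluations.groupby(["year", "policy_area"]).size().unstack(fill_value=0))
```

With the real database, the same group-by tabulations would support the hypotheses about how evaluation design and use vary across governmental bodies.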

O 002

Ex ante evaluation in the Netherlands


D. Hanemaayer 1
1 Beleidsevaluatie.info, Oegstgeest, Netherlands

Ex ante evaluation can be an important instrument in the preparation of policy and programmes. Ex ante evaluation is about delivering information to politicians: information about the problems to be tackled by policy, about policy goals, about policy instruments, and about the costs and benefits of policy or programmes. With a good ex ante evaluation on the desk, politicians are able to take good decisions, which means effective and efficient policies and programmes. Since the beginning of the 21st century we have seen a broad rise of ex ante evaluation (impact assessment) in international organisations (for example the EU, OECD and UN), but we hardly see a comparable development at the level of national states and below. We will present an overview of the Dutch ex ante evaluation landscape in the period since 2000. Stable applications of types of ex ante evaluation include the mandatory environmental impact assessment for physical investments; the assessment of draft laws by the Council of State; the assessment of administrative burdens; the application of cost-benefit analysis (a specific form of ex ante evaluation) in decision-making on infrastructural investments; the stream of cost-benefit analyses by the CPB (the Netherlands Bureau for Economic Policy Analysis); and a recently introduced assessment instrument. We describe the slowly growing interest in social cost-benefit analysis and the very limited use of ex ante evaluation of goal attainment, but we will show some Dutch examples of this kind of ex ante evaluation. We will plead for the application of ex ante evaluation of policy goals and not only of costs and benefits, in discussing the theoretical underpinnings of cost-benefit analysis. We will discuss the relationship between ex ante and ex post evaluation, and, more generally, the limited drivers for the application of ex ante evaluation in a period in which fact-free policies seem to grow in popularity. Although ex ante evaluation is surely not the ultimate panacea for all shortcomings in policy-making, we plead for using ex ante evaluation of policy goals in the preparation of complex policy issues because it can surely contribute to effective and efficient policy. Keywords: Ex ante evaluation; Goal attainment


O 003

Theory-based evaluation: insights from public management theory for the assessment of the ESF assistance to administrative capacity building in Lithuania


V. Nakrosis 1
1 Public Policy and Management Institute, Vilnius, Lithuania

Theory-based impact evaluation was identified as one of the main evaluation approaches for assessing the effective delivery of European Cohesion policy. The underlying assumption of theory-based evaluation is that every intervention constitutes a theory whose causal chain from inputs to impacts can be tested by evaluators. While the theory-based approach is usually associated with impact evaluation, it can also be beneficial for the interim evaluation of policy interventions. There is little evidence about the potential benefits of the theory-based approach in the new intervention area of administrative capacity building, where ESF support was provided for the first time in the 2007-2013 programming period. Although previous evaluations of administrative capacity building considered such governance issues as the structure of governance and the decentralisation process in selected EU Member States, there have been no critical explanations of the ESF support to administrative capacity building based on public management theories. A good ground for testing the main assumptions and characteristics of theory-based evaluation in this policy area is Lithuania, where the largest share of ESF assistance (18%) among all EU-10 countries was allocated to administrative capacity building. The proposed presentation will share the main insights from the application of public management theory in the assessment of European Social Fund assistance to administrative capacity building in Lithuania. It will be based on the evaluation of Priority 4 "Strengthening administrative capacities and increasing efficiency of public administration" of the ESF-supported Human Resources Development Operational Programme, commissioned by the Lithuanian Ministry of Finance and carried out by the Public Policy and Management Institute (Vilnius, Lithuania) in 2011. This evaluation drew upon the general models of public management (traditional public administration, the New Public Management and governance) and more specific propositions from the academic literature on policy implementation, project management and change management, which were synthesised into a single theoretical framework. The evaluation employed a mixed (quantitative-qualitative) methodological approach to data gathering and analysis, involving such methods as desk research, analysis of monitoring data, statistical analysis, interviews, surveys and case studies. Among other things, the evaluation found that certain assumptions of organisational change behind some measures of Priority 4 failed to materialise during programme implementation in the context of the financial-economic crisis, the overloaded agenda of state/municipal institutions and staff lacking motivation. A combination of the limited organisational maturity of state/municipal institutions and the insufficient supply of capacity building services in the domestic market constrained the effective delivery of various capacity building interventions. Furthermore, the NPM-based implementation approach to strengthening administrative capacities did not enable a results-based orientation of the ESF assistance. From the theoretical point of view, although no impact assessment of capacity building interventions was possible in the middle of programme implementation, the evaluation successfully tested several hypotheses of organisational change and identified a number of factors critical for the design and implementation of these interventions.
The lessons of this evaluation could be useful for the future management and evaluation of ESF-supported interventions in the area of administrative capacity building. Keywords: Theory-based evaluation; Public management; European Social Fund; Administrative capacity building;


O 004

Evaluating legislation: the experience of the EU VAT evaluation


G. Ebling 1, J. Berlinska 1
1 European Commission, DG TAXUD, Brussels, Belgium

The European Commission has long been an active player in the development of evaluation, its standards and good practices, particularly for expenditure measures. It is, however, fairly recent that emphasis has been put on the evaluation of legislation and other non-budgetary measures; the expertise and know-how is growing. In this context, DG Taxation and Customs Union has conducted a retrospective evaluation of the most pertinent elements of the EU VAT system.
The triggers of the evaluation: The complexity of VAT results in administrative burdens for businesses; dealing with VAT accounts for almost 60% of the total burden measured for the 13 priority areas identified in the context of the Better Regulation Agenda. On the other hand, the growing dependence of the EU Member States on VAT as a source of revenue for financing their policies also highlighted the need to tackle its vulnerability to fraud. Technological progress and changes in the economic environment created as many opportunities as threats and challenges, revealing the potential vulnerability of the current EU VAT structure.
The scope of the evaluation: The evaluation looked into the design and implementation of certain VAT arrangements, identified as most pertinent, assessing their effectiveness and efficiency in terms of the effects they had created. It also examined their relevance and their coherence with the notion of the smooth functioning of the single market. The emphasis was put on the economic aspects of the cross-border phenomena related to VAT and their consequences for the single market.
Methodology: Given its economic nature, the study reconciled to the maximum possible extent the classical evaluation methodology with economic modelling.


Results: VAT exemptions are the most obvious and probably the most economically damaging feature, creating significant distortion and deflection of trade in goods and services, reducing productivity and output and generally impeding the successful completion of the single market. The extensive use of reduced rates creates few desirable effects at a high fiscal cost, and they add to the complexity of the system. There are far too many differences in VAT procedures across the EU Member States: it is estimated that a 10% reduction in differences in VAT procedures could boost intra-EU trade by as much as 3.7% and GDP by up to 0.4%. VAT requirements generate high administrative and compliance costs for businesses and administrations. The level of VAT evasion and avoidance is still worrisome. Keywords: Policy; Legislation; Economic; EU



S2-26 Strand 2 Paper session

New evaluation tools and technologies


O 005

Capturing Technology for Development


A. E. Flanagan 1, S. Wegner 1, K. Ruiz 1
1 World Bank Group, Independent Evaluation Group, Washington DC, USA

Wednesday, 3 October 2012, 9:30 - 11:00

The proposed paper, Capturing Technology for Development, is well aligned with the theme of the EES conference, "Evaluation in a networked society: New concepts, New Challenges, New Solutions". The unprecedented increase in access to telephony and data services in developing countries has opened up opportunities to harness the potential of Information and Communication Technology (ICT) for development. In addition, the existence of a global network presents opportunities and challenges for the way in which development institutions deliver (and monitor and evaluate) their services to clients. In this context, the paper will capture both the effectiveness of World Bank Group support to ICT infrastructure and its enabling environment as a tool for development, and its experience in integrating ICT components into projects to enhance the delivery of services to the public and to make governments more transparent and effective. The proposed paper would present the results of a mix of approaches used to assess the effectiveness of public and private projects in the ICT sector and ICT-using sectors. One approach is an econometric analysis investigating the role of World Bank Group interventions (World Bank policy reforms and IFC investments) in influencing the speed of mobile diffusion in developing countries, using a unique variable measuring the presence or absence of World Bank Group involvement in a country's telecommunications sector. Establishing a correlation between public money and higher mobile telephony rates supports the argument that the World Bank Group has an important role to play in increasing access to information. The paper would also address the potential of using ICT to support other development issues and objectives, and outlines areas where the government and the private sector can play a role in supporting the roll-out of networks, fostering the adoption of technology and, importantly, supporting the necessary complementary factors for success.
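A minimal sketch of the econometric approach described above, regressing mobile penetration on a binary indicator of World Bank Group involvement in a country-year panel; the data, variable names and controls are invented for illustration and are not the IEG dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative country-year panel; the real analysis uses IEG data on mobile diffusion.
panel = pd.DataFrame({
    "country":       ["A", "A", "B", "B", "C", "C"],
    "year":          [2005, 2010, 2005, 2010, 2005, 2010],
    "mobile_per100": [12.0, 55.0, 8.0, 30.0, 20.0, 70.0],
    "wbg_involved":  [1, 1, 0, 0, 1, 1],   # presence/absence of World Bank Group involvement
    "gdp_pc":        [1.2, 1.8, 0.9, 1.1, 2.5, 3.0],
})

# Pooled OLS with a treatment dummy, a simple control, and year effects via C(year).
model = smf.ols("mobile_per100 ~ wbg_involved + gdp_pc + C(year)", data=panel).fit()
print(model.summary())
```

In practice such an analysis would use many more countries, additional controls, and panel estimators; the sketch only shows where the involvement dummy enters the specification.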

O 006

New Developments in Using Administrative Data for Formative and Summative Impact Evaluation with Examples
G. Henry 1
1 Education Policy at Carolina, Public Policy, University of North Carolina at Chapel Hill, Chapel Hill, USA

In the past decade, many administrative databases for education, social services, health and employment have been developed that can be fruitfully combined and used for evaluation. In this paper, we explore some of the most promising methods for combining and analyzing these data, including longitudinal datasets, and show how they are beginning to be applied to issues of effectiveness and equity in resource distribution. Specifically, we will show how multiple administrative datasets have been combined to evaluate programs, personnel and policy. In our examples, we demonstrate the use of these data to evaluate teachers, school reforms, and training and professional development programs. We include examples of formative impact evaluation and summative impact evaluation. The examples will illustrate not only the data and analytical techniques but also findings that could stimulate improvement, as well as novel insights concerning the equitable distribution of important resources, such as more effective teachers, that can come only from combining multiple datasets. The examples are based on the U.S., but the approaches can be applied in both the developed and developing world. Keywords: Administrative databases; Methods for formative impact evaluation; Methods for summative impact evaluation; Effectiveness; Equitable distribution of resources
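A minimal sketch of what linking administrative records into a longitudinal analysis file can look like; the identifiers, fields and values are hypothetical and not drawn from the datasets used in the paper:

```python
import pandas as pd

# Illustrative administrative extracts; identifiers and fields are hypothetical.
students = pd.DataFrame({"student_id": [1, 2, 3], "school_id": [10, 10, 20]})
teachers = pd.DataFrame({"teacher_id": [100, 101], "school_id": [10, 20], "pd_program": [1, 0]})
scores   = pd.DataFrame({"student_id": [1, 1, 2, 3], "year": [2010, 2011, 2010, 2010],
                         "test_score": [0.2, 0.5, -0.1, 0.3]})

# Link student outcome records to schools and teachers to build a longitudinal panel.
panel = (scores.merge(students, on="student_id")
               .merge(teachers, on="school_id"))

# Compare mean outcomes for students whose teachers received professional development.
print(panel.groupby(["year", "pd_program"])["test_score"].mean())
```

Real applications hinge on reliable record linkage keys and value-added or quasi-experimental models layered on top of such a merged file; the sketch only shows the linkage step.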


O 007

The usefulness of game theory as a method for policy evaluation


L. Hermans 1, S. Cunningham 1, J. Slinger 1
1 Delft University of Technology, Faculty of Technology, Policy and Management, Delft, Netherlands


Most of today's public policies are formulated and implemented in multi-actor systems and networks. Interdependent actors jointly need to agree on shared policy objectives and associated measures, and multiple actors influence subsequent policy implementation and its impacts. Evaluating policies that are formulated and implemented in such multi-actor systems requires insight into the interactions among actors and how these influenced the outcomes. In planning theory it has long been accepted that policies and programmes are not implemented as planned, but that important deviations occur during implementation. Such deviations and emergent strategies are even more likely when several (semi-)autonomous actors are involved in policy implementation. Thus, if evaluators seek to understand how policy impacts come about, they need to look into the black box of implementation. Game theory has long been around as a method that supports a rigorous analysis of the interaction processes among actors. However, so far it has not been widely applied in the evaluation field. And although there are known limitations involved in using game-theoretic models in a real-world setting, there are also several examples of past applications where game theory has been helpful as part of policy analysis and institutional analysis. Hence, when searching for a new methodological toolbox to help analyze implementation processes in a networked society, questions regarding the usefulness of game theory as an evaluation method remain pertinent. This paper reports an application of game theory to evaluate the implementation of coastal policy decisions in the Netherlands. In particular, it evaluates the implementation of a national policy that was first formulated in 1990, followed by regional implementation processes driven by local actors. Based on this case, the usefulness of game theory as a method for evaluations is explored. This is done by addressing methodological requirements such as analytical rigor, practical feasibility, and the usefulness of the resulting insights. Keywords: Game theory; Multi-actor systems; Coastal policy; Policy implementation
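To make the game-theoretic lens concrete, the sketch below enumerates the pure-strategy Nash equilibria of a hypothetical two-actor implementation game; the actors, strategies and payoffs are invented and are not taken from the Dutch coastal policy case:

```python
import itertools

# Illustrative 2x2 implementation game between a national agency and a regional actor.
# Payoffs are (agency, region) and are hypothetical, chosen only to show the mechanics.
strategies = ["implement", "delay"]
payoffs = {
    ("implement", "implement"): (3, 3),
    ("implement", "delay"):     (1, 2),
    ("delay",     "implement"): (2, 1),
    ("delay",     "delay"):     (0, 0),
}

def best_response(player, own, other):
    # True if `own` is a best response to the other player's strategy.
    def pay(strategy):
        profile = (strategy, other) if player == 0 else (other, strategy)
        return payoffs[profile][player]
    return all(pay(own) >= pay(alt) for alt in strategies)

# A profile is a pure-strategy Nash equilibrium if both strategies are mutual best responses.
equilibria = [p for p in itertools.product(strategies, strategies)
              if best_response(0, p[0], p[1]) and best_response(1, p[1], p[0])]
print(equilibria)  # [('implement', 'implement')]
```

An evaluation application would derive the players, strategy sets and preference orderings from empirical material on the implementation process rather than assuming them, which is where the methodological questions of rigor and feasibility arise.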


O 242

Finding a Comparison Group: Is Crowdsourcing a Viable Option?


T. Azzam 1, J. Miriam 1
1 Claremont Graduate University, School of Behavioral and Organizational Sciences, Claremont, USA

This paper presents a research study exploring the methodological viability of crowdsourcing as a way to create matched comparison groups. The study compares the results from a survey of a truly randomized control group to the survey results of a matched comparison group that was created using Amazon.com's MTurk crowdsourcing service. Both the control group and the matched comparison group were compared to the truly randomized treatment group to determine the methodological reliability of crowdsourcing. Initial results indicate that this approach is a viable option for evaluation designs that do not have ready access to a comparison group, large budgets, or time. This methodological approach can help transform many real-world evaluations in a quick and cost-effective way. The paper will highlight the strengths and limitations of this crowdsourcing approach along with a description of the process for using crowdsourcing in evaluation practice. Increasingly, evaluations are being used to understand how much programs impact those who participate. To obtain the most accurate measure of program participant outcomes, it is often recommended to compare outcomes for those participating in a program to outcomes for a randomized control group or a non-randomized comparison group with very similar characteristics to the program participants. Statistical techniques used to identify a good comparison group, such as propensity score matching, often require the evaluator to collect outcome data from an unfeasibly large number of participants beyond those already enrolled in the program. Amazon's MTurk system is an online crowdsourcing service: a website owned by Amazon through which people can be recruited online to complete tasks such as filling out a survey or testing a website. This system could allow evaluators to create a high-quality comparison group at a low cost. The current study aimed to demonstrate how Amazon's MTurk and propensity score matching can be used to create a comparison group for an evaluation of a program for college students. Initial findings suggest that crowdsourcing is a viable approach for creating comparison groups; however, there are certain limitations to this approach. For example, crowdsourcing websites such as MTurk require that individuals be 18 years or older to participate, and this would exclude the utility of this method in the evaluation of many education programs serving children under the age of 18. But even with this limitation in mind, many program evaluations can benefit from the addition of a crowdsourced matched group because it is very cost-effective, and data can be collected quickly, in a matter of days. This could transform many evaluation designs and improve the internal validity of their results. Keywords: Crowdsourcing; Comparison group; Quasi-experimental design; Impact evaluation; Matched group design
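The core mechanics of building a matched comparison group from a crowdsourced pool, as described above, can be sketched roughly as follows; the covariates, sample sizes and model choices are illustrative assumptions, not the study's actual design:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g. age, prior GPA) for program participants and a crowdsourced pool.
treated_X = rng.normal(loc=[21, 3.2], scale=[2, 0.4], size=(50, 2))
pool_X    = rng.normal(loc=[30, 3.0], scale=[8, 0.5], size=(500, 2))

# 1. Estimate propensity scores: probability of being a program participant given covariates.
X = np.vstack([treated_X, pool_X])
y = np.r_[np.ones(len(treated_X)), np.zeros(len(pool_X))]
scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
treated_scores, pool_scores = scores[: len(treated_X)], scores[len(treated_X):]

# 2. For each participant, pick the crowdsourced respondent with the closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(pool_scores.reshape(-1, 1))
_, idx = nn.kneighbors(treated_scores.reshape(-1, 1))
matched_comparison = pool_X[idx.ravel()]
print(matched_comparison.shape)  # one matched comparison case per participant
```

A real application would also check covariate balance after matching and typically use calipers or matching with replacement; the sketch shows only the score-and-match logic.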


S2-41 Strand 2 Panel

New steps with Contribution Analysis: strengthening the theoretical base and widening the practice
O 008

New steps with Contribution Analysis: strengthening the theoretical base and widening the practice
J. Toulemonde 1, F. Leeuw 2, S. Lemire 3

Wednesday, 3 October 2012, 9:30 - 11:00

1 EUREVAL and Lyon University, Lyon, France
2 Dutch Ministry of Justice, The Hague, Netherlands
3 Ramboll Management, Copenhagen, Denmark

Contribution Analysis is a pragmatic approach to applying the principles of theory-based evaluation. It follows the causal chains along their whole length, reports on whether the intended changes occurred or not, and identifies the main contributions to such changes, hopefully including the programme under evaluation. Over the last ten years, Contribution Analysis has been repeatedly recommended in a number of evaluation guidelines and has attracted visible interest in international events, including the Prague EES Conference in 2010. However, instances of rigorous implementation have been surprisingly scarce, and its theoretical foundations are still fragile. A special issue of Evaluation on Contribution Analysis, edited by John Mayne, will be issued by mid-2012. The session will gather several of the authors involved with this Special Issue. It intends to highlight the latest steps taken and the challenges ahead in terms of theoretical foundations, practicalities, and quality standards. The speakers will be Frans Leeuw (Dutch Ministry of Justice), Sebastien Lemire (Ramboll Management Consulting), Jacques Toulemonde (Eureval and Lyon University), Rob D. van den Berg (Global Environment Facility), and Erica Wimbush (National Health Service Scotland). The topics covered will be the following: (i) formulating valid contribution claims, (ii) investigating external factors in a systematic way, (iii) analysing catalytic effects, (iv) assessing the quality of a contribution analysis, and (v) satisfying the needs of evaluation users with a contribution analysis. Keywords: Causality; Impact evaluation; Theory based evaluation


S5-11 Strand 5 Paper session

The role of evaluation in the civil society I


O 010

PADev as a method for assessing agencies


W. Rijneveld 1, F. Zaal 2
1 Resultante, Gorinchem, Netherlands
2 Royal Tropical Institute, Amsterdam, Netherlands

Wednesday, 3 October 2012, 9:30 - 11:00

PADev is an evaluation methodology developed since 2008 in Ghana and Burkina Faso. The perspective it adopts is that of the local community: in participatory workshops, men and women, older and younger people, officials and common people investigate what the changes have been in their livelihood domains over the past thirty years. An inventory of all interventions, projects and initiatives in the area in that period allows them to track the relationships between the changes and the interventions, and the effects of interventions on the various categories of people that they represent and describe. The effects on various wealth classes are a prominent issue. In one of the exercises, people list the five to ten major agencies that intervene in their area and rate them on a number of criteria related to the process and outcomes of interventions, criteria that were determined during previous exercises. These include issues like relevance, long-term commitment, realistic expectations, honesty and transparency, and level of participation. Combining the results from different subgroups provides a broader picture of how agencies are rated by the population, as well as interesting differences between men and women, young and old, or between subgroups from different geographical locations. This exercise indicates that people's perceptions, when captured in a systematic way, can provide an interesting alternative or addition to the usual way of organizational assessment. The fact that this is done through participatory workshops with different subgroups, and with a scope that includes all local intervening actors and not just a single organization, helps to eliminate biases towards a specific organization. The presentation will elaborate the method used and the outcomes of the assessment, with an analysis of the different criteria, differences between types of actors (e.g. government and non-government), and differences between genders and between locations. A number of examples will give details on the sort of information that is obtained and how it could serve to have intervening agencies strive for the position of "best agency". The presentation will also discuss when and where this method may be a useful addition to the evaluator's toolbox. Keywords: Participation; Organizational Assessment; Downward accountability
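Combining subgroup ratings into an overall picture, as described above, amounts to straightforward aggregation; a rough sketch with invented agencies, subgroups and scores (not actual PADev data) might look like this:

```python
import pandas as pd

# Hypothetical workshop ratings (1-5) of agencies by different subgroups on PADev-style criteria.
ratings = pd.DataFrame([
    {"agency": "NGO A", "subgroup": "women",     "criterion": "transparency",  "score": 4},
    {"agency": "NGO A", "subgroup": "men",       "criterion": "transparency",  "score": 3},
    {"agency": "Gov B", "subgroup": "women",     "criterion": "transparency",  "score": 2},
    {"agency": "Gov B", "subgroup": "officials", "criterion": "participation", "score": 5},
    {"agency": "NGO A", "subgroup": "officials", "criterion": "participation", "score": 4},
])

# Overall picture per agency and criterion, plus the subgroup breakdown that surfaces
# differences between, for example, men and women or officials and common people.
overall  = ratings.groupby(["agency", "criterion"])["score"].mean().unstack()
by_group = ratings.pivot_table(index="agency", columns="subgroup", values="score", aggfunc="mean")
print(overall)
print(by_group)
```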

O 012

Building capacities of Czech development CSOs based on the FoRS (Czech Forum for Development Cooperation) Code on Effectiveness
I. Pibilova 1, D. Svoboda 2, J. Bohm 3
1 Independent Consultant, Prague 10, Czech Republic
2 Development Worldwide, Prague 2, Czech Republic
3 SIRIRI, Prague 2, Czech Republic

The Czech Republic is an emerging donor; nevertheless, in the last 10 years more than 60 Czech CSOs have been involved in international development. Most of these CSOs formed the Czech Forum for Development Cooperation (FoRS) for joint policy, advocacy, capacity building and international networking. FoRS members have been engaged in the global debate on CSO development effectiveness since 2007, resulting in the approval of the FoRS Code on Effectiveness and the first self-evaluation in 2011. The Code provides indicators in the following areas: 1. Grassroots knowledge, 2. Transparency and accountability, 3. Partnership, 4. Respect for human rights and gender equality and 5. Accountability for impacts and their sustainability. It is the first of its kind in the EU-12 and one of a few among the EU development platforms. The Code and the results of the self-evaluation have been widely shared. Workshops have been organized as per the needs identified. However, it has been realized that innovative solutions were needed to ignite a wider debate on development effectiveness and to foster capacity building among diverse actors, including less developed, often volunteer-based CSOs, individual experts and students. Therefore two new concepts have been launched: A. www.DevelopmentCoffee.org is an on-line platform based on crowdsourcing, on which anybody can propose themes and vote. Monthly evening coffees on the most popular themes are hosted by different CSOs. After an introduction by experts who explain relevant links to the Code, an informal debate follows among diverse (non-)state actors. B. Peer Reviews allow a structured, in-depth debate on each principle of the Code and mutual learning among self-selected peer CSOs. The process is facilitated through the FoRS on-line forum. Once piloted, this approach will be shared with (inter)national platforms within the global post-Busan process on CSO development effectiveness. The advantages of both concepts lie in quick, easy set-up with zero costs as well as in capacity building fully driven by target groups. The EES presentation outlines the underlying assumptions, methodology and key lessons learnt with respect to the concepts above and thus provides an impulse for creating similar initiatives by other actors or for utilising some elements elsewhere. The EES criteria are reflected as follows: 1. Relevance: The concepts can be utilised by evaluation practitioners, their networks and by any professional platform/network interested in free experience sharing and mutual learning. 2. Quality: The concepts will be introduced in a structured way including a demonstration and a short manual.

3. Theme of the Conference is directly addressed as the concepts utilise the networked society for enhancing development effectiveness. Both concepts are fully participatory, easy to access and share. Both use social media for promotion, idea generation and engagement. 4. Evaluation knowledge and skills: Both concepts provide an open platform for promoting, evaluating and sharing experiences on effectiveness principles.


5. Creativity/innovation: DevelopmentCoffee.org is a unique crowdsourcing initiative in the EU development sector (to the authors' knowledge). Peer reviews are also newly employed for enhancing CSO development effectiveness at the national level. 6. Public interest: The concepts were generated in an emerging donor country and bring the perspective of new actors. Keywords: Social media; Crowdsourcing; Self-evaluation; Peer review; CSO development effectiveness

O 134

The contribution of CSOs in development: the challenges of tracking and documentation of results in CSOs in Tanzania
D. Biria 1
1 Tanzania Evaluation Association, Administration, Dar es Salaam, Tanzania

Introduction: In recent years many stakeholders have gradually started to realize the importance and contribution of the civil society sector to the development of the country. However, there remains one major challenge: CSOs failing to account for their contribution quantitatively and qualitatively.
Background of the study: From the early 1990s to date there has been a sharp increase in the number of civil society organizations in the country. This is partly due to the democratization of governance processes and the economic liberalization policy, which forced the government to pull out of business and the provision of some services. The gap created by the withdrawal of government was filled by CSOs and the private sector. Dr. L. Ndumbaro (2007), the Foundation for Civil Society (FCS) (2006) and others have shown that the contribution of CSOs is enormous but needs to be substantiated by facts and figures. To this end, if CSOs are to prove their relevance they have to invest in monitoring and evaluation so that they will be able to track and document results.
Problem statement: Findings of the CSO Capacity Assessments commissioned by the UNDP (UNDP, 2006) and FCS (2009) showed that CSOs fail to effectively show the results of their interventions. Though CSOs claim to contribute immensely to development, there remains no evidence in writing. What is really missing is what comes out of those interventions, not activity targets only.
Management questions: (a) Why do CSO leaders fail to track the results of their work? (b) What can be done to overcome this situation?
Research questions: (a) What are the factors leading to low or no tracking and documenting of the results of CSOs' work? (b) What actions should be taken to establish and support M&E functions in CSOs' work?
Objectives: (a) Examine factors responsible for low tracking and documenting of CSOs' work. (b) Identify specific actions required to change the mindset of leaders in CSOs to realize the importance of tracking beyond activity and output level. (c) Establish how M&E systems can be put in place where they are non-existent and strengthened where they exist within the sector.
Significance of the study: To improve tracking, documentation and dissemination of CSO work in the country.
Methodology: Research design: Personal semi-structured interviews will be conducted, followed up with visits to interview CSO members. Population: There is a variety of CSOs in the country; according to Dr. Ndumbaro (2006) there are over 8,000 CSOs in the country. Sampling and sampling technique: The study will adopt a stratified random sampling method to select CSOs for the study; 20% of the total population will be used. Data collection: This study will collect both qualitative and quantitative data. The secondary data will contribute to the background information and will determine the scope of primary data collection. Data collection instrument: Both structured closed-ended and open-ended questions will be used to obtain quantitative and qualitative information respectively.
Keywords: Tracking; Outcome; Evaluation; Documentation; Dissemination
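The stratified random sampling step in the methodology can be sketched as follows; the CSO register, strata and sizes are invented for illustration, with only the 20% sampling fraction taken from the abstract:

```python
import pandas as pd

# Hypothetical register of CSOs; in the study the frame would be the roughly 8,000 registered CSOs.
csos = pd.DataFrame({
    "cso_id": range(1, 101),
    "region": ["Dar es Salaam"] * 40 + ["Arusha"] * 30 + ["Mwanza"] * 30,  # strata
})

# Stratified random sample: draw 20% within each stratum so every region is represented.
sample = csos.groupby("region").sample(frac=0.20, random_state=1)
print(sample.groupby("region").size())  # 20% of each stratum
```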


S2-04 Strand 2

Paper session

Defining outcomes
S2-04
O 013

Seeking a Valid Attribution Model for Conflict Interventions


E. Brusset 1
1

Channel Research, Lasne, Belgium

Wednesday, 3 October, 2012

9:30 – 11:00

Brief bio: Mr Emery Brusset is an evaluation consultant specialised in conflict situations. He has been experimenting with different methods that allow for evidence-based assessments in situations such as Afghanistan or Melanesian societies, over 17 years of continuous experience. He is about to submit a PhD at LSE on complexity theory applied to evaluation, from which this presentation draws. The main challenge for the evaluation of interventions that occur in the context of conflict (particularly for the evaluation of peace interventions) is the causal link between specific outcomes and broad improvements in the situation. Conflict situations are particularly attuned to what a growing literature on complexity theory now defines as complex interconnected systems, including highly networked actors and societies that are bound together by strong tensions. That situation leads to low quality of predictions, and to poor correlations between causes and effects. If impact is defined as the actual positive, negative, intended or unintended consequence of an outcome, peace impact assessment will be particularly challenging in networked conflicts (please note the link to the conference theme). The main bodies of thinking in this area today rely on variants of Theories of Change and contribution analysis, which are essentially tracking cascades of effects. The presentation will outline why these are not optimal solutions to apply in conflict interventions. The level of attribution remains very low, as shown in recent evaluations, such as the Joint Donor Evaluation of Peace-Building in Sudan and the Joint Donor Evaluation in Congo, both carried out under the aegis of the OECD, in which the author played a central role. Simply put, the quality of evaluations of peace-building today is chronically poor, and this is not due to the quality of the teams, or to lack of time or information. The reasons are to be found in methods. The challenge for contribution analysis and Theories of Change is mainly to do with the difficulty of assessing the strength of links in a causal chain, and with the phenomenon of evaporation of objectives, a chronic condition in the fast-moving change that prevails in the object of evaluation. To overcome this, we know of three main attribution models: 1. Baselines capturing initial conditions, to have a before/after reference. 2. Quasi-experimental designs, using control groups that are not benefiting from the intervention. 3. Interaction models using the identification of outcomes, and their interaction with the key catalysts in a conflict dynamic. The proposed paper will outline the analytical steps in the latter attribution model, explaining the parallel to impact assessment as it is used in mining and petroleum projects, and pointing out in which way it could be a better model of attribution for complex systems. The paper will rely on specific academic thinking which could inform evaluation, and on concrete examples (Afghanistan, Indonesia, New Caledonia, Colombia, Congo and Sudan) of what has worked and not worked in the implementation of conflict evaluations in recent years. Audience: This paper will be written primarily for evaluation commissioners and managers, as well as a broader public interested in the evaluation of politics. It will present options for the proper use of evaluation as an analytical tool but also as a tool for enhanced participation. Keywords: Conflict evaluation; Complexity theory; Theory of change; Contribution analysis; Impact assessment;

O 014

Developing an Index to Evaluate Effectiveness of Sanitation Program


R. S. Goyal 1, M. Chaudhary 1
1

Institute of Health Management Research, Jaipur, India

The effectiveness of investments in household sanitation programs can best be assessed by going beyond the physical outputs (number of toilets built/percent of population covered) and looking at components such as the extent to which toilets are maintained and used by the community. It should also incorporate the impact of the program in terms of bringing about a decline in morbidity associated with lack of sanitation and an improvement in the quality of life, particularly of women (who are at the receiving end due to lack of sanitation facilities within the house). This paper seeks to discuss a theoretical framework to develop an index to assess the effectiveness of sanitation programs in urban and rural areas in developing countries. It is desirable that the Sanitation Effectiveness Index reflect: effectiveness of the interventions/program (policy, investment, reach, coverage, etc.); appropriateness and affordability of technology; socio-cultural and physical acceptability; and outcomes and impact in terms of lowered morbidity (associated with lack of sanitation), improved quality of life particularly of women (in terms of convenience, time saved, safety/security) and improved general hygiene conditions.


Given that the standalone and cumulative contributions of these factors would be manifested differently in different societies/countries, the challenge would be to identify the most widely shared contributors/variables, affecting and acceptable to most nations. Similarly, we shall have to decide on the relative contribution/weightage of each of these factors in the index. On the basis of our literature search, we are considering the following variables for the construction of the index. Effectiveness of the interventions/program: policy and political commitment (e.g., subsidy for construction of toilets), strategy and programs, budget allocation, mainstreaming, program implementation mechanism and human resources, geographical reach, population covered, public and private expenditure, proportion of toilets maintained and regularly used, monitoring and oversight. Appropriateness and affordability of technology: cost-effective and user-friendly, widely available, appropriate in local contexts (e.g., scarcity of water, recycling of excreta). Socio-cultural and physical acceptability: knowledge, understanding and appreciation of usage and benefits, priority, social relevance (gender concerns, equal access to all), cultural acceptability (e.g., building a toilet in the house).


Outcomes and impact: prevalence of diseases associated with lack of sanitation (such as diarrhoea, malaria, schistosomiasis, trachoma, intestinal helminths (ascariasis, trichuriasis, hookworm), Japanese encephalitis, hepatitis A, arsenic and fluorosis), family expenditure on medical care for these diseases, reach of the toilet program to all sections of society, quality of life (particularly of women) in terms of convenience, time saved and safety/security, benefits accrued from recycling of excreta, and improvement in general hygiene conditions. The paper will review the efficacy of including these variables in the index. To determine the relative contribution/weight of each variable in the index, a principal component factor analysis will be attempted. The loadings of the different variables on the principal component will be used to determine their relative weights in the index. We shall attempt to validate this index on data from India. Keywords: Sanitation; Evaluation; Effectiveness; Index; Outcomes and impact;
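[Editor's illustration, not part of the authors' abstract.] The weighting step described above can be sketched in a few lines: the first principal component is extracted from the standardised indicator matrix and its loadings, rescaled to sum to one, serve as candidate weights for a composite index. The indicator names and data below are hypothetical.

```python
# Illustrative sketch of PCA-based index weighting (hypothetical data).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household-level indicators for the index.
df = pd.DataFrame({
    "toilet_maintained": np.random.binomial(1, 0.6, 500),
    "toilet_used_regularly": np.random.binomial(1, 0.7, 500),
    "diarrhoea_episodes": np.random.poisson(1.2, 500),
    "time_saved_minutes": np.random.normal(30, 10, 500),
})

X = StandardScaler().fit_transform(df)                 # standardise the indicators
pca = PCA(n_components=1).fit(X)                       # first principal component
loadings = pca.components_[0]                          # loading of each indicator
weights = np.abs(loadings) / np.abs(loadings).sum()    # rescale loadings to sum to 1

index_score = X @ weights                              # weighted sum of standardised indicators
print(dict(zip(df.columns, weights.round(3))))
```

In practice the authors would of course also decide the direction (sign) of each indicator and test the index against the Indian data they mention; the sketch only shows the mechanics of deriving weights from first-component loadings.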

O 015

Measurement in evaluation: what do the numbers actually tell us?


A. Doucette 1
1

The George Washington University, The Evaluators Institute, Washington D.C., USA

The relevance of evaluation is often thought about in terms of assessing the need for intervention, profiling trajectories of change as a result of program exposure (outcome), estimating program impact, determining cost benefit, and so forth. While there is general agreement on the importance of evaluation, there is much discussion on how to build an adequate evidence base on which to investigate what works for whom, under what conditions, and why. In a world where social program costs are rising exponentially, measuring program outcomes and impact is becoming a significant tool in informing program and policy decisions. Program outcome research is designed to address several questions. Is the program effective? What amount of exposure is necessary to get a good outcome? Is one approach better than another? What accounts for variance in outcomes? What is the relationship of outcome to cost? These are just some of the questions asked by evaluators and policy decision-makers, governments and funders. What is common to the evaluation of all of these issues is their dependence on measurement. Although we applaud the use of sophisticated analytic models that allow us to parcel out the variance attributed to program and participant characteristics and the contribution of specific approaches and circumstances, we rarely question the soundness of the measures used to support the evaluation decisions made about how a program/intervention works, how participants change, and how effective a program is in terms of participant/stakeholder outcomes. More often than not, we assume measurement precision as opposed to scrutinizing the quality of the measurement we relied on to support the theories that are developed, and the decisions that are made about a program or intervention. The presentation focuses on the role of measurement, the assumptions we make in selecting measurement approaches and the goodness of fit between the selected measurement metrics and the objectives of the evaluation. While measurement is only one step in the evaluation process, it is nevertheless the foundation. The complexity of the evaluation environment presents new challenges. For example, while educational reform efforts in developing countries target 21st century skills (communication, collaboration, self-managed learning, etc.), evaluations of those efforts continue to rely on standardized tests. The presentation will focus on the measurement approach taken in educational reform initiatives in developing countries, highlighting the incorporation of sophisticated measurement models (Item Response Theory); what we can learn from model fit, and more importantly, model misfit; and whether measurement data adequately address the evaluation objectives. The challenges of applying measures to diverse populations will also be addressed, with specific emphasis on how measures and participant response options may function differently than expected. Data from several evaluation studies will be used to illustrate often ignored measurement challenges for evaluation practice. Keywords: Measurement; Methods;
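[Editor's aside, not part of the author's abstract.] The Item Response Theory models mentioned above can be illustrated by the two-parameter logistic form, one common IRT specification; the notation is the standard textbook one rather than anything drawn from the studies the author cites:

$$ P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\left(-a_j(\theta_i - b_j)\right)} $$

Here \(\theta_i\) is respondent \(i\)'s latent ability, \(b_j\) the difficulty of item \(j\), and \(a_j\) its discrimination. Model fit, and especially misfit, is then judged by comparing the response patterns the model predicts with those actually observed, which is exactly the kind of scrutiny of measurement quality the abstract argues for.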


S3-18 Strand 3

Panel

Assuring evaluation quality standards


S3-18
O 016

Assuring Evaluation Quality Standards in a Networked society


F. Etta 1, E. Kaabunga 2, A. Sibanda 3
1 African Evaluation Association, Lagos, Nigeria
2 African Gender & Development Evaluators Network, Nairobi, Kenya
3 African Gender & Development Evaluators Network, Harare, Zimbabwe

Wednesday, 3 October, 2012

9:30 – 11:00

There is no doubt that evaluation, especially of international development, is continuing to attract global attention, and many efforts are underway to improve evaluation quality. Most professional evaluation networks and international development organisations have developed evaluation guidelines, norms and/or standards with the purpose of enhancing the quality of the evaluation process and its products. Three reasons inform this panel discussion. First, the quality of evaluations and of evaluation practice, especially in Africa, is receiving increasing attention as the global debate rages about development outcomes and impacts of development aid. This debate has been couched primarily as one of methods, following the powerful emergence of the Impact Evaluation movement in the past 6–7 years with the resurgence of Randomized Control Trials (RCTs) and experimental designs as the gold standard/method for impact evaluation. Advocates of other evaluation methods insist that evaluating impact is a legitimate evaluation endeavour and is not the preserve of any one particular method. What the debate has spawned, among other things, is new attention to quality assurance and control in evaluation. The second reason for this panel emanates from the role of the proposer in professional evaluation associations. As a member of four African evaluation associations, two national and two continental, I am curious to understand and see how other evaluators, managers, as well as commissioners of evaluations use evaluation standards and guidelines to improve evaluation quality. The third and final reason, concerned with the technological nature of contemporary society, is related to my attempt as a practitioner to find ways that this learning can be massified, maximised and/or quickened by information and communication technologies, which have changed the way that contemporary society works. It is our belief that reflecting on practice is something we do not do very often, and yet it is critical for learning. The Spring 2011 volume of New Directions for Evaluation (#129), published by the American Evaluation Association & Jossey-Bass, is devoted to lessons and reflections from evaluations. Michael Morris (2011), while acknowledging that major developments have taken place in the domain of evaluation ethics, observes a serious shortage of rigorous, systematic evidence that can guide evaluation or that evaluators can use for self-reflection or for improving their next evaluation (Morris, American Journal of Evaluation, 32(1), 134–151). The point is made that whereas there is little or scanty research on ethics (Henry & Mark, 2003), guidelines, principles, or standards are often generously heaped on evaluators. The American Journal of Evaluation carries in each volume the Guiding Principles for Evaluators. In 2007 the African Evaluation Association (AfrEA) approved the African Evaluation Guidelines, adapted from the AEA's Programme Evaluation Standards. The United Nations Evaluation Group (UNEG) approved Norms and Standards for evaluations within the UN system; the OECD DAC published the Quality Standards for Development Evaluation (2010). Each panel member will consider these questions: How was quality assured/maintained in your last evaluation? What effect did using standards and guidelines have on quality assurance? Was your capacity improved for the next evaluation? How? Keywords: Quality; Standards; Guidelines; Ethics; Practice;


S4-03 Strand 4

Paper session

Evaluating climate change and energy efficiency


S4-03
O 017

A Recipe for Success? Randomized Free Distribution of Improved Cooking Stoves in Senegal
J. Peters 1, G. Bensch 1
1

RWI, Essen, Germany

Wednesday, 3 October, 2012

9:30 – 11:00

Today more than a third of the world population relies on biomass as its primary cooking fuel, with profound implications for people's well-being: wood provision is often time-consuming and the emitted smoke has severe health effects, both burdens that afflict women in particular. In addition, in arid regions, firewood extraction contributes to deforestation. The dissemination of improved cooking stoves (ICS) is frequently considered an effective remedy for these problems. It has recently gained momentum through the launch of the Global Alliance for Clean Cookstoves, whose objective is to make 100 million households adopt clean cookstoves by 2020. While still being wood-based and typically locally produced, ICS reduce woodfuel consumption substantially through higher combustion and heat transfer efficiency. Despite this seeming superiority, the ICS technology does not pave its own way into African households. Against this background, in this paper we evaluate the impacts and take-up behaviour of improved stove usage through a randomized controlled trial among 250 households in rural Senegal. The evaluation was commissioned by the Independent Evaluation Unit of Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ). In addition to the random treatment, the virtue of our collected data is that it contains detailed information on cooking behaviour and fuel usage on a per-dish basis. This allows us to accurately estimate firewood savings, since we are able to account for both household-specific characteristics and dish-specific cooking patterns such as the number of persons the meal is cooked for and the type of dish. To collect this quantitative information, we used structured questionnaires that covered virtually all socio-economic dimensions that characterize the households' living conditions. This data is complemented by qualitative information from semi-structured interviews and focus group discussions with selected key informants such as women's groups, stove and charcoal producers, and village chiefs. Concerning the take-up behaviour, we find that the intention to treat is successful, since virtually all households who won an improved stove also use it. This is an intriguing finding, considering that it was doubted by many stakeholders in the preparation phase of the study, including experts from the cooperating stove dissemination project implemented by GIZ and the authors of this study: households that did not voluntarily decide to obtain an ICS were not expected to fully use the ICS. Total firewood consumption declines significantly, by around 30 percent. Furthermore, we observe a significant decrease in cooking duration and respiratory disease symptoms. It should also be mentioned that the impact patterns match quite well the qualitative perceptions of female members of ICS-owning households captured during complementary focus group discussions. These findings substantiate the increasing efforts of the international community to improve access to improved cooking stoves and call for a more direct promotion of these stoves. Keywords: Impact evaluation; Randomized controlled trial;
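[Editor's illustration, not the authors' code or data.] The kind of estimation the abstract describes – an intention-to-treat effect on per-dish firewood use, adjusting for dish- and household-level characteristics – could be sketched as an OLS regression on simulated data; all variable names below are hypothetical.

```python
# Illustrative intention-to-treat estimate on hypothetical per-dish data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),                   # randomly assigned improved stove
    "persons_cooked_for": rng.integers(2, 12, n),
    "dish_type": rng.choice(["rice", "millet", "sauce"], n),
})
# Simulated outcome: roughly 30 % lower firewood use per dish for treated households.
base = 2.0 + 0.15 * df["persons_cooked_for"]
df["firewood_kg"] = base * np.where(df["treated"] == 1, 0.7, 1.0) + rng.normal(0, 0.3, n)

# OLS with controls for dish size and dish type, mirroring the per-dish design.
model = smf.ols("firewood_kg ~ treated + persons_cooked_for + C(dish_type)", data=df).fit()
print(model.params["treated"])   # average effect of assignment, in kg of wood per dish
```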

O 018

Evaluating climate change adaptation and mitigation efforts in Kenya: Challenges, opportunities and way forward
V. Simwa 1
1

Monitoring and Evaluation Directorate(Government of Kenya), Communication and Advocacy, Nairobi Kenya, Kenya

Evaluation is globally being adopted as a critical component in the implementation of policies, programmes and projects. The inclusion of monitoring and evaluation (M&E) in the planning process and as a management tool ensures that policy makers and project managers make decisions from a knowledgeable point of view. In Kenya, monitoring and evaluation as an instrument of knowledge management was officially embraced by the government in 2003, at the election of the National Rainbow Coalition (NARC) government into power, taking over from the KANU party which had been in power for twenty-four years. This was done by way of introducing a National Integrated Monitoring and Evaluation System (NIMES), which aimed to track the implementation of public sector programmes within the Public Sector Reforms. Since 2005, the government of Kenya has been able to provide yearly progress reports on the implementation of the National Development Plans. The period 2003–2007 found Kenya implementing the Economic Recovery Strategy for Wealth and Employment Creation (ERSWEC), which gave way to the Kenya Vision 2030, which aims at enabling the country to achieve middle-income levels by the end of the plan period. To ensure successful implementation of development programmes, the government is actively supporting the implementation of the NIMES, which provides a platform for integrating monitoring and evaluation components from various sectors of the economy. Core to the efforts of tracking progress in the implementation of policies and programmes is the challenge of undertaking evaluation that would provide evidence for decision making. Kenya is amongst those countries that are grappling with addressing climate mitigation and adaptation. Presently, the Ministry of Environment and Mineral Resources has embarked on an action plan making process for the National Framework for Climate Change Knowledge Management and Capacity Development. This framework is expected to provide: a Long-term National Low Carbon Development Pathway; an Enabling Policy and Regulatory Framework; a National Adaptation Plan; Nationally Appropriate Mitigation Actions (NAMAs); a National Technology Action Plan; National Performance and Benefit Measurement; and Knowledge Management and Capacity Development. This is a process that is bound to provide an opportunity for Kenyan citizens to actively engage in climate mitigation and adaptation issues.


This paper will attempt to examine how evaluation can be used to provide evidence for climate adaptation and mitigation. It will examine the rights and responsibilities invested in governments as embodied in the various instruments, especially in the case of Kenya, how monitoring and evaluation can be used as a tool for assisting the promotion of transparency and accountability, and the opportunity provided by the Kenya 2010 Constitution. I work as a Senior Public Information and Communication Officer for the Monitoring and Evaluation Directorate in Kenya.


Keywords: Climate change; Adaptation; Mitigation; Challenges; Opportunities;

O 019

How did the Bank respond to the EE challenge in the context of a reinforced EU EE policy?
M. Pfeffer 1
1

European Investment Bank, Operations Evaluation, Luxembourg, Luxembourg


The paper will present the structuring and findings of the currently ongoing evaluation of the EIB's Energy Efficiency (EE) Financing in the EU from 2000 to 2011. The EIB is the policy bank of the European Union (EU); its mission is to support EU policies (www.eib.org). A number of background trends gave rise to a reinforced importance of EE on both the EU policy and the EIB's lending agenda. These include (1) the recognition of global warming as a threat to humanity and the identification of EE as a means to reduce the emission of greenhouse gases; (2) steeply rising energy prices, in particular from 2004 onwards, with EE considered as a means to redress that trend and reduce energy dependency; and (3) the financial and economic crisis starting in 2008, which further increased the interest in EE for a variety of reasons: (i) the desire to foster the competitiveness of European economies through reduced energy cost, (ii) the interest in promoting European industries specialised in the production of EE-related services, products and technologies, and (iii) the opportunity to cash in rapid returns on investments, which according to a range of studies are believed to be associated with EE investments. EU policy regarding EE was reinforced in particular from 2005 onwards, starting with the 2005 Green Paper on Energy Efficiency or Doing More with Less, which in 2006 led to the Action Plan for Energy Efficiency. This first EU EE Action Plan establishes the objective (although not a binding requirement) of achieving within the EU a 20 % saving in primary energy consumption by 2020. In 2009, EE was included as an aim to be promoted by EU policy in the Treaty on the Functioning of the European Union. In March 2011, the European Commission adopted the Energy Efficiency Plan 2011. Whilst conceding that the EU is currently on course to reach only half of its EE target, the plan intends to get back on track to indeed achieve 20 % savings of the EU's primary energy consumption by 2020. Further, in that regard, a new EU EE Directive is currently under controversial discussion. The Bank responded to the reinforcement of the EU's EE policy by amending its eligibility criteria, including EE as a priority area in its Corporate Operational Plan for 2007–09, refining its EE lending policy and lending products, as well as through cooperation with the EU under joint actions such as ELENA. The evaluation will explore in further detail how the Bank responded to the reinforcement of the EU's EE policy. This is being done through the in-depth evaluation of more than 20 projects financed by the EIB during the last decade as well as through complementary sector analysis and survey work. The paper discusses the structuring and results of this evaluation, the particular challenge of which was that it was carried out in a highly dynamic field of EU policy and under the uncertainties of the ongoing financial crisis. Increasingly, EE is also considered as a potential remedy to rising energy affordability issues amongst consumers. Keywords: Energy efficiency; Energy policy; Energy security; Climate change; Global warming;

O 020

Building Monitoring and Evaluation Systems for Climate Change Adaptation Projects: Challenges and Strategies towards Stakeholders Involvement.
P. Bell 1
1

Mercy Corps, Monitoring & Evaluation, Jakarta, Indonesia

The ACCCRN program aimed to assist city governments and communities in Asia to cope with climate change challenges, with the ultimate goal of catalyzing attention, funding, and action around climate change necessary to build the resilience of poor and vulnerable Indonesian urban communities. In Indonesia, ACCCRN is conducted in two cities, Semarang and Bandar Lampung. Both cities are in Phase III, where they implement climate change adaptation projects. To support these adaptation projects, to focus on the goals, and to keep achievements on track, a monitoring and evaluation system is being implemented for the following adaptation projects, namely Groundwater Conservation through Application of Biopore Infiltration Hole Technology for Climate Change Adaptation, Strengthening and Empowering Teachers and Student Capacities in Urban Climate Change Resilience for Bandar Lampung city, and Flood Forecasting and Warning System as Climate Change Adaptation Measures through Flood Risk for Semarang city. The implementation of these projects involves many stakeholders, including government, local NGOs and universities. Stakeholder involvement has its own challenges due to different levels of understanding of the needs, the terms used, and the approaches and methods for monitoring and evaluation, as well as clashes of interest among these stakeholders and knowledge limitations on Climate Change Adaptation, which add to the difficulties in building Monitoring and Evaluation systems for all projects. To cope with these challenges, as well as to get stakeholders' support and commitment for conducting monitoring and evaluation for climate adaptation projects, a range of strategies has been developed and implemented with expert consultations and discussions. It started with socialization on the importance of Monitoring and Evaluation and capacity building on Climate Change Adaptation. Using past experience and consultation with experts, Monitoring and Evaluation systems were designed.



Related to the challenges mentioned above, the idea behind the multi-stakeholder Monitoring and Evaluation systems is to keep them simple but specific, measurable, achievable, relevant, time-bound, effective, and open enough to accommodate different scopes. Stakeholder feedback and input is collected to revise them, and the revised version is then (re-)introduced to the project implementers. To guarantee their commitment, the Monitoring and Evaluation team consists of people from all the different stakeholders involved in the project, and the draft Monitoring and Evaluation tools become part of the projects' contract requirements. The reporting mechanism, roles and responsibilities should also be agreed by the stakeholders involved. Outside Technical Assistance is provided to assist. By having a common understanding of and agreement on the specific Monitoring and Evaluation systems applied for adaptation projects, collaboration becomes smoother and coordination among stakeholders is encouraged, which inevitably leads to increased success of the adaptation project implementation. Keywords: Asian Cities Climate Change Resilience Network (ACCCRN); Stakeholders; Participatory processes; Stakeholder involvement; Climate change;



S5-21 Strand 5

Panel

Does Performance Management Have a Future? Issues and Challenges
S5-21


O 021

Does Performance Management Have a Future? Issues and Challenges


J. S. Bayley 1, J. Owen 2, R. Cummings 3, N. Stame 4

Wednesday, 3 October, 2012

9:30 – 11:00

1 Manager, Performance Analysis and Compliance, Dept of Human Services, Melbourne, Australia
2 Principal Fellow, Centre for Program Evaluation, The University of Melbourne, Melbourne, Australia
3 Professor, Educational Development and Evaluation, Murdoch University, Perth, Australia
4 Università di Roma, La Sapienza, Italy

Over the last 20 years there has been a growing commitment by governments and organisations to the use of performance management. In most jurisdictions performance management appears to have two distinct purposes. The first is to meet the accountability needs of organizations within a hierarchical system. This can be thought of as conforming to a results-for-accountability perspective. Here the emphasis is on the use of performance management as a tool for organizational compliance and control. The second is based on an assumption that the collection and use of relevant information can materially affect the quality of decision making within an organization. This can be thought of as a results-for-management perspective. Here the emphasis is on the use of performance management as a tool to support organizational learning and continuous improvement. However, various reviews of the application of both perspectives have led to much disenchantment. This relates to issues such as the lack of evaluative expertise, which affected the quality of evidence collected, and the meaningful use of information assembled from this evidence. At the same time, we are aware of systems and organisations that have benefitted from the judicious use of performance management frameworks. We argue that, to survive, performance management needs to be understood in terms of its underlying evaluative principles. Individual evaluators and others responsible for performance management regimes must have a clearer understanding of the strengths and weaknesses of performance management as a tool to support program accountability and improvement. In addition, as new technologies continue to increase the quantity, speed and accessibility of performance data, organizations will need to have processes in place to better manage performance information. These issues will be explored in a series of short presentations designed to encourage critical responses and discussion from the audience. John Owen will deconstruct the concepts of accountability and learning and explore the requirements of successful organizational systems. John Scott Bayley will draw upon his experience in the Australian public sector and with international aid organizations to consider why performance information is not sufficient by itself to drive service improvements. Rick Cummings will critique several current approaches to performance management in the context of key evaluation concepts. Following these presentations audience members will be invited to offer their questions and observations. The panel will be brought to a close with Nicoletta Stame synthesizing the issues and implications that have arisen from audience and panel contributions. Keywords: Performance management; Utilization; Accountability; Organizational learning; Results;


S3-01 Strand 3

Paper session

Gender and Evaluation: Approaches and Practices I


S3-01
O 022

Gender issues in the evaluation of international aid. The experience of official British, Spanish and Swedish cooperation
J. Espinosa 1
1

University of Seville, Seville, Spain

Wednesday, 3 October, 2012

9:30 – 11:00

Gender equality was introduced into international development evaluation two decades ago. Over these years, there have been different experiences in incorporating gender issues into evaluative exercises. In this paper, we analyze the evaluative experience of official British, Spanish and Swedish cooperation during the period 2000–2010 and how they have included gender equality in evaluation. We especially review the incorporation of the gender perspective and gender issues in their Evaluation Units and in their evaluation practice. Firstly, we explain how gender issues have been included in the management procedures and methodologies of the following units: the Evaluation Department (EvD) of DFID, in the case of the United Kingdom; the General Direction for Planning and Evaluation of Development Policies (DGPOLDE) and the Office for Planning and Evaluation (OPE), in the case of Spain; and the Evaluation Department (UTV) of Sida and the Swedish Agency for Development Evaluation (SADEV), in the case of Sweden. Secondly, we discuss whether gender equality has been a central issue in the strategic evaluations of these three donors and what the main features of these strategic exercises focused on gender have been. Based on this study, we present some lessons about how to include gender equality as a key issue in the management of evaluations. In addition, we also highlight some central issues to consider when conducting a gender-sensitive evaluation. Keywords: Gender-sensitive evaluation;

O 023

Feminist Evaluation for Non-Feminists


D. Podems 1
1

Stellenbosch University, CREST, Cape Town, Republic of South Africa

Programs that aim to change the lives of women, the disempowered, and the poorest of the poor are implemented in developed and developing countries all over the world. Attached to these programs are often program evaluations that intend to improve, judge, or create knowledge. An evaluation, according to Robert Stake (2004), is "the pursuit of knowledge about value" (p. 16). Few evaluation approaches assert their values as openly as feminist evaluation, and while every evaluation approach pursues this knowledge laden with its own, often implicit, values, few come under as heavy criticism as feminist evaluation. Not a feminist? This session explores how feminist evaluation can be useful even for non-feminist evaluators by discussing the key challenges that often prevent the use or consideration of feminist evaluation, providing a practical description of feminist evaluation, and then demonstrating effective use of feminist evaluation by a non-feminist. Thus this session describes how an evaluator would implement, draw from, or be guided by feminist evaluation, not how a feminist would implement evaluation. This is an important distinction. Keywords: Feminist; Gender; Evaluation; Developing country;

O 024

Objective setting for a new entrant in the gender equality arena: The case of the European Institute for Gender Equality
P. Irving 1, K. Mantouvalou 1
1

GHK Consulting, European Social Policy, London, United Kingdom

How can a new entrant ensure that its activities address the needs of its stakeholders and add value to the work of existing organisations? Using the example of the Second Ex-Ante Evaluation of the European Institute for Gender Equality (EIGE), we will argue that a participatory approach involving the Institute's management team and its stakeholders, together with a practical evaluation framework, facilitates the transfer of evaluation results into the day-to-day activities of an organisation. The European Institute for Gender Equality came into being as a policy agency of the European Union in December 2006 and gained its administrative independence in June 2010. The idea of setting up a gender equality institute in Europe was introduced in 1995. Since then, several initiatives were taken to examine the need for, and the potential role and activities of, the institute. However, none of the earlier efforts succeeded in establishing the Institute. In view of the delays in both the establishment of EIGE and the launch of its independent operations, the Management Board decided to conduct a second ex-ante evaluation to validate its objectives and activities. From a methodological perspective two principles underpinned the evaluation: a recursive logic model and a participatory approach. The recursive logic model, which evolves as the Institute itself evolves, aimed to place monitoring (and evaluation) at the heart of EIGE's planning processes. The participatory approach was introduced to ensure that EIGE's staff were actively involved in the evaluation process. As EIGE's management team would be responsible for the implementation of the resulting evaluation framework, it was considered essential that they took ownership of the evaluation results to ensure that they embraced monitoring and evaluation as part of their day-to-day activities.


While developing the Institute's aims, objectives and activities, the evaluators faced a number of challenges: gender equality is a complex and multifaceted policy arena in Europe where, at least at first sight, there was not a clear role for a new entrant; even though the Regulation for the establishment of the Institute provided EIGE with a clearly defined role, it was unclear whether this corresponded to the needs of its stakeholders;


most of EIGE's stakeholders recognised the need for its establishment, but their expectations varied significantly and in some cases were contradictory; and since June 2010 EIGE has received numerous ad hoc requests for assistance from its stakeholders, putting additional pressure on its human and financial resources. Even though responding to stakeholder needs is important, the Institute has a finite budget and there was a clear need to prioritise its activities. The presentation will explore how the evaluators addressed these challenges, working closely with the management of the Institute and consulting its stakeholders. The research findings from the Ex-Ante Evaluation will provide the basis for the presentation, which will be updated using a series of interviews with the Institute's management team one year on to demonstrate how the findings have influenced the day-to-day activities of the organisation.


The two authors of this paper will give a joint presentation at the conference. About the presenters: Pat Irving is a Principal Consultant at GHK Consulting with more than 20 years' research and evaluation experience gained within an EU and UK context. Over the years Pat has worked on a wide range of studies across the spectrum of employment policy, social inclusion, lifelong learning and gender equality. Pat led GHK's team on the Second Ex-Ante Evaluation of the EIGE and co-authored the reports. Katerina Mantouvalou is a Senior Consultant at GHK Consulting. She holds a PhD in Political Science from University College London and has experience in delivering research and evaluation studies on gender equality and human rights issues for the European Commission and its agencies. Katerina coordinated the Second Ex-Ante Evaluation of EIGE and co-authored the reports. Keywords: Institutional evaluation; Participation; Recursive logic model;


S4-17 Strand 4

Paper session

Evaluation of humanitarian aid


S4-17
O 026

Real-time Evaluation of humanitarian assistance revisited: Lessons Learned and Way forward
S. Krueger 1, Elias Sagmeister 1
1

goodroot, Berlin, Germany

Wednesday, 3 October, 2012


9:30 – 11:00

The unintended or unsatisfying effects of large-scale humanitarian responses during the genocide in Rwanda in the 1990s, the South-East Asian tsunami or the more recent crises in Haiti and the Horn of Africa have led to requests for more timely learning through evaluation in the humanitarian sector. At the same time the professionalization of evaluation amongst humanitarian organizations has led to an increase in evaluation capacity and a number of new or newly termed methods being applied. One concept has seen particularly widespread dissemination and praise since the beginning of the decade and has recently undergone substantial modifications: Real-Time Evaluations (RTE). It seems clear that for the maximum number of lives to be saved, humanitarian organizations need to be able to learn in a timely manner and improve their complex interventions as they evolve, in real time. Expectations towards real-time evaluations were thus high when the concept made its way into humanitarian policy and practice two decades ago. Today, the term RTE is an integral part of the evaluation policies and guidelines of most humanitarian and development organizations. Its practical application, however, continues to be characterized by unclear concepts, and the use and applicability of Real-Time Evaluations remains unclear. The proposed article provides the first comprehensive review of the current reality of Real-Time Evaluations in humanitarian organizations, drawing on two decades of experience with the method. It analyzes how this method has evolved in practice, how it is understood by experts, and how it is used and misused by practitioners and evaluation managers in humanitarian organizations. The analysis includes the timing of Real-Time Evaluations in project cycles, the extent of inclusion of beneficiary perspectives and participation, the composition of evaluation teams, as well as the stated objectives and scope of RTEs. Based on a sample of approximately 80 evaluation reports, 15 expert interviews and the practical experience of the researchers from evaluations of humanitarian assistance in Yemen, Somalia and Haiti, the article analyzes the limitations and potential benefits of the method to assess where its practical value lies and to propose what its future could look like for the evaluation of humanitarian action. The article draws from literature on organizational learning, policy science and utilization of evaluation results to suggest ways of improving the method and its application for humanitarian actors. Recommendations pertain to defining the right context for RTEs, establishing a feasible scope, assembling the right set of actors and ensuring identification of key lessons from past Real-Time Evaluations. Appropriate triggering of Real-Time Evaluations in organizations, feasible organizational settings and follow-up procedures are analyzed to optimize utilization and uptake of evaluation findings while being realistic about their potential to induce change. Finally, the article makes the case for a de-mystification of the real-time approach and a less formalized take on real-time evaluations in humanitarian practice, to improve evolving operations, combine them with organizational development and thus save lives more effectively. Keywords: Real time evaluation; Humanitarian; User led evaluation; Real time;

O 027

Learning in real time: Inter-Agency Real-Time Evaluation of the Humanitarian Response to Pakistan's 2010 flood crisis
R. Polastro 1
1

Fundacion DARA Internacional, Madrid, Spain

Riccardo Polastro is Head of Evaluation at DARA. He has 19 years of experience in humanitarian affairs and development aid, having worked in more than sixty countries. He has carried out single evaluations funded by Danida, DFID, DG ECHO, the EC, IASC, ICRC, Norad, OCHA, UNHCR, UNICEF, UNDP, SDC, SIDA and other organizations. Objective: To share existing good practice in real-time evaluation and contribute to enhancing users' and evaluators' capabilities by presenting the mixed methods used as well as the results of the evaluation. Evaluation topic: A real-time evaluation is a participatory evaluation that provides immediate feedback during fieldwork. Through its instant input to an ongoing operation it can foster policy and organisational change to increase the effectiveness and efficiency of the overall response. It contributes to improved learning and accountability within the humanitarian system, bridging the gap between conventional monitoring and evaluation, influencing policy and operational decision making in a timely fashion, and identifying and proposing solutions to operational and organisational problems in the midst of major humanitarian responses. Context: The 2010 floods in Pakistan affected 78 out of 122 districts and one-tenth of the country's nearly 200 million population, and at one point one-fifth of the country was submerged by flood waters. In response to this disaster, the international community implemented what would become the largest emergency operation ever staged by the humanitarian community.


Timing: The evaluation was commissioned by the Inter-Agency Standing Committee (IASC) and undertaken by a team of four evaluators between January and March 2011. Aim of the evaluation: The aim of the evaluation was to provide a snapshot of the current situation in Pakistan, and participatory feedback to those managing and executing the response, with the specific goals of 1) assessing the implementation of the humanitarian response to date and 2) providing real-time feedback to the Humanitarian Country Team, with the aim of influencing ongoing operational planning, including corrective action where necessary. The evaluation was designed to be participatory, incorporating the insights of a wide range of key stakeholders as well as beneficiaries, and to generate utilization-focused findings and recommendations in order to allow for the maximum impact on the operation of the response. Methodology: The evaluation followed a deductive analysis based on a mixed methods approach for data collection. The evaluation team visited Pakistan twice.


The first trip comprised an extended field visit to three of the worst affected Provinces, with semi-structured individual interviews and group interviews with 686 people from the affected population and some 1,107 key stakeholders including representatives from UN agencies, the Red Cross Movement, international and national non-governmental organizations (INGOs), federal and local government, the military and donors. The second trip was undertaken to facilitate four workshops (three provincial and one national) with key stakeholders. Link to the evaluation: http://oneresponse.info/Coordination/IARTE/Lists/Announcements/Attachments/16/IA%20RTE_Pakistan%20Floods_Final%20Report.pdf Keywords: Real-Time Evaluation; Joint/System wide evaluation; Humanitarian Aid; Deductive analysis; Pakistan;

O 028

Learning from complex emergencies: the IASC Evaluation of the Humanitarian Response in South Central Somalia 2005–2010
R. Polastro 1
1

Fundacion DARA Internacional, Madrid, Spain

Riccardo Polastro is Head of Evaluation at DARA. He has 19 years of experience in humanitarian affairs and development aid, having worked in more than sixty countries. He has carried out single evaluations funded by Danida, DFID, DG ECHO, the EC, IASC, ICRC, Norad, OCHA, UNHCR, UNICEF, UNDP, SDC, SIDA and other organizations. Objective: To share existing good practice in humanitarian evaluation and contribute to enhancing users' and evaluators' capabilities by presenting the methods and results of what has been considered the most comprehensive evaluation of aid in Somalia. Aim of the evaluation: Over the last few years the humanitarian environment in Somalia has increasingly deteriorated and concern has been raised about issues of accountability and quality of assistance to Somalia. The extent of the crises, the challenges of delivering assistance and the limited system for monitoring and feedback have further emphasised the need to review the quality, impact and accountability of the assistance delivered. The Inter-Agency Standing Committee (IASC) for Somalia has therefore initiated an inter-agency evaluation of the collective response in South Central Somalia, to identify best practices and lessons learned from the response to date with the aim of improving continuing and future humanitarian assistance. Objective: The evaluation informed both strategic discussions within the IASC and between the IASC and the donors on the wider humanitarian response and future strategy for aid delivery in Somalia, as well as provided concrete operational input and guidance to Clusters and individual agencies for their future programming. Context: The humanitarian response was set against the backdrop of a very complex environment, as Somalia experienced one of the world's most protracted emergencies. Limited access and security have hindered the response. Nevertheless, in the period under review, the overall response was successful in key areas: food distributions, health, nutrition, water and sanitation. There were a number of innovative features in the response, especially around remote management. Timing: The evaluation was undertaken between March and November 2011; it was commissioned by the Inter-Agency Standing Committee and funded by four bilateral donors: Danida, DFID, SDC and SIDA. Methodology: Evaluating humanitarian responses provided in uncertain, turbulent, fluid and insecure environments presents challenges beyond those encountered under more stable conditions. This is mainly due to issues of access and security, and the frequent absence of standardised monitoring and comparable datasets on the response to the affected population. To tackle this, the team used various data collection methods through an inclusive and participatory process, attempting to get as many stakeholders as possible involved in the evaluation. In total, the evaluation team gathered more than 3,117 pieces of information. This information provided the analytical basis from which to draw conclusions and recommendations. To the extent possible, the evaluators triangulated data and drew on multiple sources to ensure that findings could be generalised and hence were not the views of a single agency or a single type of actor. Link to the evaluation: http://www.oecd.org/dataoecd/9/29/49335639.pdf Keywords: Impact evaluation; Joint multi-donor/system wide evaluation; Humanitarian Aid; Deductive analysis; South Central Somalia;

O 029

Evaluation of Monitoring and Evaluation Wing of Earthquake Reconstruction and Rehabilitation Authority: Achievements and Challenges
T. Murredi 1, G. Mustafa 1
1

Earthquake Reconstruction and Rehabilitation Authority, Monitoring and Evaluation Wing, Islamabad, Pakistan


The paper attempts an evaluation of the M&E Wing of the Earthquake Reconstruction and Rehabilitation Authority (ERRA), Prime Minister's Secretariat (Public), Islamabad, Pakistan. After the earthquake of October 2005, ERRA was created by the Government of Pakistan for the reconstruction and rehabilitation of the earthquake-affected areas. ERRA took up the job of reconstruction and rehabilitation with its implementing arms, the Provincial Earthquake Reconstruction and Rehabilitation Agency (PERRA) in the Northwest Frontier Province (now Khyber Pakhtoonkhwa) and the State Reconstruction and Rehabilitation Agency (SERRA) in Azad Jammu & Kashmir (AJK). It was a common perception that the destruction caused by the earthquake was so huge because of, inter alia, the inferior quality of the buildings. Thus, monitoring and evaluation was at the heart of ERRA's core functions, and ERRA established an elaborate system of M&E simultaneously with the planning of the work of reconstruction and rehabilitation. DfID provided Technical Assistance for the establishment of ERRA's M&E system. The M&E Wing has been headed by a Director General who reports directly to the CEO of ERRA; thus, he enjoys an independent status as far as the functions are concerned. Budgetary independence of the M&E Wing has also been ensured by ERRA by providing it with a separate budget line. At the ERRA Headquarters, there is a Deputy Director General (DDG) and a Director. Under them are a Social Monitoring Cell (SMC) and a Technical Monitoring Cell (TMC). The SMC consists of an Evaluation Section and a Data Management Section, while the TMC comprises three sections: the Construction Monitoring Section, the Bids Evaluation Section and the Contractor Facilitation Centre. In the field, there are two Zonal Directors: one for Khyber Pakhtoonkhwa and the other for AJK. There have been 12 sectors in which ERRA has been doing the job of reconstruction and rehabilitation. These can be grouped into four categories: i) direct outreach to households and individuals, ii) social services, iii) public infrastructure, and iv) cross-cutting programs. Direct outreach programs include rural housing, social protection and livelihoods; social services consist of education, health, and water and sanitation; public infrastructure comprises transport, power, telecommunication and governance; and cross-cutting programs are disaster risk reduction, environmental safeguards and gender equality. Monitoring and evaluation of all these programs is in the mandate of the M&E Wing. The M&E Wing has been endeavoring to perform well. However, given the enormity of the job and the geographic expanse of the facilities under implementation, the available resources did not permit M&E staff to visit each and every facility. Furthermore, in the presence of a full-time consulting firm for detailed supervision, it would be a wastage of resources. Hence, the M&E Wing resorted to sample checks. The sample is drawn randomly and the staff are asked to visit the facilities in accordance with a given program. Daily reports reach the zonal offices and are transmitted to ERRA HQ on a monthly basis. Here, these are analyzed and a summary is put up to the CEO. Corrective measures are suggested to all the concerned authorities. Keywords: Monitoring; Evaluation; Social services; Earthquake; Reconstruction and rehabilitation;


S3-33 Strand 3

Panel

Equity and Ethics


S3-33
O 231

Equity and Ethics


S. D. Tamondong 1
1

AFDB c/o P. Giraud OIVP, Tunis de Belvedere, Tunisia

Wednesday, 3 October, 2012

11:15 – 12:45

This Panel is composed of the Chair, Susan D. Tamondong, who is also a presenter; two co-presenters, Mohamed Manai (Manager, Operations Evaluation Department, African Development Bank) and Margareta De Goys (Director, Evaluation Office, UNIDO); two Discussants, Dr. Ray Rist (President, IDEAS) and Dr. Inga-Lill Aronsson (Professor, Uppsala University, Sweden); and a Rapporteur, Michele Tarsilla (Western Michigan University), who will summarize the panel session discussions. The Chair is Ms. Tamondong (Vice-President, IDEAS), who will open the session, introduce the Panel, and begin the session with a PowerPoint presentation on Why Evaluate and For Whom? The need for evaluation will be presented with a rationale for doing so: the need to demonstrate the outcomes and impact of institutions, government and private sector programs and projects, and their corresponding accountabilities. Interest among public international organizations is mainly in accountability and results, while interest among private companies is mainly in identifying good markets and creating a good image for their business. The demand for professional evaluation has increased primarily due to donors' decreasing resources for development assistance and, thus, the need to highlight their achievements. Attention has also been paid to knowing the impact among the poor and vulnerable, such as women, in order to assess the effectiveness of development interventions. Given the high demand for evaluation, there is a tendency to forget quality and ethical considerations in evaluation. Thus, the need for evaluation ethics has become even more necessary than before. Questions will be raised: Why do we really evaluate and for whom is it? Can we practice ethical ways of evaluating despite conflicts of interest? Should the public welfare prevail? An example of a failed development project in the Philippines will be used to demonstrate when an evaluator is faced with the choice between exposing uncompromising evaluation results and compliance to a higher authority and silence. As E.R. House pointed out, politics can undermine the integrity of evaluation. The five forms of evaluation corruptibility according to authors Worthen, Fitzpatrick and Sanders will be cited in relation to the case example. In addition, the guiding principles developed by the AEA in 1995, which contain many elements found in various sets of common ethical guidelines developed around the world, will be cited. Questions raised could be answered during the discussion. After the first presentation, two others will follow. The second will be by Mohamed Manai from the AfDB, who will talk about the Code of Ethics and Protection of the Evaluator's Independence in a Multilateral Development Institution. He will discuss the impartiality and independence of evaluators in the bank, in order to reduce the potential for conflict of interest and ensure that the ability to provide credible reports and advice is not compromised. His presentation will discuss how the four dimensions of evaluation independence are internalized in the institution and their interrelationship with the Code of Ethics, and how institutional settings can enhance the evaluators' ability to provide credible and uncompromised reports and advice to the bank's governing bodies.
The third presenter is Margareta De Goys (UNIDO), who will discuss the main principles of the UNEG Ethical Guidelines, the obligations of evaluators, and obligations towards participants in evaluation, such as confidentiality, respect for dignity and diversity, rights, and avoidance of harm, as well as experiences in applying them. The discussants will offer brief comments after the presentations, followed by a discussion encouraging lively audience participation. The panel session will end with a summary of the discussions by Michele Tarsilla (Western Michigan University), after which the Chair will close the session. Keywords: Impact; Women; Community; Ethical; Development assistance; Accountability;


S3-23 Strand 3

Panel

Building evaluation capacity through university programmes: where are the evaluators of the future?
O 105

Where are the evaluators of the future? Building evaluation capacity through university programmes in a changing world
Wednesday, 3 October, 2012
11:15 – 12:45
G. Parry-Crooke 1, J. Toulemonde 1
1

London Metropolitan University, Centre for Social and Evaluation Research, London, United Kingdom

Across Europe, post-graduate programmes in evaluation began to emerge at the end of the 1990s to meet the growing demand for teaching and training in a world where evaluation was expanding across countries and sectors. Academic courses tended to communicate the essentials of theory and practice which underpin the core activities of evaluators, leading to pre-determined learning outcomes. Evaluation study programmes often aimed for more, setting out to locate evaluation in broader-than-classroom contexts and to contribute to building usable evaluation capacity in the real world. By 2010, there were 15 Masters-level courses spread across eight countries and a plethora of single modules (stand-alone or integral to other programmes) or short credit-bearing units dedicated to evaluation policy and programme development and to research methods for evaluation. Evaluation continues to grow as concern increases about measuring and understanding change as well as demonstrating the cost-effectiveness of projects and programmes, particularly as funds have become scarcer and demands for transparency continue. But now, in 2012, what is happening to the education and training of would-be and existing evaluators, programme managers and commissioners of programmes and evaluations? Despite the continuing need for evaluation education, here too resources in some areas have diminished, and post-graduate education via universities is changing for at least two principal reasons. First, in the same way as all sectors, universities are digesting the impact of the financial difficulties facing most European countries, reduced subsidies for education and a reduction in employer sponsorship of individual students to pay for their studies. Second, distance and blended learning models are increasing in popularity. Not only may they be cost-effective; for many students of evaluation they offer ways of attending a course, ranging from a single session to a week-long course to a full Masters in evaluation, while remaining primarily in their own context and, for many, continuing in their practitioner roles. A panel of course providers/practitioners will debate continuing and new questions which now face evaluation education and training. For some years, the focus has been on the content of the curriculum. While this still remains at the core, in the current context the panel, with their audience, will address a number of key contemporary questions, including: 1. Who are the current audiences for evaluation education and training? 2. How best can evaluation education and training be provided? 3. Working in virtual worlds, what ways can be identified to enable cross-boundary teaching and training through distance and blended learning and credit transfer? Reference: Beywl, W. (2010) Overview of European University-Based Study Programmes in Evaluation. Centre for Continuing Education, University of Bern. www.kwb.unibe.ch. From the European Collaboration on University Study Programmes in Evaluation (USPE) network, which aims to strengthen cooperation between higher education institutions that offer study programmes in evaluation. Keywords: Evaluation education and training; Distance learning;


S5-22 Strand 5

Panel

Identifying and assessing capacity development outcomes: perspectives from the EU, UN, OSCE and the Council of Europe
O 032

Wednesday, 3 October, 2012

11:15 – 12:45

Identifying and assessing capacity development outcomes: perspectives from the EU, UN, OSCE and the Council of Europe
A. Becquart 1, S. Brander 2, J. Uitto 3, A. Costandache 4, T. Fiorilli 5
1 Council of Europe, Directorate of Internal Oversight, Strasbourg, France
2 OSCE, Vienna, Austria
3 UNDP, New York, USA
4 EuropeAid, Evaluation Unit, Brussels, Belgium
5 Council of Europe, Strasbourg, France

In the post-Busan era, capacity development (CD) is expected to play an even more important role in aid delivery. As the understanding of CD mechanisms grows, CD interventions progressively shift their focus from training and human resources development to paying more attention to system-wide changes and reforms. How has this shift of focus affected the way CD evaluations are conducted? The panel will be chaired by Ms Aygen Becquart (Head of Evaluation Division, Council of Europe). The following subtopics will be addressed by the panellists. Can we use CD outputs as a proxy for CD outcomes? The difficulties evaluators face consist mainly in the definition and identification of CD outcomes and the relationship (or lack of one) between CD outputs and CD outcomes. What are the boundaries for CD programmes? What are their assumptions? In which contexts are they realistic? Successful CD is seen more and more as depending on the political, legislative and budgetary framework as well as the genuine commitment of beneficiaries. These preconditions are generally beyond the programme manager's control. How should they be considered when evaluating? Challenges related to developing and following up on evaluation recommendations and/or findings: CD interventions are not effective without the active participation of the beneficiaries. Ensuring the implementation of recommendations addressed to entities outside the organisation that is managing the evaluation is more challenging, since it is more difficult to ensure follow-up. Panellists: Sonya Brander, Deputy Director, Office of Internal Oversight and Head of Evaluation and Management Audit Services, OSCE. As former Senior Legal Adviser to the OSCE, Mrs Brander's legal experience was useful in informing the strategy for establishing evaluation in the OSCE, taking into account political realities and legal limits. Mrs Brander will present the OSCE's evaluations of police training activities and legislative strengthening, where the OSCE's ongoing experience in building capacity has been challenging and has led to success. Tobia Fiorilli, Senior Evaluator, Directorate of Internal Oversight, Council of Europe. Mr. Fiorilli previously worked in the Bureau of Strategic Planning of UNESCO, in charge of RBM and country programming. He will discuss the challenges of evaluating CD programmes in the domain of Human Rights and the Rule of Law (good governance/anti-corruption, Human Rights training for judges, prosecutors, lawyers and law enforcement officials). Juha I. Uitto, Deputy Director, Evaluation Office, UNDP. Over the past 25 years, he has worked on development programs in international organizations, academia and consulting. Prior to UNDP, he held evaluation positions in the GEF and IFAD. Dr. Uitto emphasizes capacity development as support to national institutions to enhance their sustained abilities. He will draw upon evaluations of UNDP contributions to national capacities. A key ingredient must be understanding national perspectives on how capacities are identified and developed.


S3-32 Strand 3

Panel

Equality and Equity: Improving the Evaluation of Social Programmes


O 033

Equality and Equity: Improving the Evaluation of Social Programmes


B. Sanz 1, K. Hay 2, M. Bustelo 3, S. Batliwala 4, E. Rotondo 5, D. M. Abdelhamid 6, S. Reddy 7, M. Segone 8

Wednesday, 3 October, 2012

11:15 – 12:45

1 Chief of Evaluation Office, UN Women, and Chair of the United Nations Evaluation Group (UNEG)
2 Senior Specialist, Evaluation, International Development Research Centre, Regional Office for South Asia and China
3 President of EES and Associate Professor at Complutense University, Madrid (UCM)
4 AWID
5 PREVAL
6 MENA Regional Network for Development Evaluation
7 UN Women
8 Evaluation Adviser, UNICEF

Rationale: There has been a rising call for the integration of equality and equity dimensions in the evaluation of social development programmes. However, they remain missing dimensions in the majority of evaluations of social development programmes. This is partly due to the different understandings of the definition of the two concepts and how they are inter-related, as well as a lack of understanding of how their integration is essential for ensuring quality and credibility. Objective: To explore different understandings of equality and equity and their inter-connections, and to improve understanding of why their integration in the evaluation of social development programmes is essential for ensuring quality and credibility, specifically by looking at gender, human rights and social justice issues. Narrative & Justification: Social development programmes aim to transform societies by changing existing social structures that are discriminatory. They should have as their overall goal the improvement of equality and equity within a society. Yet there are differing understandings of the definitions of equality and equity and how the two are interconnected, and they are often missing dimensions in evaluations of social programmes. Such evaluations do not adequately assess how equality and equity are affected, nor do they consider carefully how the evaluation itself may affect these issues. The reasons for this are multi-fold and go beyond the issue of definitions, e.g. lack of prioritization, lack of knowledge or understanding, challenges in evaluating these often long-term change processes, the need to adapt and/or develop new approaches, etc. This panel/roundtable proposes to provide a sound basis for evaluators and evaluation commissioners to engage in the discourse on equality and equity and to understand the importance of taking up these issues when evaluating social programmes to ensure the quality and credibility of their evaluations. It will also aim to share information on relevant approaches and methods, how they could be applied, and new and emerging issues for further consideration and exploration. By providing a space for discussion on the evaluation of equality and equity in social programmes, the panel/roundtable will advance dialogue on issues such as gender equality, human rights and social justice. Chair: Belen Sanz (UN Women) Panelists: Srilatha Batliwala (AWID) Emma Rotondo (PREVAL) Doha Mounir Abdelhamid (MENA Regional Network for Development Evaluation) Shravanti Reddy (UN Women) Discussants: Maria Bustelo (EES) Katherine Hay (IDRC)


S3-10 Strand 3

Paper session

Evaluation use and useability I


S3-10
O 034

Evaluation use: a revised reading of the role of the evaluator, the model and the context
A. Brousselle 1, D. Contandriopoulos 2
1 University of Sherbrooke, Community health sciences, Longueuil (Québec), Canada
2 University of Montreal, Faculty of nursing, Montreal (Québec), Canada

Wednesday, 3 October, 2012
11:15 – 12:45

The use of evaluation results is at the core of evaluation theory and practice. Debates in the literature have emphasized the importance of both the evaluator's role and the evaluation process itself in fostering use. Even if evaluation models are based on contrasting epistemological and methodological foundations, fostering use is generally thought to depend on the evaluator's ability to strategically harness the evaluation process and exploit his or her personal communication qualities. Our presentation gives a completely new reading of the long-standing debate on evaluation use, rebalancing the respective roles of context, theories and evaluator. Methods: A recent systematic review on knowledge exchange and information use (Contandriopoulos et al. 2010) identified context as a determinant of use and proposed a two-dimensional framework to characterize context. We began by positioning selected evaluation models in the two-dimensional framework according to their core components. We then discussed the meaning of the alignments between contextual characteristics and the theories' positions in the framework. Results: First, we observe various zones of use, which we call the paradise zone, the lobbying zone and the knowledge-driven swamp. Second, we observe a fit between contextual characteristics and evaluation models' core components (Shadish et al. 1991). Each model, according to its defining principle, occupies a different use zone. Because use varies as a function of context characteristics, different models, if applied according to their core components, will naturally lead to different evaluation use. Our analysis shows it would be a mistake to think that results use depends primarily on the model used or the evaluator's qualities; rather, it is largely influenced by the evaluation context. Furthermore, our analysis suggests that some models are more appropriate than others in some contexts to foster use. This re-interpretation of use has important consequences for evaluation practice. To maximize impact on use, the evaluator should be able to choose the model best suited to the characteristics of the context, which is not always possible, either because the evaluator has not mastered all the models or because the evaluator has limited freedom in selecting the evaluation model. Evaluators must recognize the determinant influence of context on use and be willing to practice evaluation in contexts where instrumental use is less probable. Use should not always be given priority if it means setting aside some approaches or evaluation contexts and thereby missing important evaluation results. Keywords: Evaluation use; Evaluation theories; Role of the evaluator;

O 035

Improving performance through public reporting of performance measures: Why and how?
D. Contandriopoulos 1, F. Champagne 2, J. L. Denis 3
1 University of Montreal, Faculty of Nursing, Montreal, Canada
2 University of Montreal, Health Administration, Montreal, Canada
3 ENAP, Health Administration, Montreal, Canada

In recent decades there has been a growing interest in the design and implementation of systems for public reporting of performance measures in the healthcare sector. Those systems are founded on two complementary logics. The first is anchored in the democratic ideal of citizens' involvement in public affairs. According to this view, public sector organizations should be open to public scrutiny of their activity, and making performance measures public contributes to this ideal. The second is much more instrumental in nature, and is predicated on the hypothesis that public reporting of performance measures can be used as a lever to promote quality improvement interventions and ultimately increase performance. In their simplest form, such instrumental interventions follow the market-based logic of competition based on consumers' awareness of performance: consumers use publicly released information to modify their behaviour and vote with their feet, thereby penalizing poor performers and rewarding highly performing service delivery structures. However, evidence from large-scale efforts to use public reporting of performance measures as an instrumental performance improvement tool suggests that the causal mechanisms involved are much more complex. First, the necessary conditions for such a market-like logic are stringent and generally not fulfilled in the health care sector. Second, even if market-like use of publicly released performance data is much less promising in practice than in theory, other causal pathways between the public release of performance measures and performance improvement exist and are worth exploring. This is also one of the core conclusions of a recent RAND Europe report on the subject: "Growing evidence suggests that other user groups, such as managers and providers, indeed use comparative information to improve care where public reporting occurred. It is important to note that information systems can encourage changes in provider behaviour even if the public makes limited use of them. This supports the notion of an association between public reporting and quality improvement, which operates largely through provider behaviour change. More systematic research is needed, however, to understand the underlying mechanisms." (Cacace et al. 2011) This presentation first offers a typology of five different plausible causal pathways linking public reporting of performance measures and performance improvement. This typology rests on a variety of conceptual models and a review of available empirical evidence. We then use this typology to discuss the core elements that need to be taken into account in the design of efforts to use public reporting of performance measures as a performance improvement tool. Finally, we discuss those findings in the larger context of efforts to increase the use of evaluation results. Keywords: Public Reporting of Performance Measures; Typology; Healthcare; Performance improvement;

O 036

Reflections on symbolic uses of evaluation in the aid sector


J. McNulty 1
1

DFID UK Department for International Development, Evaluation, Glasgow, United Kingdom


James McNulty, Evaluation Department, DFID. I have worked at DFID, the UK's government aid agency, since 2002. At the time of writing, my role is providing evaluation advice to country office and policy colleagues. Rationale: There is a broad consensus across the aid community that more evaluation is needed to help ensure that aid money is being spent wisely, and on what works. In the UK, this consensus has hardened in recent years, partly in response to a marked increase in public and political scrutiny of increases in aid spending.

Significant progress has been made in recent years to improve the quality of evaluation in the aid sector. More recently, that interest in quality has increasingly included an interest in improving the use of evaluations to improve policies and programs.

However, some authors point out that the use of evaluations as a major input to program and policy decision-making is the exception rather than the rule in the sector. This lack of instrumental use reflects an uncomfortable gap which has emerged between evaluation practice and rhetoric. Some authors even characterise this gap as paradoxical. There is a relative paucity of empirical research on evaluation use in this sector, and the study of symbolic uses of evaluation in particular has been neglected. However, the wider literature on the organisational and cultural determinants of evaluation use suggests that prevailing conversations about evaluation use in the aid sector may be underpinned by an excessively rationalistic understanding of stakeholder rationality. This oral presentation explores themes from this work in a discussion of the idea that conversations and strategies to improve evaluation use must consider, and perhaps internalise, evidence that the non-instrumental use of evaluation is primarily a cultural and organisational problem rather than a technical one. It explores the idea that critical reflection on the symbolic uses of evaluation as a boundedly rational or strategic behaviour may, for this sector and possibly others, be a prerequisite for grounding strategies to improve evaluation use in the reality of how people actually behave. It is hoped that the discussion following the presentation will focus on sharing lessons on improving evaluation use and developing evaluation cultures. Keywords: Evaluation; Use; Symbolic; Instrumental; Bias; Rationality; Resistance; Psychology; Culture; Organisation; Politics

O 037

Between a rock and a hard place: The client-contractor relationship in Public Policy Research
U. Khan 1, M. Gutheil 1
1

Matrix Insight, London, United Kingdom

Government commissions significant levels of external support at every stage of the policy process. From policy reviews to impact assessments and evaluations, the means by which such work is commissioned and managed is subject to largely standardised rules and processes. Nowhere is this more evident than in the negotiated space of the client-contractor relationship. Starting with Edgar Schein's (1969) publication Process Consultation: Its Role in Organisation Development, the literature in the field of organisational development has promoted the process-consultation model for interaction with public sector clients as more effective than the doctor-patient model. The process-consultation model makes the following main assumptions: the client and consultant jointly diagnose the problem, so that the client learns to see the problem for him/herself, but the client has the major responsibility for developing his/her own solution and action plan; the client has more knowledge and insight about what will work in the organisation than does the consultant; and the consultant's role is to train the client in using diagnostic and problem-solving techniques. By contrast, the traditionally more common doctor-patient model assumes that: the consultant is hired to identify the problem, diagnose it and recommend a solution; the consultant has more expertise regarding the specific problem than does the client, which means that the client will rely almost exclusively on the consultant; and the consultant is not expected by management to train the client in diagnostic and problem-solving skills. This paper applies these concepts to case studies of evaluation and impact assessment assignments undertaken for the European Commission and European Parliament. Utilising an evidence review and a limited number of stakeholder interviews, the paper will focus in particular on the relevance of the two approaches in light of situational factors, including the broader policy context as well as the intervention that is being considered, and their role in developing productive client relationships. The paper will consider how rule-bound procurement processes impact on client-contractor interaction. Tender documents can be seen as a static record of what a consultancy assignment is meant to achieve, with the Terms of Reference (ToR) playing a pivotal role in the procurement process. The paper will seek to understand the different purposes of the ToR and its wider impact on client-contractor engagement. In particular, it will critically assess the ToR's impact on the ability to operate within a more dynamic policy environment, which may in turn call for a more dynamic interface and interaction with the client. Hence, an in-depth understanding of clients' motivations and objectives, as well as the broader context within which the client operates, is fundamental to the success of such consultancy assignments. It makes it possible to decide which of the above approaches is more suitable, and helps develop a robust research design. The paper will conclude by making a number of suggestions for further research. Keywords: Consulting approach; Client relationship; Research management; Process consultation model;


S5-02 Strand 5

Paper session

Auditing and evaluation


S5-02
O 038

Policy evaluation in the Flemish government administration: institutionalisation and practice from a risk management perspective
B. De Peuter 1, M. Brans 1
1

KU Leuven Public Management Institute, Leuven, Belgium

Wednesday, 3 October, 2012

11:15 – 12:45

This paper explores the institutionalisation and practice of evaluation as one of the core tasks of policy work from a risk management perspective, with empirical evidence from a government-wide research project on the regional administration in Flanders (Belgium) across 13 policy domains. The practice of evaluation can be judged by different sets of criteria relating to professional policy making. Besides this perspective, an alternative evaluative approach is provided by an audit perspective using a risk framework as the reference point. By defining goals and quality criteria with regard to the process and institutional context of policy evaluation, this approach examines which risks may prevent the process-related goals from being obtained and the agreed quality criteria from being met. On the other hand, the measures taken by the responsible actors to avoid or temper the risks in day-to-day practice are judged on their efficiency and effectiveness. In this paper we report, firstly, what comprises the practice of policy evaluation within the Flemish administration as perceived by policy analysts themselves. Secondly, the main related risks are identified, comparing practical experiences to what is put forward in the literature on meta-evaluation. Special attention will be given to the context of interaction with and networks of stakeholders in which policy information is built up and government administrations undertake (ex ante or ex post) evaluation. This paper aims to advance insights by clustering the identified risks, but also by identifying the strengths and limits of this audit approach to improve the quality of policy evaluation. Keywords: Risk management; Evaluation practice; Institutionalisation; Policy analytical work;

O 039

Enhancing evaluation impact: managing expectations, the case of the Dutch Court of Audit
M. Kort 1, F. B. Van der Meer 1, M. van Twist 1, M. de Wal 2
1 Erasmus University Rotterdam, Public Administration, Rotterdam, Netherlands
2 Erasmus University Rotterdam, Rotterdam School of Management, Rotterdam, Netherlands

In many countries we witness organizations that act as independent auditors or evaluators of government policies. The debate on the effectiveness and impact of this audit society is increasing. Their impact can (and must) be improved. In this paper we focus on the question of what evaluators actually do to enhance the impact of (the products of) these institutions and how these actions enhance or hinder actual impact. What are the conditions and mechanisms that play a role, both in evaluator behaviour and in the reception and impact of their reports? We participated in a number of evaluation projects of the Dutch Court of Audit and studied the formation process of the product, and the interaction with the evaluated and other actors. Research into the impact of evaluation studies has identified various conditions that appear to influence impact. Many of these conditions are in fact aimed at the input or the output of the evaluation itself, for example the source and credibility of evaluation data, the method of communicating the results and the timing of the report (Rist, 1994). Other conditions can be seen as external conditions, like the political climate, the economic situation and whether there is an institutionalized evaluation practice (Leeuw and Rozendal, 1994). The conditions mentioned do not in themselves explain why evaluation results are used in specific instances and not in others. And impact can be measured in various ways, so there is no unambiguous definition of impact (Bekkers et al., 2004). This type of impact research focuses on the relation between the product (or end result) and impact, and implies some distance from the researchers towards the research object. In our research we took a different perspective. We used observation and interaction research as methods and took the processes of construction of meaning and behaviour within (and between) organizations as a starting point for assessing the impact of evaluation. From this perspective the intentions, interventions and interpretations of the involved parties are central. Impact can be influenced by the way the expectations of these parties are managed. Our research method consisted of observing three research teams of the Dutch Court of Audit during six months on specific occasions in their research projects (for example team meetings, seminars, consultations with members of the Board of the Court of Audit and meetings with representatives of the organization or policy under research). We also participated through reflection on their assumptions and analyses. By doing so we learned to understand their professional dilemmas and complicated deliberations, while at the same time we were able to keep enough distance to identify relevant interaction patterns with respect to impact. The paper explores the main findings and reflects on them. The Dutch Court of Audit is an independent institution with constitutional auditing tasks and authority. See the related paper abstract "Interaction research for enhancing evaluation impact" by Van der Meer, Kort and Van Twist. Keywords: Impact; Interaction research; Managing expectations;


O 040

The impact of institutional evaluation: the case of the Norwegian Office of the Auditor General
K. Reichborn-Kjennerud 1
1

The University of Bergen, Department of Administration and Organization Theory, Bergen, Norway


Over the last 20–30 years, with the modernization of the public sector through New Public Management and the subsequent network society, evaluation has become a more prominent feature. State audit institutions are central in this system and conduct evaluations in an institutional context. These evaluations are called performance audit. In this context of institutional control it is important to ask whether performance audit helps to improve the public sector or whether it just represents rituals of verification leading to little else than reassurance. To find out how performance audit has impact through the media and the political-institutional system, I established a database mapping and linking information from the performance audit reports published and followed up by the Norwegian Office of the Auditor General (NOAG) during 2000–2011. I also mapped the media coverage in newspapers and the reactions from politicians in Parliament. I then selected five performance audit reports from the database to look closer at the political-institutional system and tried to untangle how important political and media attention are to the impact of performance audit. Results show that the political backdrop is important. The impact of performance audit is stronger when the opposition holds a majority in Parliament. Many of the questions raised in performance audit have political and value-laden aspects. In audit these are reduced to a question of control, but in the processes and debates, both in the media and in the committee, the value questions come to the forefront of attention. The committee can both reinforce and diminish the critique of the NOAG. Both the NOAG and the ministries look to the comments in the committee decisions to decide on measures to take. Keywords: Evaluation; Performance audit; Control; Democratic accountability;



S3-19 Strand 3

Panel

Managing multiple perspectives in judging value in a networked evaluation world


O 041

Improving valuation in the public interest: Managing multiple perspectives on judging value in a networked evaluation world
Wednesday, 3 October, 2012
11:15 – 12:45
G. Julnes 1, M. de Alteriis 2, P. Dahler-Larsen 3, T. Schwandt 4
1 University of Baltimore, College of Public Affairs, Baltimore, USA
2 U.S. Government Accountability Office, Washington DC, USA
3 University of Southern Denmark, Dept of Political Science and Public Management, Odense, Denmark
4 University of Illinois at Urbana-Champaign, Urbana-Champaign, USA
Most acknowledge that valuing is a defining aspect of evaluation (e.g., concluding that one program is better than another or that certain changes would improve a program). However, the current lack of consensus on good methods of valuing is increasingly problematic as pressures from both inside and outside the field are encouraging evaluators to be more explicit about valuing (Greene, 2011). This panel of four presenters will promote a dialogue on valuing that recognizes the strengths of different approaches in different contexts and supports the goal of developing frameworks appropriate for an increasingly networked world with diverse communities of evaluators (Dahler-Larsen, 2012). The first presentation summarizes how the U.S. Government Accountability Office (GAO), an independent agency of the U.S. Congress whose mission is to oversee the U.S. federal government, selects criteria by which to assess evaluations conducted by U.S. federal government agencies. Common sources of criteria for assessing agency evaluations include: legislative requirements, agency policies on evaluation, professional standards such as the American Evaluation Association's roadmap, accepted principles of social science research, and key stakeholder views. The research question for the second presentation is: how does it happen that a particular set of values becomes established or institutionalized as a taken-for-granted framework in a particular evaluative situation? For the case studied, PISA, one can, in the spirit of social constructivism, consider the mechanisms by which PISA has established a set of criteria that, even if debated, become taken for granted not only in international comparisons but in many debates about schools and education in many contexts on many levels (Brakespear), perhaps with the effect of suggesting that different countries have the same values and goals in education (Meyer), even if teachers in different countries in fact hold very different ideals of education (data will be provided from Ozga et al., Fabricating Quality). The third presentation, building upon the author's longstanding argument that evaluation is a social practice and not a set of techniques, explains that the current interest in determining the value of public policies is principally about methods rather than a careful analysis of the discourse of values that shapes the practice(s) of evaluation in society. The argument is that current dominant approaches to valuing in evaluation largely ignore the fact that evaluation practice is implicated in (it both constitutes and is constituted by) a wider, global discourse of power and politics that shapes the production of what constitutes valuable evaluation knowledge of the benefits and impact of social and educational policies. The field of evaluation must attend more carefully to how it is entangled in this discourse. The fourth presentation reviews points in the prior presentations, arguing that our multiple approaches to valuing in evaluation represent pragmatic tools with varying strengths and limitations according to context. As such, promoting effective valuation requires frameworks that can organize our approaches to valuing in ways that highlight the primary contextual factors influencing best practices. The framework offered organizes underlying paradigms for valuing and offers implications for applying these paradigms in specific contexts. Keywords: Valuing; Public interest; Valuation;


S2-03 Strand 2

Paper session

Comparing and combining evaluative approaches


S2-03
O 042

A marriage of convenience for comprehensive evaluation: Program and Implementation theory-based approach
R. Crespo 1, N. Codern 1, A. Cardona 1
1

AreaQ, Evaluation and Qualitative Research, Barcelona, Spain

Wednesday, 3 October, 2012

11:15 – 12:45

This presentation is based on several comprehensive evaluations carried out by AreaQ within different governmental interventions on public health in Spain. It is framed within the theory-based stakeholder evaluation approach (Hansen & Vedung, 2011) which, in the general context of theory-based evaluation, assumes a constructivist (bottom-up) perspective on building up the theory of change of a given intervention. Specifically, the abstract relates to the current discussions about the difficulties and potential of representing the logical framework for complex and complicated interventions. A lot has been written regarding this issue (Rogers, 2000), and we would like to share and discuss a participative strategy for visual representation that, on the basis of distinguishing Program and Implementation theory, allowed us to: carry out participative work with different stakeholders and build up with them a space of consensus on how the programme is expected to bring about the desired results, in other words, to clarify and bring consensus to what stakeholders consider to be the Program theory of a given intervention; and work collaboratively with stakeholders on the multiple (and sometimes contradictory) interpretations of the processes and circumstances that must be considered in order to put a given intervention into practice. Hence, this visual representation strategy allowed us to merge these multiple interpretations into one single graphical representation, while keeping distinct some premises that cohabit in tension. In terms of contribution, using this visual representation strategy has strengthened both the comprehensive and the learning dimensions of our evaluation practice. On the one hand, dealing with both consensus and dissent has made it possible to integrate a common results-oriented evaluative view, as well as to incorporate a singular process-oriented evaluative view based on the different beliefs and values that deeply affect the way professionals do their job (and programmes achieve their objectives). In doing so, the strategy gives analytical standing to nuances and insights that would otherwise be much more difficult to include in the evaluation model. On the other hand, merging Program and Implementation theory contributes to distinguishing and more easily integrating issues related to program context, processes, outputs and desired outcomes in what has been called a comprehensive approach to evaluation. The presentation will provide examples and graphical information related to our learning and experience in developing several comprehensive evaluations in the field of public health. Keywords: Theory based evaluation; Comprehensive evaluation; Health;

O 043

What works and for whom: combining realist evaluation, effectiveness research and epidemiology traditions in human services
M. Kazi 1
1

University at Buffalo (The SUNY), School of Social Work, Buffalo New York, USA

Examples from the USA, Finland, England, Scotland and Wales will be used to show how realist evaluation strategies can be applied in the evaluation of 100% natural samples in schools, health, youth justice and other human service agencies. These agencies routinely collect data that is typically not used for evaluation purposes. The 100% evaluation strategy utilizes a new approach to evidence-based practice based on the realist evaluation paradigm, with the central aim of investigating what interventions work and in what circumstances (Kazi, 2003). This approach essentially involves the systematic analysis of data on 1) the client circumstances (e.g. demographic characteristics, contexts and mechanisms); 2) the dosage, duration and frequency of each intervention in relation to each client (the generative mechanisms); and 3) the repeated use of reliable outcome measures with each client. This is a mixed methods approach, combining the traditions of epidemiology and effectiveness research in human services (Videka, 2003) to investigate demi-regularities (Lawson, 1998, in Archer et al., Critical Realism). As the research designs unfold naturally (e.g. a quasi-experimental design comparing those receiving and not receiving an intervention), data analysis methods are applied to investigate the patterns between the client-specific factors, the intervention variables and the outcomes. For example, the binary logistic regression method identifies patterns in the data where multiple factors are influencing the outcome, selects the main factor or factors responsible for the outcome, and predicts the odds of achieving a given outcome in particular circumstances (Jaccard & Dodge, 2004); an illustrative sketch of such an analysis follows this abstract. This analysis can be repeated at regular intervals, not just at the end of the year, and helps agencies to better target their interventions and to develop new strategies where the interventions are less successful. In this way, the evaluator, the government and organizations can work together to evaluate the impact of interventions on the desired outcomes. The paper will use actual data dumps from the agencies' management information systems, and show how evaluators can work in partnership with them to regularly use this data for analysis. For example, with school districts the outcomes are school attainment, attendance and discipline; the demographics include race and ethnicity, special education status, lunch status and disabilities. The

interventions include academic services, and school-based services from social services, mental health, youth justice, tutoring and athletics. All of the agencies working with youth can use the school outcomes to investigate the effectiveness of their own services as well as utilize their own outcome measures. The paper will show how to apply realist evaluation and use the data already collected by agencies, undertake live analysis of this data in a suitably de-identified form, and help to investigate where an intervention is more or less likely to be effective in meeting the needs of youth and families. Evaluation itself is valued collectively when applied as part of daily practice, in a partnership between evaluators and human service agencies, to utilize findings and inform practice on demand.


Keywords: Realist evaluation; Contexts; Mechanisms; Epidemiology; Effectiveness;
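Illustrative note (not part of the original abstract): the following is a minimal sketch, in Python, of the kind of binary logistic regression analysis described above, run on a hypothetical, de-identified agency extract. The file name, column names and outcome coding (attendance_ok, dosage_sessions, etc.) are assumptions made for the example, not the authors' actual data or variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical, de-identified extract from an agency management information system
df = pd.read_csv("deidentified_agency_extract.csv")

# Outcome (assumed coding): 1 = attendance target met, 0 = not met.
# Predictors mix intervention variables (dosage, duration) with client circumstances.
model = smf.logit(
    "attendance_ok ~ dosage_sessions + duration_weeks"
    " + C(ethnicity) + C(special_education) + C(free_lunch)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals: the odds of achieving the outcome
# in particular circumstances, which can be re-estimated at regular intervals.
odds = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds.columns = ["odds_ratio", "ci_low", "ci_high"]
print(odds)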

O 044

Interessement and Enrolment: contributions to the institutionalization of monitoring and evaluation as a reflexive managerial practice.
E. Moreira dos Santos 1, M. Vassimon 2, E. Andrade 3, A. Loureiro 2, A. Leal 4, A. Duque 4, T. Coutinho 4, C. L. Cunha 4, M.M. Cruz 4, V. Castro 5
1 Oswaldo Cruz Foundation (FIOCRUZ), LASER/ENSP (Regional Endemic Situations Evaluation Laboratory/National School of Public Health), Rio de Janeiro, Brazil
2 Fundação Roberto Marinho, Canal Futura, Rio de Janeiro, Brazil
3 UFRJ, IESC, Rio de Janeiro, Brazil
4 Oswaldo Cruz Foundation (FIOCRUZ), Regional Endemic Situations Evaluation Laboratory/National School of Public Health, Rio de Janeiro, Brazil
5 Consultant, Rio de Janeiro, Brazil

Introduction: Implemented since 2009 in a Northeast Brazilian state, Ação Saúde aims to mobilize communities and support integrated, participative monitoring of projects for the improvement of maternal and child health through the establishment of a network of local groups called health promotion cells (HPC). The project, a social responsibility commitment of Fundação Vale, is implemented by Fundação Roberto Marinho through Canal Futura. LASER/ENSP (Regional Endemic Situations Evaluation Laboratory/National School of Public Health) provides both conceptual and technical support. Methods: To develop the HPC model (the intervention), a problematization methodology based on Paulo Freire's contribution was combined with Bruno Latour's conception of the socio-technical network, with emphasis on translation theory. The program theory emphasizes the importance of building local networks to facilitate and regulate governmental initiatives, especially considering unequal social arrangements and plural local representation. To implement the intervention, specific educational materials and guidelines were developed for capacity building and for in-loco and distance supportive supervision. The construction of the monitoring system was a shared activity, with participation of the cells' members in all required steps. Results: Evidence from different contexts shows that power relations, communication infrastructure and connectivity were major determinants of strengthening a weak shared institutional culture of M&E as a tool for reflexive management and action. On the one hand, major difficulties seem to be related to conflicts associated with disputes over the control of the local cells' internal functioning; on the other hand, to the overlap between these conflicts and the external context. Experiences of this nature are useful to promote integrated public health policies accounting for several actors' interests (translation), strengthening social betterment premises. Lessons Learned: Problematization challenged hegemonic protagonists such as the health professional participants; that is, it allowed for the emergence of community leadership. Translation was a facilitator, providing convergence among the different political agendas, specifically those related to the interests of groups with less voice in local settings. References: Latour, Bruno. A esperança de Pandora: ensaio sobre a realidade dos estudos científicos. Bauru: EdUSP, 2001. Freire, Paulo. Pedagogia do oprimido. Rio de Janeiro: Paz e Terra, 1984. Keywords: Institutionalization; Monitoring; Evaluation; Reflexive; Managerial practice;


S1-13 Strand 1

Paper session

Network effects on evaluation and organization I


S1-13
O 045

Understanding the evaluation field as a complex network of professionals, relations and influences
M. Rillo Otero 1, M. Barboza 1, A. Bara Bresolin 2
1 Instituto Fonte, São Paulo, Brazil
2 Fundação Itaú Social, São Paulo, Brazil

Wednesday, 3 October, 2012
11:15 – 12:45

Martina Rillo Otero, MSc in Experimental Psychology from the Pontifícia Universidade Católica of São Paulo, Brazil, and Madelene Barboza, BSc in International Relations from the London School of Economics, have since 2007 been associated consultants at the Fonte Institute for Social Development, specialized in process consultancy in evaluation, planning and organizational development in Brazil. They coordinate the Institute's area of evaluation, promoting research and capacity-building. Producing knowledge about the evaluation field is a challenge, given its multidisciplinary and multisectorial characteristics. The more traditional means of analyzing the relevance of individual actors, through their academic and scientific works, has proven insufficient in producing an understanding of the dynamics and articulations between the social practitioners who work in the field of evaluation. Exploring the methodology of Social Network Analysis, this study gained new understandings of the functioning and dynamics of the evaluation field, the knowledge exchange between professionals and the identification of actors who influence others. The methodology, based on the mapping of individuals who work as evaluators and the relations between them, produced results that, among other things, reveal that relations and influences are established through interactions in the course of practical work rather than through academic programs or specific courses, reaffirming the multidisciplinary character of the evaluation field. Studying the field of Brazilian evaluation practitioners using Social Network Analysis, mapping out the size, shape and density of the network and identifying the nature of relations established between the components and groupings, has provided a new and multidimensional understanding of the dynamics and influences within complex groups. It allowed for the identification of key actors according to levels and kinds of centrality in the network, showing their specific roles as global references, local central points or gatekeepers who connect components and groups. The innovative analysis of the mapping resulted in the identification of a diversity of multiple-level key actors, representing the complexity of the network and as such allowing for new and relevant input into the following qualitative in-depth study. The study was carried out during 2011–2012 in partnership between the Fonte Institute for Social Development and the Itaú Social Foundation. The overall objective of the study was to deepen the understanding of predominant evaluation practices in Brazil, their qualities, approaches and principal influences. The chosen strategy was to study the existing practices of evaluators, what they do and how they do it, through initial interviews with 133 evaluators and the identification of a total of 279 individuals who were mapped out in the network of evaluators in Brazil, followed by 15 in-depth interviews with identified key informants. Understanding the specific characteristics of how evaluators interrelate as a network is of great relevance in the multidisciplinary field of evaluation, composed of individuals from diverse professional backgrounds. Specific knowledge of the dynamics and influences present in this network will serve as important input in the development of strategies aimed at strengthening the field of evaluation, promoting capacity-building, interchange and learning about evaluation practice. Keywords: Social Network Analysis; Network of evaluators; Evaluation practices;
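Illustrative note (not part of the original abstract): a minimal sketch, in Python with networkx, of how key actors can be identified from a mapped network of evaluators using centrality measures, in the spirit of the analysis described above. The edge list is invented for the example; the study's own data are not reproduced.

import networkx as nx

# Hypothetical directed relations: "A names B as an influence on their practice"
edges = [
    ("eval_01", "eval_07"), ("eval_02", "eval_07"), ("eval_03", "eval_07"),
    ("eval_04", "eval_09"), ("eval_05", "eval_09"), ("eval_07", "eval_09"),
    ("eval_06", "eval_10"), ("eval_09", "eval_10"), ("eval_10", "eval_11"),
]
g = nx.DiGraph(edges)

# "Global references": actors named by many others (in-degree centrality)
references = nx.in_degree_centrality(g)
# "Gatekeepers": actors bridging otherwise separate groupings (betweenness centrality)
gatekeepers = nx.betweenness_centrality(g)

def top(scores, n=3):
    # Rank actors by centrality score, highest first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

print("global references:", top(references))
print("potential gatekeepers:", top(gatekeepers))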


O 046

Networks are causing effects but how?! Impact evaluation of networks


S. Elbe 1, J. Elbe 1, W. Meyer 2, M. Albrecht 2
1 2

SPRINT Consult, Darmstadt, Germany Saarland University, CEval, Saarbruecken, Germany


The discussion on social networks has gained momentum in recent years, driven particularly by networks for regional governance. But what are the effects caused by these networks? How can one establish and maintain networks in the most effective way? And, important for the area of public funding, is it possible to promote networks with relatively small financial support in order to raise additional investment and thereby increase the value added? While there is a great number of publications on the operation of networks (governance of networks), little can be found about the effects of networks (governance through networks). This paper discusses three bottlenecks in applying network analysis to networks for regional governance. Firstly, network analysis is in most cases performed statically, i.e. a survey is carried out only once. However, compared to organizations, networks are less formalized and therefore clearly more dynamic, and by using cross-sectional analysis the most essential aspect for evaluating networks cannot be assessed properly. Secondly, network analysis is too often one-dimensional, i.e. a single organizational member is questioned on behalf of the organization as a whole, denying any kind of differentiation in perspectives and behavior within organizations. Personal relations between actors are, at the least, mixed up with relations between organizations and with the relationship of a single organization to the network as a whole. Thirdly, assessment is mostly limited to communication processes within the network, while the impacts of the network as an entity are widely ignored. However, in most cases the aspired external effects are the one and only reason why people and organizations engage in the network. The implications of these aspects for practical network evaluations will be illustrated by two examples, showing the importance of a more reflective way of assessing the performance of networks within impact evaluations. Example 1 is a network analysis in 25 so-called bio-energy regions in Germany, supported by the state to improve bio-energy use, to contribute to climate protection and to increase regional value added. These networks were investigated at the beginning and at the end of the three-year funding phase. Example 2 is a cross-border network to support international labor market policy in the region of Saar-Lor-Lux Rhineland Palatinate Wallonia. During the last 40 years a network of networks has emerged from interregional working institutions, to meet the increasing needs for cooperation across borders in the heart of Europe. While some of the actors are members of several single networks, others only partly represent their own member organization. Hierarchical network analysis was used to cover the different aspects of this regional governance network structure and the interrelationships between networks, organizations and people. Based on the results presented, the strengths and weaknesses of the instruments used will be discussed and some suggestions for the further development of network analysis offered, especially on the performance of network analysis within the framework of impact evaluations. Keywords: Regional governance; Methods; Impact evaluation; Network analysis; Governance through networks;
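Illustrative note (not part of the original abstract): a minimal sketch of the two-wave (dynamic) view argued for above, comparing simple structural measures of a regional governance network at the start and end of a funding phase. The actors and ties are invented for the example.

import networkx as nx

wave_start = nx.Graph([("agency", "utility"), ("agency", "farmers")])
wave_end = nx.Graph([
    ("agency", "utility"), ("agency", "farmers"),
    ("utility", "farmers"), ("farmers", "bank"), ("agency", "municipality"),
])

for label, g in [("start of funding", wave_start), ("end of funding", wave_end)]:
    print(label, "- actors:", g.number_of_nodes(),
          "ties:", g.number_of_edges(),
          "density:", round(nx.density(g), 2))
# A single cross-sectional survey would miss the growth in ties and density
# that the comparison between the two waves makes visible.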



O 047

Network mapping as a support tool for the Research Council in funding decisions
J. Latikka 1
1

Academy of Finland, Helsinki, Finland


The Academy of Finland is the prime funding agency for basic research in Finland. Research funding is allocated on a competitive basis to the best researchers and research teams and to the most promising young researchers. The starting point of the Academy's funding decisions is to achieve as high a scientific level as possible. To guarantee this, the Academy uses outside scientific experts in the evaluation of the proposals submitted. Most of the proposals are evaluated in panels that assess the scientific quality of the proposals and give a rating. The ranking of the proposals is made later by the research councils, mainly on the basis of the scientific evaluation. In the September 2011 call, the Academy of Finland received altogether 2,294 proposals for the following funding instruments: Academy project, Academy research fellow, and Postdoctoral researcher. Of these, 846 were in the fields of natural sciences and engineering. The Research Council for Natural Sciences and Engineering is further divided into three drafting groups to make the decision process more manageable. This study concentrates on one of these groups, with altogether 305 proposals (133 Academy projects, 62 Academy research fellows, 110 Postdoctoral researchers). The aim is to test how useful network mapping is for the funding decision makers, i.e. the members of the Research Council for Natural Sciences and Engineering. It is quite common that larger research projects apply for funding through several funding instruments, sometimes even requesting salaries for the same personnel. For example, a person who applies for a Postdoctoral researcher position for himself or herself can also be included in an Academy project proposal as a researcher whose salary is being requested. As the funding decisions are made in two phases, first the Academy research fellows and Postdoctoral researchers, and then, after some time, the Academy projects, it is important to be able to see the big picture of the project portfolio. The links between proposals have been noted and a network map built. Unfortunately, the funding decisions will be made after the deadline for this call for abstracts (whether the mapping exercise was found useful will be known on 11 May). If this abstract is accepted, it will naturally be updated with the results of the study and its usefulness for the research council, with comments. Keywords: Funding decisions; Network mapping;
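Illustrative note (not part of the original abstract): a minimal sketch of one way such a network map of linked proposals could be built, connecting proposals that name the same person. Proposal identifiers, names and the input structure are assumptions made for the example.

import networkx as nx
from itertools import combinations

# Hypothetical mapping: proposal id -> people named in it (applicant and funded staff)
proposals = {
    "project_A": {"researcher_1", "researcher_2"},
    "postdoc_B": {"researcher_2"},   # same person also named in project_A
    "fellow_C": {"researcher_3"},
}

g = nx.Graph()
g.add_nodes_from(proposals)
for (p1, team1), (p2, team2) in combinations(proposals.items(), 2):
    shared = team1 & team2
    if shared:
        g.add_edge(p1, p2, shared=sorted(shared))

# Clusters of linked proposals give decision makers the "big picture"
# across the two decision phases.
for cluster in nx.connected_components(g):
    if len(cluster) > 1:
        print("linked proposals:", sorted(cluster))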


S4-15 Strand 4

Paper session

Evaluation of health systems and interventions I


O 049

Stakeholders' perceptions of the impact of recent reforms in the Czech Republic's health care system
P. Walsh 1
1

Eastern Michigan University, School of Health Sciences, Michigan/Ypsilanti, USA

Wednesday, 3 October, 2012

11:15 – 12:45

Since the Velvet Revolution, citizens of the Czech Republic had universal access to free health care until 2008, when the government imposed user fees (co-pays) as a cost-saving measure and to reduce utilization. While in the Czech Republic as a Fulbright Scholar, the author interviewed 18 stakeholders (hospital administrator, physicians, nurses, attorney, regional administrator, consumers, etc.) regarding their perceptions of the Czech health care system and the impact of the recent changes. This was a qualitative study. The goals of this study were to: obtain an understanding of the overall health care system in the Czech Republic; identify the perceptions of key individuals regarding changes in the Czech Republic's health care system; and compare and contrast the Czech Republic's health care system with the United States' health care system. This paper discusses the evaluation method (interviews using a structured questionnaire), reports the results of the qualitative study and the challenges in conducting the research, and compares and contrasts the Czech Republic's health care system with that of the United States. Keywords: Health policy; Qualitative study; Czech Republic;

O 050

Impact of place of delivery on neonatal mortality in rural Tanzania


J. Ajaari 1, 2, 3, H. Masanja 2, R. Weiner 3, 4, S. A. Abokyi 5, S. Owusu-Agyei 1
1 Kintampo Health Research Centre, Kintampo, Ghana
2 Ifakara Health Research and Development Centre, Ifakara, Tanzania
3 University of Witwatersrand, Johannesburg, South Africa
4 Soul City, Johannesburg, South Africa
5 John Snow Research and Training Institute, Accra, Ghana

Introduction: Studies on factors affecting neonatal mortality have rarely considered the impact of place of delivery on neonatal mortality. This study provides epidemiological information regarding the impact of place of delivery on neonatal deaths. Methods: We analyzed data from the Rufiji Health and Demographic Surveillance System (RHDSS) in Tanzania. A total of 5,124 live births and 166 neonatal deaths were recorded from January 2005 to December 2006. The place of delivery was categorized as either in a health facility or outside, and the neonatal mortality rate (NMR) was calculated as the number of neonatal deaths per 1,000 live births. Univariate and multivariate logistic regression models were used to assess the association between neonatal mortality and place of delivery and other maternal risk factors while adjusting for potential confounders. Results: Approximately 67 % (111) of neonatal deaths occurred during the first week of life. There were more neonatal deaths among deliveries outside health facilities (NMR = 43.4/1,000 live births) than among deliveries within health facilities (NMR = 27.0/1,000 live births). The overall NMR was 32.4/1,000 live births. Mothers who delivered outside a health facility had 1.85 times higher odds of neonatal death (adjusted OR = 1.85; 95% CI = 1.33, 2.58) than those who delivered in a health facility. Conclusions: Place of delivery is a significant predictor of neonatal mortality. Pregnant women need to be encouraged to deliver at health facilities, and this should be done by intensifying education on where to deliver. Infrastructure, such as emergency transport, to facilitate health facility deliveries also requires urgent attention.
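For readers who want to see how the headline figures fit together, the sketch below recomputes the overall NMR from the counts quoted above and shows the shape of the univariate logistic regression. It is illustrative only: the individual-level records are simulated from the quoted rates, and statsmodels is an assumed tool, not the one the authors report using.

```python
# Recompute the overall NMR and sketch a univariate logistic regression on simulated records.
import numpy as np
import statsmodels.api as sm

deaths, live_births = 166, 5124
print(f"overall NMR: {1000 * deaths / live_births:.1f} per 1,000 live births")  # ~32.4

rng = np.random.default_rng(1)
outside = rng.integers(0, 2, size=live_births)                  # 1 = delivered outside a facility
death = rng.binomial(1, np.where(outside == 1, 0.0434, 0.027))  # rates taken from the abstract

model = sm.Logit(death, sm.add_constant(outside)).fit(disp=0)
print(f"odds ratio for delivery outside a facility: {np.exp(model.params[1]):.2f}")
```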

O 051

A Longitudinal Evaluation of A Resettlement Programme for Adults with Profound Learning Difficulties from Hospital to Community Housing
E. Hogard 1
1

Northern Ontario School of Medicine, Thunder Bay, Canada

This paper describes a three-year longitudinal evaluation of a programme to resettle adults with profound learning difficulties from hospital care to relatively independent living in supported housing. The major objective of the evaluation, commissioned by the Local Authority, was to assess whether the quality of life for service users had improved as a result of being resettled. The approach adopted by the evaluators was their Trident method, whereby a programme evaluation focuses on the achievement of predicted outcomes, on the process whereby the programme was delivered, and on the views of stakeholders. In this case the main outcome measure was the quality of life of the residents; the process was that of the procurement approach adopted by the Authority; and stakeholder perspectives were gathered from parents/next of kin and carers. The views of the key stakeholders, the residents, given their profound learning disability, had to be inferred through the results of the quality of life (QoL) audit. A review of quality of life measures identified best practice internationally as a basis for developing an audit tool. This was developed to measure quality of life in seven domains:

Quality and Location of Housing
Care Planning and Governance
Physical Well Being
Social Interaction and Leisure Activities
Autonomy and Choice
Relationships
Psychological Wellbeing

The audit tool consisted mainly of statements with which auditors were able to agree or disagree on a five-point Likert scale. A marking scheme was devised reflecting the relative emphasis on the seven domains in a model of quality of life. The tool was piloted and training was provided for auditors and their supervisors. The tool was used to measure quality of life for residents first, retrospectively, at Orchard Hill and then at six-month, twelve-month and eighteen-month points in supported accommodation and a residential home in the community. Comparing the baseline results with the results at eighteen months showed a highly statistically significant improvement in quality of life overall and in each of the seven domains. While all improvements were statistically significant, the most marked improvements were in Care Planning and Governance, in Autonomy and Choice, and in Quality and Location of Housing. Physical Well Being was maintained from a relatively high baseline, as was Psychological Wellbeing. Social Interactions and Relationships, whilst showing a significant improvement, were nevertheless less improved than other domains. Recommendations included that there should be continued monitoring of the effectiveness, impact and acceptability of the relocation, including repeating the QoL audit; monitoring provision against known risk factors; developing more refined audit in key areas, particularly Social Interactions and Relationships; and continuing surveys of stakeholder satisfaction. Given the central importance of staff in supported living, it would be worth surveying staff views on their own development and proposing training. Finally, the project should be used as a basis for devising sustainable internal monitoring procedures. Funding has been secured from the Local Authority, in partnership with the providers, to continue the evaluation research for a further year addressing these recommendations. Keywords: Longitudinal Evaluation; Quality of Life; Profound Learning Disability; Audit tool;
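The marking scheme and the baseline-versus-eighteen-month comparison can be illustrated with a short sketch. The domain weights, scores and choice of a paired t-test below are assumptions for illustration; the abstract does not disclose the actual marking scheme or the statistical test used.

```python
# Weighted combination of domain scores plus a paired comparison of overall QoL scores.
from scipy import stats

DOMAIN_WEIGHTS = {  # hypothetical relative emphasis per domain
    "Quality and Location of Housing": 0.20,
    "Care Planning and Governance": 0.15,
    "Physical Well Being": 0.15,
    "Social Interaction and Leisure Activities": 0.15,
    "Autonomy and Choice": 0.15,
    "Relationships": 0.10,
    "Psychological Wellbeing": 0.10,
}

def weighted_qol(domain_scores):
    """Combine mean Likert scores (1-5) per domain into one weighted QoL score."""
    return sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())

example_resident = {d: 3.0 for d in DOMAIN_WEIGHTS}   # mean Likert score per domain (made up)
print(f"weighted QoL score: {weighted_qol(example_resident):.2f}")

# Overall scores per resident at baseline and at eighteen months (made-up values).
baseline = [2.8, 3.1, 2.5, 3.0, 2.7]
month_18 = [3.6, 3.9, 3.2, 3.8, 3.5]
t, p = stats.ttest_rel(month_18, baseline)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```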

O 052

An evaluation of a managed clinical network for personality disorder: breaking new ground or top dressing?
R. Ellis 1, E. Hogard 2
1 Buckinghamshire New University, SHEU, Carrickfergus, County Antrim, United Kingdom
2 Northern Ontario School of Medicine, SHEU, Thunder Bay, Canada

Rationale, aims and objectives: This paper describes an evaluation of an innovative managed clinical network in the UK. The purpose of the network was to establish a better-coordinated service for those with personality disorder (PD). The network was evaluated using the Trident approach, which focussed evaluation questions and data gathering on the extent to which the network met its stated and implied outcomes; the process whereby it was established and operated; and the views of the various stakeholders involved. Managed clinical networks are briefly reviewed in the context of the conference theme of networking. Evaluation methods: Methods to gather evaluation data included documentary analysis, the use of specially devised tools to assess partnership, staff development needs and record keeping, and interviews. Results: While the network had achieved its objectives to establish new operational structures and communication networks, and staff showed a high level of commitment, it was unclear whether the network had maintained or improved the clinical service. Record keeping for assessment and clinical intervention was at an early stage and there was a need for more systematic use of assessment instruments and data management systems. The creation of a new category of staff, the Network Coordinator, raised problems of delivery and staff development. Conclusions: On the basis of this evaluation, and at this stage of one network's development, it is concluded that the benefits of a managed clinical network remain theoretical rather than proven. The Trident evaluation method developed in the work of the Social and Health Evaluation Unit again proved functional for structuring evaluation questions and data gathering and intelligible to contractors. Keywords: Evaluation; Managed clinical network; Personality disorder; Trident method;


S2-32 Strand 2

Panel

Complexity and systems thinking for evaluators


O 053

Complexity and systems thinking for evaluators


K. Forss 1, R. Hummelbrunner 2, M. Marra 3, B. Perrin 4, M. Reynolds 5, E. Stern 6
1 Andante, Strängnäs, Sweden
2 ÖAR Regionalberatung, Graz, Austria
3 University of Salerno, Napoli, Italy
4 Independent Consultant, Paris, France
5 The Open University, London, United Kingdom
6 The University of Bristol, Bristol, United Kingdom

Wednesday, 3 October, 2012

13:30 – 15:00

Evaluation and systems are two fields that are rather large and diverse, and they have operated virtually independently from each other for a long time. In recent years, however, there is growing interest among evaluators in applying systems frameworks, thinking or models in their work, spurred also by the fact that evaluations and evaluands tend to be (or are seen as) increasingly complex. This session will look at the implications of applying complexity and systems thinking in evaluations from two perspectives, based on recent publications on the subject. Contributors to the book Evaluating the Complex (Forss, Perrin, Marra), all of them experienced evaluators, will first present some of their key messages. They will notably highlight the challenges faced by evaluators when dealing with complex issues, the possibilities and limitations of the evaluative methodological repertoire in coping with these challenges, and new insights from complexity theory for evaluators. Two other panelists (Hummelbrunner, Reynolds), who are more rooted in the systems field, will outline possibilities for applying systems thinking when faced with issues of complexity in evaluation, a topic that they have worked on extensively as (evaluation) practitioners and authors of recent books. This application can either be done at the level of principles (generic concepts) or involve the use of specific methods (e.g. modelling). After the initial statements there will be room for discussion among the panelists, also involving the audience, to explore the implications further. This exchange could bring added value for operationalizing the application of systems and complexity thinking in evaluations, including exploring the boundaries of their use and outlining some quality criteria in this direction. Panelists: Kim Forss works as an independent evaluation consultant based in Sweden and has co-edited the recently published book Evaluating the Complex. He has been President of the Swedish Evaluation Society and is a Board Member of the European Evaluation Society. Richard Hummelbrunner is a senior associate of ÖAR Regionalberatung, Graz, Austria, with more than 30 years of experience as a consultant and evaluator in the fields of regional and international development. During recent years he has been active in promoting the use of systems thinking in evaluation as a practitioner, trainer and author. Mita Marra is Professor at the University of Salerno in Italy. She co-edited Evaluating the Complex and has undertaken consulting work for the World Bank and the United Nations on gender and institutional development programmes. Burt Perrin is an independent consultant based in France. He has assisted governments and other organisations internationally with quality assurance, evaluation management and planning, and related services in evaluation and strategic planning. Martin Reynolds is a consultant, researcher and lecturer in Systems Thinking at The Open University, UK, producing distance learning resources for postgraduate programmes on International Development, Environmental Decision Making, and Systems Thinking in Practice. He has published widely in these fields and provides professional development workshop facilitation for systems thinking. Elliot Stern is an evaluation practitioner and researcher based in the UK. He edits the journal Evaluation, is visiting Professor at Bristol University and Professor Emeritus at Lancaster University, and is a past President of the EES.
Keywords: Systems thinking; Complexity;


S1-06 Strand 1

Paper session

M&E systems and real time evaluation I


O 054

The Use of Feedback in M&E


J. Heirman 1, A. Ling 1, D. Y. Pinto 1
1

Institute of Development Studies, Brighton, United Kingdom

Wednesday, 3 October, 2012

13:30 – 15:00

Feedback can be understood as the systematic acquisition of information that permits actors to orient themselves in relation to their environment and guides subsequent actions. Feedback systems involve networks of actors who demand, supply, collect, analyse, and use information about changes which occur in the systems they inhabit. Information filtered through these interactions holds different meanings depending on the relationship of an individual to the information source and their perception of its intended use. Eliciting information from feedback systems enables actors to track outputs and identify which positive and negative outcomes are attributable to an individual actor. However, the specification of what feedback is relevant for different external or internal evaluation purposes can create blind-spots that inhibit learning around how to achieve desired change. It can also play into or influence power relationships between actors in both positive and negative ways. This paper looks at feedback in theory and practice to examine how and to what extent feedback mechanisms can be designed and used more effectively for supporting evaluative practices and facilitating learning. It also examines how feedback intersects with and complements a variety of M&E tools (e.g. contribution analysis; determining attribution; citizen scorecards; etc). The objective is to explore how feedback systems can help evaluators and the networks of actors they support to collect and analyse information that helps them improve their collective performance. Keywords: Feedback; M&E Tools; Networks;

O 055

Monitoring, Evaluation and Information System


G. Garau 1, L. Schirru 2
1 Regione Autonoma Sardegna, Sardinia Regional Evaluation Unit, Cagliari, Italy
2 University of Sassari, Social Science and Engineering Technology, Sassari, Italy

Starting from the results pointed out in a previous paper (Garau, Mandras and Schirru, 2011) and moving from the considerations underlined by Mazzeo (2012) on the need to develop monitoring systems and indicators that are both tied to the programme's logic and functional to the evaluation guidelines, the authors propose an analysis of a regional policy (Employability Plan) whose aim is to address the problem of youth unemployment by improving the employability of skilled youth. How can the effectiveness of this programme be assessed? What information is needed to do so? Are the procedures adopted in the policy implementation useful for achieving the stated goal? These are questions that arise when an evaluation process is planned prior to the policy implementation. Unfortunately, since the evaluation design is often thought of only after the policy implementation, the information created during the policy implementation (administrative sources) has to be integrated with ad hoc surveys (Trivellato, 2010) to make up for the shortage of information needed for any evaluation. In this paper we look at the problem as independent observers, without constraints imposed by the customer and purely for knowledge purposes, bearing in mind that the evaluation questions have to be defined a priori in order to measure them. The adopted evaluation perspective is to improve the programme, assuming that the effects of a policy can be conditioned by the implementation process. This hypothesis has to be taken into account in the efficiency evaluation of this policy. It was decided to use the Statistical Information System (SIS) approach, which allows us to identify, in addition to the actors and the system's relations, both the information gathered during the policy implementation and the information that is missing but necessary for its evaluation. The first step of a SIS-oriented evaluation design of the policy under concern is the requirements analysis. Thus, the starting point is the analysis of the documents related to the policy: the laws and regulations that allow us to identify the actors in the system, to map their actions and to build indicators to monitor and evaluate processes. A first analysis of the documentation revealed a lack of crucial information for making a correct assessment of the process. For example, no track was kept either of the role of associations in identifying firms' needs for skilled human resources or of the role of the job service centres (CSL) in matching these needs with the skills reported by unemployed youth. Equally, one wonders whether the meetings between firms and beneficiaries really occurred in the ad hoc window created by the Regional Labour Agency or whether they were the result of processes that are not traceable and unrelated to the policy. In other words, the point is the role of institutional communication both in the spread of the instrument and in the effectiveness of the policy. In this paper these and other issues are discussed in order to outline an evaluation profile of the policy which takes into account both the information maintained and the ratio between the cost of information and the benefits of its availability for evaluation purposes. Keywords: Monitoring; Statistical information system;

W W W. E U R O P E A N E VA L U AT I O N . O R G

ABSTRACT BOOK

42

TH E 10 T H EES BIENN IAL CON F EREN CE 3 5 OCTOBER, 2012, HEL SINK I, FIN LAN D

O 056

Rapid response evaluation: lessons from real time practice


J. Owen 1
1

The University of Melbourne, Centre for Program Evaluation, Melbourne, Australia


Purpose: To outline key features of Rapid Response Evaluation (RRE) and to identify factors that affect the successful implementation of RRE in a range of contexts. Argument: As a means of improving the use of evaluative work, theorists have developed a range of working approaches. These require evaluators to work more closely with clients in real time as a means of bridging the gap between the production and use of research-based knowledge. This paper summarises the author's experience with RRE in several settings over the past decade and identifies tentative conclusions about the influence of (i) evaluator practice and (ii) stakeholder and contextual factors that affect the adoption of this approach. We conclude that evaluators should subscribe to the notion of sustained interactivity as the basis for Rapid Response practice. This involves continuous connection between evaluator and client over the life of an evaluation or series of evaluations. Further, the evaluator must have a range of conceptual and social skills in order to work interactively and to appreciate the need for different forms of evaluation at different phases of the intervention under review. We argue that some of these skills are not usually associated with traditional evaluation practice concerned with the determination of program worth. This may pose a dilemma for evaluation purists. However, there is a strong argument for evaluators to support policy and program planners from the genesis of new interventions to prevent them from becoming design or implementation failures. This is in accord with the mission of evaluators to be supporters of good public policy and to increase the impact of evaluation in civil society. Keywords: Rapid Response Evaluation; Real Time Evaluation; Interactive Strategies; Developmental Evaluation; Utilization;

Wednesday, 3 October, 2012

1 3 : 3 0 1 5 : 0 0


S2-39 Strand 2

Panel

Holding the state to account: using evaluation to challenge the theories, understandings and myths underpinning policies and programs
O 057

Wednesday, 3 October, 2012

13:30 – 15:00

Holding the state to account: using evaluation to challenge the theories, understandings and myths underpinning policies and programs
B. Sanz 1, R. Sudarshan 2, N. S. Sabharwal 3, Y. Atmavilas 4
1 UN Women, Evaluation Office, New York, USA
2 Institute of Social Studies Trust, Research, New Delhi, India
3 Institute for Dalit Studies, New Delhi, India
4 Administrative Staff College of India, Hyderabad, India

This session explores efforts to seek state accountability to uphold women's human and citizenship rights. Taking examples of impunity for sexual violence, denial of rights for Dalit women, and inadequate recognition of women farmers, the panel explores how evaluation can challenge mainstream rights and justice discourse and the way that discourse is articulated in interventions. The panel asks: who does the existing landscape of evaluation serve? In line with the conference theme, the panel explores the rise of social media and new ways of mobilizing groups around findings to hold the state accountable. Navsharan Singh explores innovations in feminist legal standards regarding prosecutions of sexual violence in international tribunals. Until the 1990s sexual violence in war was largely invisible. States did not recognise these crimes, there was no name beyond 'shame' or 'humiliation' for the violence that women endured during wars, and no framework to address it. Sexual violence has now been brought into mainstream international jurisprudence and is being reconceptualised as a form of torture. Drawing on examples from South Asia, Singh describes how this work and effort is, and should be, evaluated as a mechanism for holding the state accountable. Nidhi Sadana Sabharwal highlights challenges facing Dalit women in India. Sabharwal argues that mainstream evaluation in India focuses on gender discrimination and issues of economic, educational and political empowerment, ignoring the complex realities of caste and untouchability-based discrimination, which result in the denial of rights of Dalit women. She presents evidence from studies and evaluations in India that highlight the denial of women's rights due to caste and untouchability-based discrimination. Sabharwal proposes a combination of strategies to uphold, and to evaluate, programs and policies intended to protect and enhance women's human and citizenship rights. Yamini Atmavilas argues that evaluation related to marginalized groups is preoccupied with influencing policy outcomes in favour of better allocations and entitlements. However, achieving policy influence is an uneven process, given that the policy space is complex and diverse. In such terrain, the role of evaluation in highlighting issues of excluded groups is invaluable. One such group is the woman farmer in India. While nearly 80% of all working women are in agriculture, women are seldom included in policy-makers' view of the farmer. Atmavilas reviews how agricultural programmes take women into account. She develops a matrix of approaches, methods and theories used to position the woman farmer, identifying exclusions and inclusions in the process. She suggests more inclusive approaches for evaluating such initiatives. Keywords: State accountability; Women's rights and citizenship; Impunity; Marginalized groups;


S1-04 Strand 1

Paper session

ICT systems for evaluation quality and use


O 058

Bridging the gap between policy formulation and policy implementation: an RD&IT evaluation experience
F. Mazzeo Rinaldi 1, A. Spano 2
1 University of Catania, Catania, Italy
2 University of Cagliari, Cagliari, Italy

Wednesday, 3 October, 2012

13:30 – 15:00

Research, development and innovation technology (RD&IT) has been, over the last few decades, a strategic issue in the industrial policy debate of Western countries. Considerable public resources have been, and continue to be, invested in policies aimed at increasing private companies' propensity to invest in RD&IT, in order to strengthen competitiveness and, broadly speaking, economic development. RD&IT is one of the major strategic areas in the EU cohesion policy for the 2007–2013 programming period, with a budget of over EUR 65 billion, and remains central in the debate on post-2013 cohesion policy, already under way. Due to the increase in both strategic importance and the amount of public financial resources invested, there has been a growing demand to evaluate the outcomes of RD&IT policies, usually focused on a few variables: human capital creation, number of patents pending, productivity, increase in sales, etc. Less attention has been paid to assessing the relationship between strategic RD&IT political choices and the policy outcomes. As a consequence, there has been little accountability for the long-term strategic impacts of these policies. In particular, there is growing attention to investigating the actual role of policy makers in the administrative decision-making process, in a context that has always been characterized by a multilevel governance system. Against this background, and using a qualitative methodology, the paper presents research undertaken in the Italian region of Sardinia to evaluate the effects of a regional policy, developed during 1994–2006, to foster private companies' propensity to invest in RD&IT. In particular, the paper reports a comparative analysis of the results of several interviews carried out with privileged witnesses, who have held important roles at both the political and implementation levels, and the findings from the evaluation of the main regional RD&IT programs. The paper provides interesting results as regards the way the political decisions were taken, the way in which they were translated into policy instruments, and how they were then interpreted and implemented by public agencies, thus providing important information for future political decisions regarding policies to support RD&IT. Keywords: Research; Development and innovation technology; Regional policy; Policy implementation;

O 059

Sophistication, rigor, and attribution in evaluation: Has advanced information technology anything to do with proof of attribution?
R. Santos 1
1

Workland M&E Institute Inc., Quezon City, Philippines

Demonstrating attribution is a great challenge for evaluators. Evaluators normally focus on adding rigor to research in a desire to increase the robustness of their work. The prevailing notion is that by using sophisticated research techniques, rigorous research methodology can be attained and therefore attribution will be addressed. But how rigorous really are these techniques? Does sophistication in the use of top-of-the-line technological tools mean rigor? Will this rigor lead to compelling proof of attribution? The paper examines the issue of attribution and the role of technology in development evaluation. It explores the methodological and theoretical merits of rigorous research techniques that use advanced information technologies. It links attribution with other aspects of evidence-based evaluation, such as validity, robustness and reliability, and analyzes the contribution of technology to these aspects. The presentation will draw on an analysis of various works of evaluation and an investigation of how the evaluation community addresses the attribution question. It will raise methodological and theoretical questions related to rigor, while deriving insights from a critical analysis of various concepts that are tied in with attribution. It will illustrate emerging developments in evaluation research and guide the audience through a walk-through of innovations in research methodologies and the application of adaptive systems based on advanced information technology. Further, it will probe the soundness of the theoretical bases on which these methodologies are premised. The discourse will challenge the conventional wisdom that tends to equate sophisticated research tools with the ultimate approach to making causal explanations. The research underlying the presentation primarily applied desk research, including reviews of evaluation research practices gleaned from official publications and documents, evaluation reports, websites and online publications, and interviews with evaluation practitioners. It is argued that there is an inclination towards sophistication that does not entirely contribute to the establishment of a strong causal linkage between an intervention and its impacts. Insights on evaluators' approaches to addressing attribution and the research methods used were inferred from the analysis made. The presentation will contribute to an understanding of how technology is influencing the way attribution and causal linkage issues are addressed by evaluators and how it shapes the development of the body of knowledge that builds upon the emerging practice of evaluation. It may identify areas where evaluation work can be improved. Keywords: Adaptive systems; Attribution; Research tools; Rigor; Sophistication;


O 048

Capitalizing on video technology to improve evaluation of complex processes


R. Renger 1, B. Gabriele 2
1 University of Arizona College of Public Health, Tucson, Arizona, USA
2 Das Land Rheinland-Pfalz, Ludwigshafen, Germany


Wednesday, 3 October, 2012

13:30 – 15:00

Program process complexity can vary from predefined, recipe-like tasks to a fluid, idiosyncratic process that cannot be defined a priori. The literature is abundant with process methods for task-level evaluation; however, there is a lacuna of methods for the latter. A school-based home economics curriculum was evaluated that contained both task-level (i.e., following a recipe) and complex-level (i.e., tailored student-teacher interactions) processes. The evaluation of task-level processes was relatively straightforward and was aided by the addition of written protocols, which were used to develop checklists and student feedback forms. Several options, and their associated limitations, for conducting the evaluation of a complex process will be discussed. The final solution was to employ a small, inconspicuous, teacher-mounted camera to videotape student-teacher interactions. The camera proved extremely useful in providing the teacher with information upon which to make changes to her process of engaging students. It was also learned that analysis of the complex teacher-student interactions was only possible with an explicit understanding of the underlying program theory. Several unanticipated benefits of using a teacher-mounted camera will also be discussed. Keywords: Process; Evaluation; Complex; Video;


S3-20 Strand 3

Panel

The impact of ethics: are codes of conduct in evaluation networks necessary?
O 061

The impact of ethics: Are codes of conduct in evaluation networks necessary?


Wednesday, 3 October, 2012
13:30 – 15:00
W. Meyer 1
1

Saarland University, CEval, Saarbrücken, Germany

One of the main characteristics of the networked society is its increasing complexity. While most evaluations are still commissioned by a single ministry, state organization, private association, foundation or company, the number of joint evaluations, network evaluations and other forms of evaluation with several contracting authorities is increasing. Moreover, the number of stakeholders in public programs is also growing, and so is their demand to participate both in conducting evaluations and in making use of their results. Further changes are, at least partly, stimulated by the European Union and international law. Tender procedures, for example, are regulated to ensure fair competition within the European Community and give evaluators new options on the international market, but this also leads to more competitors on national markets and a swelling bureaucracy for tenders. Some principals use their strengthened market position for new, excessive expectations during the application process, discriminating especially against small and medium-sized and primarily locally acting evaluation companies. These tendencies toward internationalization and complexity definitely increase the potential for conflicts, due to the unbalanced market as well as to the growing number and diversity of stakeholders. As a result, the existing standards, principles and ethical codes of European evaluation societies face new challenges. Are they (still) effective, and what do we know about these effects at all? Do we need additional ethical principles? Do we need codes of conduct at the European and/or the national level for regulating evaluation in networked societies? And finally: are evaluation societies, as professional associations, able to guarantee positive impacts of codes of conduct on the evaluation market in future, or who has to do so instead, and with which instruments? This panel discussion continues the exchange on ethical issues started at the last EES conference in Prague. While the focus there was on the independence of evaluators and evaluations, this year the impact of ethical codes of conduct in the framework of evaluation networks will be discussed. The invited experts from different countries and various organizational frameworks will share their knowledge on codes of conduct as well as on evaluation networks. Additionally, the public will be able to ask questions and bring their own experiences into the discussion. As a result, the question should be answered whether the European Evaluation Society should support the development of codes of conduct within national evaluation societies or, even more, should start to develop its own code of conduct and some mechanism to observe or control adherence to ethical principles. Presenters and discussants to be invited: Ilpo LAITINEN (City of Helsinki); Peter LOEWE (UNIDO, UNEG); Helen SIMONS (University of Southampton); Nicoletta STAME (University of Rome); Reinhard STOCKMANN (CEval, Saarland University); Wolfgang MEYER (CEval, Saarland University, moderation); Daniel TSYGANKOV (Russia); Maria BUSTELO (EES, Spain). Keywords: Ethics; Code of conduct; Impact of ethical codes and standards;


S2-01 Strand 2

Paper session

Approaches to evaluating research


O 062

Use of Quasi-Experimental Methods for Evaluation of an International Research Fellowship


J. Tsapogas 1, A. Martinez 2
1 National Science Foundation, Washington DC, USA
2 Abt Associates, Cambridge MA, USA

Wednesday, 3 October, 2012

13:30 – 15:00

The importance of international collaborations cannot be overstated. Nations throughout the world agree that, in order to maintain leadership in science and engineering research and development, it is essential that researchers engage with colleagues around the world. Promoting international engagement at all levels is crucial to fostering successful research partnerships and developing the next generation of S&E researchers. The U.S. National Science Foundation's (NSF) International Research Fellowship Program (IRFP) is unique in its emphasis on providing financial support to postdoctoral scientists for research experiences abroad lasting, in most instances, up to two years. The IRFP evaluation specifically examined the following four questions: Does the extent to which former IRFP fellows engage in international collaborations differ from that of unfunded applicants? Do IRFP fellows' post-award career activities and job characteristics differ from those of unfunded applicants? What are the perceived outcomes of program participation? Do the outcomes of program participation extend beyond the direct participants? The evaluation provides evidence that there are statistically significant and positive differences between fellows and unfunded applicants in the number of international postdoctoral fellowships held. IRFP fellows also had a larger number and percentage of publications with a foreign co-author compared to unfunded applicants. Most importantly, this international focus did not come at the expense of research productivity or professional advancement. Fellows and their unfunded peers were equally likely to hold multiple postdoctoral appointments, were equally productive researchers, were equally likely to hold a faculty rank of assistant, associate or full professor, and were equally likely to be tenured. Career outcomes of IRFP fellows and applicants overall compare well against PhD holders in science and engineering fields on employment, publications and international collaborations, suggesting that the IRFP program attracts a talented pool of applicants. The target population for the study included all individuals who applied to the IRFP program from its inception in 1992 through 2009, as well as non-U.S. research scientists who served as foreign hosts during this period. Data for the evaluation were drawn from multiple sources: NSF's administrative records on applicants, and surveys of fellows, unfunded applicants and host scientists. A secondary set of comparative analyses between IRFP applicants (and fellows) and a nationally representative sample of science and engineering doctorates from the U.S. Survey of Doctorate Recipients (SDR) was used to situate the outcomes of IRFP program participants and applicants within the national context. The evaluation employed quasi-experimental impact analyses to compare the outcomes of fellows to unfunded applicants, using pre-award characteristics of applicants to mitigate the potential threat of selection bias. To reduce the risks associated with selection bias, the study incorporated propensity score analysis (PSA) to construct groups of awardees and non-awardees that were statistically similar across a number of pre-existing characteristics. Keywords: Science; Fellowships; Post-doctoral; Evaluation; Research;
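The abstract mentions propensity score analysis but not the tools used, so the following is a minimal, hypothetical sketch of the general technique (logistic-regression propensity scores followed by 1:1 nearest-neighbour matching). The covariates, data and matching rule are all invented for illustration and are not the study's actual implementation.

```python
# Propensity score matching sketch: estimate scores, match, compare matched outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))            # pre-award characteristics (hypothetical)
treated = rng.integers(0, 2, size=n)   # 1 = fellow, 0 = unfunded applicant
outcome = X[:, 0] + 0.4 * treated + rng.normal(size=n)  # e.g. internationally co-authored papers

# 1. Estimate propensity scores from pre-award characteristics.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each fellow to the unfunded applicant with the closest score (with replacement).
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = [c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx]

# 3. Compare outcomes within the matched sample.
att = outcome[t_idx].mean() - outcome[matches].mean()
print(f"matched difference in outcome (ATT estimate): {att:.2f}")
```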

O 064

Evaluating large research infrastructure by asking six simple questions


F. Ohler 1
1

Technopolis Group Austria, Vienna, Austria

Large projects tend to be evaluated by complex evaluation systems. Investment in research infrastructures (RI) is a prominent case. The EU, within its Research Infrastructures programme, has earmarked 1.8 bn EUR. European Member States have earmarked substantial shares of the Structural Funds for RI, which exceeds the Commission's spending by an order of magnitude. The Czech Republic on its own has allocated 1.6 bn EUR via its Operational Programme R&D for Innovation. Most policy makers consider RI a pre-condition for research. The Czech Republic has turned this rationale upside down: it did not start with collecting claims for RI but with a request for a comprehensive research agenda. The related evaluation thus focused on the attractiveness of the proposed research agenda. The second question was whether the key staff are credible enough to implement the agenda. A third question related to target groups and their accessibility: who is interested in what you are doing? As RI are rather complex in terms of size, actors involved, financial implications and staff, management was considered critical. Due to its critical role, human resource policy was made a fifth major criterion. Finally, financial aspects were addressed, separating budgeting from funding: while budgeting addresses the use of money (buildings, equipment, salaries, etc.), funding concerns the respective sources (institutional funding, grants, contracts). The main question here was whether a given piece of equipment was sufficiently justified by the research agenda. Based on these six fundamental questions, applicants had to address them in their applications and tell their story, and evaluators had to review that story. The main advantage of this approach was the simplicity and related transparency of the agenda at any stage and for any actor involved: applicants, evaluators, funders. Further, it allowed evaluation to be connected with the follow-up negotiation of specific issues related to the six core components. The very fact of aligning the research agenda with the claimed equipment allowed funding claims to be cut by 200 MEUR.

W W W. E U R O P E A N E VA L U AT I O N . O R G

ABSTRACT BOOK

48

TH E 10 T H EES BIENN IAL CON F EREN CE 3 5 OCTOBER, 2012, HEL SINK I, FIN LAN D

As the system of evaluation criteria was rather simple, the process was likewise simple: (i) an evaluation by national experts securing compatibility with legislation and the existence of, and relevance for, national users; (ii) an evaluation by international experts focusing on the research agenda, key staff, key equipment and user groups; (iii) a consensus process; and (iv) a negotiation phase. The Czech case has the potential to serve as a role model for evaluating complex projects and for relating evaluation to results-based funding. The paper will provide both a review of practical experiences and an assessment of transferability to other contexts.


Keywords: Research infrastructure; Research policy; Structural funds; Performance contracts; Evaluation of research infrastructure; Management of research infrastructure;

O 065

Dealing with grand challenges in assessing environmental research


A. Ricci 1, E. Amanatidou 2, E. Kalpazidou Schmidt 3, K. Helming 4

Wednesday, 3 October, 2012

13:30 – 15:00

1 Institute of Studies for the Integration of Systems, Rome, Italy
2 Manchester Institute of Innovation Research, Manchester, United Kingdom
3 Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark
4 Leibniz-Centre for Agricultural Landscape Research, Müncheberg, Germany

Authors: Effie Amanatidou is a research associate at the University of Manchester, Institute of Innovation Research. Her research interests include research and innovation policy analysis, and foresight. Evanthia Kalpazidou Schmidt is research director at the Danish Centre for Studies in Research and Research Policy, Aarhus University, Denmark. Her main fields of interest are research policy and governance, evaluation and impact assessment. Andrea Ricci is Vice President of ISIS, the Institute of Studies for the Integration of Systems, Rome. His research interests include policy impact assessment, sustainable urbanisation and foresight. Katharina Helming is a senior researcher at the Leibniz-Centre for Agricultural Landscape Research, Germany. Her research interest is in methods and tools for policy impact assessment in the area of land use and agriculture. Rationale / Objectives / Narrative: The new strategic orientation in EU policy towards dealing with grand challenges places several demands on policy making as well as on evaluation practices. The nature of grand challenges, especially those related to the environment, cuts across several boundaries, be it in scientific disciplines, governance levels, stakeholder types, policy domains or business sectors. They call for inclusive and boundary-breaking approaches in policy making, while also leading to far-reaching impacts in very diverse fields and user groups in today's networked societies. These features pose serious challenges for evaluation and impact assessment. To name but a few, the need arises to consider various perspectives, deploy a variety of methods, combine information (both qualitative and quantitative) from heterogeneous sources, and try to identify abstract and often long-term impacts such as safeguarding public interests and quality of life. Based on the critical issues posed by the grand challenges orientation, the paper reports on a framework created for evaluation and impact assessment specifically for environmental research. It is based on a two-dimensional structure combining a user-oriented assessment approach (dimension one) with a leitbild-oriented approach (dimension two). This framework was tested in a recent assessment exercise, namely the work done by the stock-taking expert group set up by the European Commission with the aim of assessing the impacts of past and present environmental research in the light of designing the following framework programme, i.e. Horizon 2020. For this study the two assessment dimensions were populated as follows: three user categories for dimension one (the research community, business, and policy and society at large); and three leitbild categories for dimension two (addressing the grand societal challenges as defined in the European strategy Europe 2020; strengthening scientific and technological excellence and advancing the European Research Area; and establishing European added value). Results show that the two-dimensional analytical framework proved useful and operational. Limitations were set by the availability of data and missing indicators. Overall, the paper aspires to start and contribute to a fruitful dialogue about improved evaluation practices in fulfilling assessment tasks in the light of designing research and innovation policy responses to the environment-related grand challenges facing today's networked societies. Keywords: User oriented assessment approach; Grand challenges; Impact assessment; Environmental research; Evaluation methods and practices;
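The two-dimensional structure described above can be pictured as a simple matrix whose rows are the user categories and whose columns are the leitbild categories. The sketch below is a toy illustration only; the cell entries are placeholders and do not reflect the expert group's findings.

```python
# Build the 3 x 3 assessment matrix (user categories x leitbild categories).
import pandas as pd

users = ["Research community", "Business", "Policy and society at large"]
leitbilder = [
    "Grand societal challenges (Europe 2020)",
    "Scientific excellence and the ERA",
    "European added value",
]

matrix = pd.DataFrame("evidence to be collated", index=users, columns=leitbilder)
matrix.loc["Policy and society at large",
           "Grand societal challenges (Europe 2020)"] = "e.g. policy uptake of findings"
print(matrix)
```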


S5-04 Strand 5

Paper session

Evaluation and governance I


O 066

Toward institutionalization and utilization of the national evaluation system in the social policy sector in Mexico.
G. Perez Yarahuan 1
1

Universidad Iberoamericana, Social and Political Science, Mexico. D.F., Mexico

Wednesday, 3 October, 2012

13:30 – 15:00

In the past decade, Mexican society and government have engaged in efforts to create a diverse set of institutions for government transparency, accountability and evaluation, in the hope that these will improve the quality of its democracy and the performance of its public sector. What, if any, have been the effects of these institutions? In order to answer this question, this paper reviews the development of the Mexican government evaluation system. The research uses a sample of nine social development programs to investigate the impact of evaluation studies on their operation. The analysis is performed in order to see if and how the evaluation studies, carried out by independent, civil society or research institutions, have been used by public officials to change the programs' operational rules. In order to explain the results on the uses of evaluation of federal government programs, the research also uses a survey on evaluation use administered to public officials in different agencies of the federal government. Keywords: Utilization; Accountability; Social Programs; Developing Countries; Mexico;

O 067

The evaluation process of government actions in Côte d'Ivoire


K. S. A. Kouakou 1, M. Coulibaly 2, F. Soumahoro 2
1 Ministry of Agriculture, Directorate of Evaluation and Projects Control, Abidjan, Côte d'Ivoire
2 Ministry of Higher Education, Directorate of Monitoring and Evaluation, Abidjan, Côte d'Ivoire

For the coordination, monitoring and evaluation of governmental actions, the Government of Côte d'Ivoire, through the Ministry of State, Ministry of Planning and Development (MEMPD), has developed several tools, among which is the Governmental Actions Matrix (GAM). Implemented since 2002, the GAM is a logical framework that synthesizes the actions and activities financed by bilateral agencies and by the Government in order to assess objectively the various aspects (resource flows, implementation terms and process, outcomes and impacts) of a project, programme or development strategy. For results-based management purposes, this tool helps to monitor, measure and evaluate the progress of the priority actions programmed for the current year. It ensures consistent monitoring and effective governmental action in order to improve the formulation of policies and development strategies. With the advent of the Strategic Document for Poverty Reduction (PRSP) and ahead of the National Development Plan (NDP: 2012–2015), broken down into the annual Governmental Work Plan (GWP), the dynamics of adaptation to best practices in evaluation led in 2008 to the elaboration of the Governmental Actions Evaluation Booklet (GAEB). This new dynamic instrument of social and inter-ministerial dialogue now measures the performance of State action, while remaining faithful to the classic analysis of the annual review of the overall performance of government action. It integrates a folder called 'Crossing Viewpoints', which offers another opportunity for sharing ideas for the future and evaluative studies on emerging themes concerning the nation's sustainable development. The annual GAEB framework is based on four steps: (i) elaboration/validation of the sectoral matrix, (ii) mid-term review, (iii) annual review and (iv) drafting of the annual report. Keywords: Evaluation; Government action; Results-based management; Best practices;

O 068

Evidence-based policy and evaluation of primary health services. The case of the Insurance Fund IKA-ETEAM/Greece
K. Dimoulas 1
1

Panteion University, Social Policy, ATHENS, Greece

The new governance approach, based on networks and open consultation with diverse stakeholders, emphasises the levelling of decision-making hierarchies. In this context, evidence-based policy making has to respond to diversified demands, at least for reasons of documentation and legitimacy. Thus, evaluators are pressed to apply multiple research methods and techniques in order to combine direct, robust evaluation outcomes with the increasing demand for responsive public services. Following this direction, evaluation research acts as a stimulus to dialogue and reflection, assisting the rebalancing of power between diverse stakeholders. From this point of view, the traditional structural power of the social partners is at risk in certain fields of social policy, such as social insurance provisions and health services. In order to offset this trend, the unions often adopt an evidence-based social dialogue approach.

W W W. E U R O P E A N E VA L U AT I O N . O R G

ABSTRACT BOOK

50

TH E 10 T H EES BIENN IAL CON F EREN CE 3 5 OCTOBER, 2012, HEL SINK I, FIN LAN D

The paper presented here refers to the evaluation of the primary health services that the insurance fund IKA-ETEAM provides to its insured members, that is, employees in the private sector. IKA-ETEAM, financed from tripartite contributions, provides health insurance to more than 5.5 million people in Greece and, until the end of 2011, was providing health services to its insured members. At the beginning of 2012 these services were merged with the primary health services of the insurance funds for farmers, the self-employed and public servants, creating EOPYY (National Organization for Health Provision).


Faced with these challenges, the Labour Institute of the General Confederation of Greek Workers, via its Observatory for the Study of Social and Economic Conditions in Greece, financed an evaluation research project focused on the assessment of the current and structural characteristics of health provision by IKA-ETEAM. The aim of the project was to support trade unions in formulating an evidence-based argument and in elaborating their policy proposals for the development of the primary health services of EOPYY. The project was conducted in 2011 and was based on data gathered from the annually published administrative reports, which were organized into time series covering a period of ten years and then used to construct certain, albeit limited, performance indicators. This project demonstrates that the evidence-based approach to public consultation, in a context of absent reliable statistics, can be supported by the creative use of administrative records and performance reports. It also strengthens the trade unions' ability to negotiate in the era of network consultation and supports the argument that quantitative data can be used not only for authoritative control but also for the empowerment of non-governmental organisations, depending on the context of their use. Keywords: Evidence-based social dialogue; Open consultation; Assessment of primary health services; Indicators based on administrative reports;

Wednesday, 3 October, 2012

13:30 – 15:00

O 069

A Model for Financial, Economic and Risk Appraisal of Public-Private Partnership Projects
M. Uzunkaya 1
1

Middle East Technical University, Department of Business Administration, Ankara, Turkey

This paper develops a model for the financial, economic and risk appraisal of Public-Private Partnership (PPP) projects. Given the ever-increasing demand for better infrastructure and the decreasing capacity of public sector budgets, PPPs have become a popular alternative method in developing countries to finance and operate infrastructure projects. While PPPs offer promising results from both financing and efficiency viewpoints, they are also subject to a multiplicity of risks, the materialization of which can have detrimental effects not only on the realization of project benefits but also on public budgets, due to contingent liabilities. It is imperative, therefore, that PPP projects be evaluated ex ante with utmost care from economic, financial and risk perspectives, so as to ensure that both public and private interests are secured and to warrant bankability. Economic analysis is important particularly from the public sector viewpoint, which is more interested in socio-economic and environmental benefits and costs, while financial appraisal is critical for the private sector, which aims to reap financial benefits from the project. Risk analysis is important for both sides, since the materialization of risks can jeopardize both economic and financial benefits. The spreadsheet model developed uses the discounted cash flow technique and calculates deterministic and probabilistic decision parameters from financial and economic perspectives for a hypothetical bridge project. The model calculates the decision parameters from different stakeholders' viewpoints, such as government, equity and debt. Financial cash flows are converted into economic cash flows within the model using conversion factors, which are endogenously calculated. The decision parameters include deterministic values and probabilistic distributions of Financial NPV (total investment), Financial IRR (total investment), Financial NPV (equity), Financial IRR (equity), Financial NPV (government), Financial IRR (government), Economic NPV, Economic IRR, Discounted Payback Period, Debt Service Coverage Ratio, Externalities and Distribution of Externalities. The model is constructed in a way that accommodates Monte Carlo simulation, which is shown to provide useful insights for risk analysis and project decision parameters. Governments pursuing ambitious PPP programs and private parties aiming at profitable PPPs should take into account the dynamics of the different viewpoints in any PPP arrangement for successful PPPs that secure public and private interests. The proposed model has the potential to achieve this goal. Keywords: Project Appraisal; Financial and Economic Appraisal; Risk Analysis; Public-Private Partnerships; Private Finance in Infrastructure;
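The core mechanics, discounted cash flows plus Monte Carlo simulation of uncertain inputs, can be sketched in a few lines. This is not the author's spreadsheet model: the cash flows, discount rate and uncertainty assumptions below are invented, and only a single NPV indicator is computed rather than the full set of parameters listed above.

```python
# Discounted cash flow with Monte Carlo simulation over uncertain cost and revenue.
import numpy as np

def npv(rate, cash_flows):
    """cash_flows[0] is the year-0 investment (negative); later entries are net revenues."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rng = np.random.default_rng(42)
rate = 0.08                          # hypothetical financial discount rate
capex_mean, capex_sd = 180.0, 20.0   # construction cost, MEUR
base_revenue = 25.0                  # annual toll revenue, MEUR (hypothetical bridge)

npvs = []
for _ in range(10_000):
    capex = rng.normal(capex_mean, capex_sd)
    revenues = rng.normal(base_revenue, 3.0, size=25)   # 25-year concession
    npvs.append(npv(rate, [-capex, *revenues]))

npvs = np.array(npvs)
print(f"mean financial NPV: {npvs.mean():.1f} MEUR")
print(f"probability of negative NPV: {(npvs < 0).mean():.1%}")
```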


S1-26 Strand 1

Panel

International Organization for Collaborative Outcome Management (IOCOM) The value and contribution to evaluation in the networked society
O 070

Wednesday, 3 October, 2012

13:30 - 15:00

International Organization for Collaborative Outcome Management (IOCOM) The value and contribution to evaluation in the networked society
S. Premakanthan 1
1

Symbiotic International Consulting Services (SICS), Ottawa Ontario, Canada

IOCOM is a web-based organization of professionals, academia and an alliance of international and national organizations (associations, societies and networks engaged in the discipline of outcome management and development). The aim of IOCOM is to invite professionals and academia to form a forum for the exchange of useful and high quality theories, methodologies and effective practice in outcome management and development. IOCOM invites everyone interested in outcome management and development to make use of our resources, to participate in our initiatives and to contribute to our goals as an individual and/or through outcome and development management organizations. We offer global linkages to outcome and development management professionals and organizations, news of events and important initiatives, and opportunities to exchange ideas, practices, and insights with peers and associations, societies and networks throughout the world. The initiative was launched on January 28, 2010 to support the needs of a group of South Asian participants attending the 2009 International Program for Development Evaluation Training (IPDET) hosted by the World Bank and Carleton University in Ottawa, Canada. Since the launch it has grown into a world-wide organization with over 300 members from more than 50 countries. In a globally networked society, IOCOM plays a significant role in providing connectivity among professionals engaged in various disciplines with a common goal of understanding and measuring the worth of outcomes (results). Membership is free of charge and the network provides various benefits, from free access to resources to fora for sharing ideas. The IOCOM governance structure provides a voice for members in every country. The web-based network is a cost-effective way to facilitate the growth and recognition of evaluators throughout the world. There is immense evaluation power to be harnessed by the evaluation community. IOCOM will continue to strive to contribute to evaluation power in a networked society. Keywords: Outcome; Connectivity; Evaluation Power;


S2-11 Strand 2

Paper session

Evaluation in government and organizations


S2-11
O 071

Monitoring, evaluation and performance measurement in the administrations of Obama and Bush: what can Europe and the wider world learn?
E. Georgieva 1
1

European Commission, DG Enlargement, 1040 Brussels, Belgium

Wednesday, 3 October, 2012

13:30 - 15:00

Short bio: I am currently working as an Evaluation Officer at the EU Commission, DG Enlargement. I am task manager for a number of evaluation assignments, including country programme evaluations, multi-country thematic evaluations and internal evaluations. I am a graduate of the University of Maastricht (MA in European Public Affairs) and I am currently pursuing an MA in Development and Governance from the Centre Européen de Recherches Internationales et Stratégiques, Brussels. Rationale: The U.S. government has a long history of concern with accountability and oversight, which has continued and further evolved during the administrations of Bush and Obama. Europe, on the other hand, has become increasingly concerned with results and performance measurement only relatively recently. Therefore, it would be useful to look at what systems have been put in place by the current and last US administrations in terms of performance measurement, in particular monitoring and evaluation (M&E) arrangements, compare them with the systems prevailing in Europe (and other regions of the world) and analyse whether Europe can draw inspiration, experience and lessons learnt from the US experience. Objectives: To add to the debate on evaluation (and monitoring) in governments by analysing the monitoring and evaluation, and more broadly performance measurement, systems in the US Government and to arrive at possible lessons and spillovers for European governments. Brief narrative and justification: In his inaugural speech, President Obama stated as one of his administration's priorities the ambition to improve government performance. He pointed out that only programmes which work and deliver should continue, while those which don't should end. In this way he intended to restore the trust between the American people and their government. Similar efforts are being undertaken by governments across Europe and around the world, which have realised that in today's networked societies, in which information is more easily available than ever, there is an increased demand by citizens to know what their governments are spending taxpayers' money on and how successful the programmes/policies are. In that context, the need to have reliable performance measurement systems has become pertinent. An important part of such systems are the M&E arrangements put in place by the government to gather and analyse data about the results and impacts of their interventions. The chosen topic addresses the overarching theme of the Conference and is of relevance to the evaluation community, as the paper will explore whether innovative approaches to M&E in the USA, such as the use of new technologies or original legal frameworks, can bring valuable experience to Europe and other parts of the world. The added value of the paper will be to provide (European) readers and experts in M&E with insights about US approaches to performance measurement. The EES Conference presents itself as a valuable opportunity to exchange views among the participants about the different M&E systems on the Continent and across the Ocean. Keywords: Performance; Measurement; Monitoring; Evaluation; USA;

O 073

An evaluation framework to deal with organizational constraints in the evaluation of community based multi-level intervention approaches in The Netherlands
M. Van Koperen 1, C. Renders 1, J. Schuit 2, J. Seidell 1
1 2

VU University, Health Sciences, Amsterdam, Netherlands RIVM, Public Health, Bilthoven, Netherlands

Introduction: Worldwide, the prevalence of overweight and obesity is increasing. The most promising way of tackling obesity and promoting healthy behavior is a community-based multi-level intervention approach. An increasing number of municipalities in the Netherlands are developing and implementing such an approach. Evaluation of such a complex approach is obviously of utmost importance for program improvement and sustainability. However, in addition to methodological problems, organizational and contextual problems also challenge a thorough and successful evaluation. For example, the evaluation has to deal with: the available knowledge on the evaluation of multiple interventions in multiple settings directed at multiple target groups; local capacity to plan and organize the evaluation; demands of funders and stakeholders; budget and time; evaluation culture; and the dynamics and flexibility in the development and implementation of these approaches. A systematic planning tool might help program management to deal with these obstacles and build local evaluation capacity. The purpose of this study is to find an existing planning tool or evaluation framework to support the evaluation of community-based multi-level intervention approaches in The Netherlands.



Methods: A literature search was carried out in PubMed and on the internet to collect studies, literature and guide books in which planning tools and evaluation frameworks have been described. Inclusion criteria are: a systematic description of the evaluation planning process of a complex intervention, a scientific basis, and the applicability of the planning tool or evaluation framework to various health behaviors. The organizational obstacles and constraints of the evaluation of a community-based multi-level intervention approach mentioned above are transformed into criteria to which a supportive planning tool or evaluation framework should pay attention. Examples of criteria are: resource allocation, evaluation culture, evaluation capacity building, process and effect evaluation, use of a program theory, etc. The planning tools and evaluation frameworks are then rated by two researchers against the determined set of obstacle and constraint criteria. Preliminary Results: Nine evaluation planning tools and frameworks have been included. The characteristics of these planning tools and frameworks, and their scores on the obstacle and constraint criteria, will be presented at the conference. Discussion: The most appropriate framework to support the evaluation of community-based multi-level intervention approaches will be presented at the conference. In a next step this planning tool or evaluation framework will be translated, possibly expanded, and pilot tested by Dutch evaluation experts, program managers and stakeholders. Subsequently it will be tested on utility and feasibility for evaluating a community-based multi-level intervention approach in the Netherlands. Keywords: Evaluation framework; Health; Multi-level; Community-based;
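A minimal sketch of the scoring step described above, with entirely hypothetical framework names, criteria and scores: each of two researchers rates every candidate framework against the obstacle-and-constraint criteria, and the per-framework totals (plus a simple agreement check) support the selection. None of the data below comes from the study itself.

# Hypothetical example of rating candidate evaluation frameworks against
# organizational-constraint criteria; names and scores are invented.
criteria = ["resource allocation", "evaluation culture", "capacity building",
            "process and effect evaluation", "use of a program theory"]

# Scores 0-2 per criterion, one dict per researcher.
researcher_1 = {
    "Framework A": [2, 1, 2, 2, 1],
    "Framework B": [1, 1, 1, 2, 2],
    "Framework C": [0, 2, 1, 1, 1],
}
researcher_2 = {
    "Framework A": [2, 1, 1, 2, 1],
    "Framework B": [1, 0, 1, 2, 2],
    "Framework C": [1, 2, 1, 1, 0],
}

for name in researcher_1:
    s1, s2 = researcher_1[name], researcher_2[name]
    total = sum(s1) + sum(s2)
    exact_agreement = sum(a == b for a, b in zip(s1, s2)) / len(criteria)
    print(f"{name}: combined score {total}, "
          f"agreement on {exact_agreement:.0%} of criteria")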

Wednesday, 3 October, 2012

13:30 - 15:00

O 074

Technological innovations providing affordable services and reliable information in a developing economy
R. Shori 1
1

Evaluation Society of Kenya, Nairobi, Kenya

In Kenya the advent of mobile telephony has had a major impact on the lives of all its citizens. Not only is the mobile telephone the easiest and most affordable mode of communication, it also provides essential financial services through the M-PESA facility and current market information to rural farmers using the mobile phone Short Messaging Service (SMS). Banking facilities can be scarce in some geographic regions, particularly in rural areas, and many people without bank accounts find it difficult or expensive to transfer money through traditional banking services. Carrying cash instead of being able to effect electronic transfers is also a security risk. M-PESA, translated as mobile money, is a mobile money transfer service launched in Kenya in March 2007. M-PESA allows mobile phone subscribers to transmit as little as Sh50 (US$1 = 82 Kenya Shillings) in seconds; it was a first in the world and is now being emulated by countries like South Africa, India and Afghanistan, which have launched similar money transfer services. In Kenya, the value of M-PESA transactions from 2007 to March topped Sh828 billion, or half of the country's GDP, and last year's worth of business reached Sh47 billion. Currently, it boasts over 14 million customers and about 28,000 agent outlets across the country. Similarly, small-scale farmers, who constitute 70 percent of the agricultural sector in Kenya, can now access reliable and timely marketing information 24 hours a day, all week, through SMS for a small fee on their mobiles. This service is facilitated by the Kenya Agricultural Commodity Exchange Limited (KACE), a private sector firm, together with mobile service providers like Safaricom and Zain. Farmers, agribusinesses and other interested users who are mobile phone network subscribers download KACE market information as SMS messages. KACE has developed a marketing information and linkage system (MILS) designed to facilitate competitive and efficient trade in agricultural commodities and services in Kenya, with the aim and potential of scaling out in the East African Community region. Through MILS, KACE collects, updates, analyses and provides reliable and timely marketing information and intelligence on a wide range of crop and livestock commodities, targeting actors in commodity value chains, with particular attention to smallholder farmers and small-scale agribusinesses. The information includes daily wholesale buying prices for various crop and livestock products in selected main markets in the country, as well as commodity offers to sell and bids to buy. This initiative is an example of harnessing the power and advantages of modern ICTs for information collection, processing and delivery. The paper attempts to provide information on the utility of mobile technology, one of the biggest and most influential players in the current economy of Kenya, by looking at M-PESA and the SMS service provided by KACE. Keywords: Technological Innovations; M-PESA; Mobile Technology;


S4-24 Strand 4

Paper session

Monitoring, ongoing and ex-post evaluation


S4-24
O 075

Lessons from Indian Flagship Programmes: The Disconnect for Evaluation Framework
I. C. Awasthi 1
1

Institute of Applied Manpower Research, New Delhi, India

Wednesday, 3 October, 2012

13:30 - 15:00

The paper critically examines the Management Information System (MIS) in thirteen Indian flagship programmes that have been under operation for at least a few years in some cases, and a few decades in others. Yet the MIS for these programmes has not reached maturity and, more so, as many as five programmes do not have any MIS worth the name. In this work, we have conducted an objective investigation by evaluating the official websites of Indian flagship programmes. The paper examines the efficacy of the management information systems in flagship programmes and checks how credible the MIS can become as an effective tool for management. The MIS is critical for generating information and conducting results-based monitoring and evaluation. This paper argues that the MIS is still in its infancy and does not adhere to the principles of MIS in most of the flagship programmes. There is a large number of centrally sponsored and central sector schemes being implemented through the different Ministries across the country. With enormous diversity in the implementation hierarchy across space, it is all the more important to have information about the physical and financial details of a project or programme in order to monitor progress. Government has realized the need for output and outcome monitoring of plan schemes. Massive public investments have been made, and without any credible MIS and monitoring system in place the efficacy and effectiveness of these programmes will remain largely unknown. Clearly, there is a disconnect between information gathering, monitoring and measuring impacts. The paper argues that there is a need for a monitoring and evaluation framework in place in every programme that eventually helps to improve the design and delivery of projects, programmes and policies. There ought to be visibly clear linkages between evaluation findings and resource allocation in order to narrow the hiatus between the intents and outcomes of the programmes. It is of paramount importance to institutionalise a credible MIS in every major project or programme with a detailed conceptual framework. Massive public investments are being made in development programmes, and obviously governments and other stakeholders want to know how well and to what extent the delivery mechanism is achieving the desired goals or intents of policies. There is, therefore, a need for an M&E framework in place in every project or programme that eventually helps to improve the design and delivery of projects, programmes and policies, and it must move beyond an emphasis on inputs and outputs to a greater focus on outcomes and impacts or results. A well designed MIS facilitates the flow of information among various levels and enables the setting up of the necessary feedback mechanisms for planning and management of a programme, project or policy. The MIS communicates the relationships between budgets, activities, outputs, and progress and impact indicators. It is an approach towards organisational and management solutions to challenges posed by internal situations and the external environment.

O 077

On-going evaluation, a tool to steer the evaluation process


M. Van Soetendael 1, J. Tvrdonova 1, J. Wimmer 1
1

The Helpdesk of the European Evaluation Network, Brussels, Belgium

According to Article 86 of European Council Regulation 1698/2005, within the programming period 2007-2013 the EU Member States are required to set up and run a system of on-going evaluation for each rural development programme. The programme Managing Authorities and Monitoring Committees shall use this system for examining the progress of programme implementation, improving its quality and preparing the mid-term and ex-post evaluations. They shall regularly report on the progress achieved in the development and implementation of the on-going evaluation in the form of a chapter in the Annual Progress Reports (APRs). The objective of this paper is to highlight the role of the on-going evaluation in steering and implementing the evaluation of rural development programmes in the Member States and to provide the state of play in the practical application of the on-going evaluation across the European Union. Using several sources (Annual Progress Reports of Rural Development Programmes, interviews conducted with representatives of Managing Authorities and evaluators, etc.), the paper presents an overview of approaches to on-going evaluation, providing answers to the following questions: How is the on-going evaluation organized in the Member States, e.g. is it implemented by the Managing Authority or outsourced to an external evaluator, what are the roles and responsibilities of evaluation stakeholders and how are they coordinated in conducting evaluation tasks, etc.? What are the tasks and practical challenges in implementing the on-going evaluation, e.g. what resources are allocated to conduct these tasks, how is the monitoring and data collection/management organised in line with evaluation methods, what is the role of capacity building, how is the interaction between the evaluator and programme delivery organised, what is the role of networking, etc.? What are the main achievements of the on-going evaluation, e.g. have the monitoring, data collection and evaluation methods been improved, have the conducted studies enhanced the evaluation results, how have these results been used in policy design, has the evaluation culture improved, has the communication among evaluation stakeholders been strengthened, etc.?

O 078

Development of a measurable, reportable, and verifiable system for nationally appropriate mitigation actions in Indonesia
H. Umi 1
1

UNICEF, Social Policy Monitoring, Jayapura, Indonesia


In 2009, at the G-20 meeting in Pittsburgh and at COP15 in Copenhagen, the President of the Republic of Indonesia committed to achieve a target of a 26 % reduction in carbon emissions from the Business As Usual scenario by 2020. Further emission reductions of 41 % are expected with international support. With regard to this commitment, Indonesia is presently preparing to implement measures towards actions limiting carbon emission growth and in relation to sustainable development. In 2010, at COP-16 in Mexico, the Parties further decided that internationally supported nationally appropriate mitigation actions (NAMAs) will be measured, reported and verified (MRV) domestically and will be subject to international measurement, reporting and verification, while domestically supported mitigation actions will be measured, reported and verified domestically. Indonesia now continues its efforts to implement its target with the development of a national policy framework on climate change, which includes the initial National Action Plan on Climate Change; National Development Planning: Indonesia's Response to Climate Change, 2008; and the Indonesia Climate Change Sectoral Roadmap (ICCSR), 2010. The Government of Indonesia is familiar with and experienced in the use of Monitoring and Evaluation (M&E) systems, and this knowledge will form the basis for the development of an MRV system. However, there are some aspects of the currently used M&E systems that will need additional attention to meet the requirements of an effective MRV system: 1. Institutional arrangements 2. M&E systems standardization 3. Frequency of reporting 4. Data and information utilization 5. Coordination issues 6. Lack of alignment and harmonization of the many regulations that refer to monitoring, evaluation and reporting, and 7. Lack of M&E capacity among staff in some offices. A study conducted by the author found gaps and a lack of compliance related to key M&E principles, including the horizontal and vertical flow of information, the need for information at each level, responsibility at each level, a monitoring system strategy that includes a data collection and analysis plan, and pretesting or piloting of data collection instruments and procedures. Meanwhile, the gaps linked to key reporting principles mostly relate to who will receive what information, in what format, when, and who will prepare and deliver the information. Gaps for verification also relate to who will conduct the verification, the responsibilities and tasks, the verification format, and the scope of verification. The development of standardized MRV indicators is critical and has been the major challenge so far. The degree of detail of the indicators will differ between unilateral, supported and credited NAMAs, and this will burden the further MRV implementation. Despite the positive progress, several aspects still raise questions, mainly related to: 1. Clarity of mandate/responsibility for related institutions at national and sub-national level 2. National, sub-national, and sector operational guidelines 3. Technical assistance/capacity building, and 4. The time frame to accomplish several activities related to MRV system development. The study proposes three functional elements or boards at the national, province and district/city level. It also identifies a framework for a possible MRV system, outlining an effective and transparent flow of data and information that follows established monitoring and reporting principles, including verification capability.
Keywords: Nationally Appropriate Mitigation Actions (NAMAs); Measurable Reportable and Verifiable (MRV) System; Monitoring and Evaluation (M&E) Systems; Inventory; National Action Plan for Reducing Greenhouse Gas Emissions (RAN-GRK);



S1-20 Strand 1

Panel

The international evaluation partnership initiative


S1-20
O 079

The international evaluation partnership initiative


M. Saunders 1, IOCE Board members 1
1

IOCE, Lancaster, United Kingdom

Wednesday, 3 October, 2012

15:15 - 16:45

EvalPartners: a proposal for a panel discussion from the IOCE Board. The panel will introduce and discuss the international evaluation partnership initiative (EvalPartners), which is being developed by the International Organization for Cooperation in Evaluation (IOCE) and UNICEF, in partnership with the Government of Finland. The partnership is intended to enhance the capacities of Civil Society Organizations (CSOs) to influence policy makers, public opinion and other key stakeholders so that public policies are based on evidence and incorporate considerations of equity and effectiveness. The objective of the Initiative is to enhance the capacities of CSOs to engage in a strategic and meaningful manner in national evaluation processes, and to influence country-led evaluation systems. The expected outcomes of this initiative are three-fold: A. Voluntary Organizations of Professional Evaluators (VOPEs) have a strengthened institutional capacity. B. VOPEs are able to play a strategic role within their countries, contributing to country-led evaluation systems and policies, including by having better access to support by regional and international networks/associations (including IOCE and the more developed VOPEs) and institutions (including, inter alia, UNICEF), sharing lessons learned from similar experiences in other countries, and peer-to-peer mutual support. C. VOPE members have stronger evaluation capacities, including by attending live webinars with international keynote speakers, e-learning programmes, mentoring programmes, and training organized by local institutions and more developed VOPEs. A growing international evaluation community: In the last decades, Civil Society Organizations have been playing increasingly central and active roles in promoting greater accountability for public actions through evaluation. National and regional VOPEs grew from 15 in the 1990s to more than 120 today. There is tremendous scope for exchanges of home-grown and country-driven solutions, ideas and experience to support capacity development in evaluation. The panel will discuss the initiative and invite contributions from the floor on: 1. The concept of EvalPartners and the context of evaluation's role in civil society 2. Ways in which participation might take place 3. What activities might be part of EvalPartners 4. Future steps. These issues will be debated by members of the IOCE Board (Murray Saunders), representatives from UNICEF (Marco Segone) and from the Finnish Government, and by participants from a regional VOPE (Maria Bustelo from the EES). Keywords: Initiative; Civil society; Capacity development;


S5-26 Strand 5

Panel

Payment by Results: What results? How should future evaluations of such approaches be undertaken?
O 080

Payment by Results: What results? How should future evaluations of such approaches be undertaken?
Wednesday, 3 October, 2012
15:15 - 16:45
B. Perrin 1, A. Henttinen 2, N. Stame 3, H. E. Lundgren 4
1 2

independent consultant, Vissec, France UK Dept. for International Development (DFID), Human Development Evaluation Adviser, New Delhi, India 3 University of Roma, Sapienza, Rome, Italy 4 Organisation for Economic Cooperation and Development (OECD), Development Centre DAC Evaluation Network, Paris, France

There is considerable international interest in Payment by Results (PBR) mechanisms, also variously referred to as results-based (or performance-based) aid (or financing), cash on delivery, paying for performance, or contracting for delivery of services. This approach is part of a wider UK government agenda being piloted by various government departments, and is closely linked to establishing value for money of expenditures on development aid. This approach is also advocated by other respected international organisations such as the World Bank and the Center for Global Development. A key element of PBR involves paying partner governments or service providers for verifiable results achieved, rather than payment for inputs. This is based upon the viewpoint that most development aid is tied to expenditures and activities undertaken, with limited accountability for the achievement of demonstrable results and benefits. Proponents argue that linking payments to verifiable results achieved can both provide for more innovation and ensure that actual benefits are achieved. Yet, to date, there have been limited evaluations of such approaches. Without meaningful evaluation, there is a danger of PBR approaches in the future being driven by ideology rather than by evidence of effectiveness. Thus DFID has commissioned Burt Perrin to undertake a study to: identify completed evaluations of PBR approaches and provide a synthesis of the evidence base; carry out a critique of the quality of these existing evaluations, including the methods used; and provide recommendations for approaches and methods for future evaluations of PBR programmes. This session, closely linked to the conference theme and objectives, will provide an opportunity to share the findings of this recently completed study with the international evaluation community, and in particular to provide for input about what forms of evaluation approaches would be most appropriate. Given the timing of this work, contributions of participants may be used to inform future approaches to the evaluation of such aid instruments. Anna Henttinen will act as Chair and will provide the background to this work, indicating why there is increasing interest in PBR approaches, a synopsis of current activities, and why DFID commissioned this work. Anna is with DFID. Burt Perrin will present the findings of the study, in particular highlighting implications for the types of evaluations required, and will pose questions for consideration. Burt is an independent consultant, providing guidance and quality assurance about evaluation methodology to international organisations, governments, and NGOs worldwide. He is past Secretary General of the European Evaluation Society and past Vice President of the International Organisation for Cooperation in Evaluation. This will be followed by critical reflections from two distinguished discussants: Nicoletta Stame, Professor at the University of Roma Sapienza, past President of the European Evaluation Society and of the Italian Evaluation Association, who has written about evaluation methodology with particular reference to theory-based and impact evaluation approaches; and Hans E. Lundgren, Manager of the DAC Evaluation Network, OECD. This will be followed by audience discussion and debate. Keywords: Payment by results; Development aid; Results-oriented approaches; Results-based aid;


S1-08 Strand 1

Paper session

Evaluation for improved governance and management I


O 081

From CONTROLand to EVALand: How to present an evaluation framework by using a metaphor on travel
Wednesday, 3 October, 2012
15:15 - 16:45
A. Marjnovity 1
1

National Development Agency, Coordination Managing Authority Unit for Evaluation, Budapest, Hungary

Networked society is hungry for new information every second. News has to be informative, prompt and quickly accessible, yet easy to digest at the same time. There is no time or interest for complex and academic ways of communicating. One of the many ways to wrap up information and draw attention is the use of metaphors. Metaphors are understandable within a community connected by the same cultural and social background. The following shows how a less exciting topic can be presented in a quickly understandable and easy-to-remember way by using a simple metaphor on travel. The National Development Agency (NDA) in Hungary, which deals with EU funds and is responsible for operational programme planning, monitoring, calls for applications and the operation of the institution system, has launched an evaluation framework this year. Evaluations focus on programmes. For example:

Reality / Metaphor
NDA Evaluation Unit / Travel Agency
Evaluator company who won a lot / Travel Agent
Users of the evaluations (policy makers, development policy experts, stakeholders, etc.) / Travellers, Tourists
Terms of Reference, aim of evaluation / Travel terms and conditions
Evaluation / Journey
Dissemination of findings, conclusions and recommendations / Photo slide show after coming back home

Objective of the Travel Agency: to transport tourists and travellers from CONTROLand to EVALand. The Travel Agency's mission emphasizes for travellers how colourful the world is, where evaluations show the many effects of a measure which has been taken to improve an economic or social phenomenon. Meeting requirements according to the letter of the law or financial regulations is only one side of the story. Control supervises whether the money-spending process meets regulations or not. Evaluations say more than this. The Travel Agency therefore hires travel agents in order to take tourists and travellers from CONTROLand to EVALand. The travel agents consist of two groups. Task of Group 1: defining the method of transport. Task of Group 2: offering different types of travels. With a detailed continuation of this metaphor, it can be seen how a well-used and visualized instrument makes any information memorable, even in our recent networked society where information has to catch the eyes and mind at the same time. Keywords: Evaluation framework; Managing evaluations;

O 082

How to teach the public sector to innovate


F. Hansson 1, M. T. Norn 2, T. Vad 2
1 2

Copenhagen Business School, Department of management politics and philosophy, Copenhagen, Denmark DAMVAD, Copenhagen, Denmark

The public sectors in most countries are under pressure to produce more for less money. The demands for efficiency have grown as a result of the recent financial crises, but efficiency was already on the agenda years ago when new public management set up new agendas for the role and functions of the public sector. After some years of more traditional approaches to modernization and efficiency, the focus turned to how to foster innovation in the sector. The use of research to design programs to be implemented from outside came early, but without marked success. The next step was to use research on an ongoing basis, as a more or less integrated part of the innovation process in the sector, through ongoing collaboration between researchers and public administrators. In other fields outside the public sector, new and less linear models for organizing innovation processes have demonstrated their usefulness; models like user-driven innovation and open innovation have redefined the actors and their roles in the innovation process. In the public sector we have rather limited knowledge on how to design and implement the use of research to design and support innovation processes and define new roles for actors. It is

also an area where rethinking evaluation is necessary. Most of the research done to improve the public sector has, like most public programmes, been evaluated, but in relation to the individual programme and its results. The long-term perspective on learning by evaluation, taking into account the long-term process of implementing research results in the modernization process, is limited, especially in relation to how to organize an ongoing integration of research in order to foster innovation processes in the public sector. Drawing on a case, an evaluation of the Programme on Research for Innovation and Renewal in the Public Sector (abbreviated FIFOS) under the Research Council of Norway, the paper will discuss how to extract important knowledge on how a research programme should be formulated, established and implemented to be able to have an impact on the public innovation research agenda and on public innovation issues in general. The paper will use the study of this case to argue for the need to discuss new roles for evaluations of public programmes, roles where evaluation is much more integrated in the programmes. The programme discussed in this paper, which ran from 2002 until 2008, was born out of the Norwegian government's resolve from the mid-1990s onwards to establish the world's most competent public sector, created and run by the best educated public service employees, through innovation and renewal. Moreover, the government aimed to make Norway one of the top five exporters of expertise in advanced public services, administration and democracy.


As such, the results of the FIFOS evaluation point to several key challenges for policymakers seeking to stimulate public sector innovation. The results of this evaluation are therefore relevant for policymakers in other programmes and countries seeking to promote (research on?) public sector innovation, because the Research Council of Norway and the FIFOS programme were frontrunners in Europe in their efforts to stimulate renewal and innovation in the public sector. Policymakers elsewhere are increasingly turning their attention to (and investing funds in) public sector innovation; as such, the results of the evaluation of FIFOS can yield valuable insights and lessons that can inform the design, implementation and evaluation of current and future programmes to support (research on?) public sector innovation. Keywords: Policy implementation; Use of research; Learning by evaluation;

O 083

Back-Seat Driving: How to Improve Usability and Methodological Rigor in Evaluations


L. Tagle 1, G. Moro 2
1 2

Evaluation Unit of Regione Puglia, Bari, Italy Evaluation Unit of Regione Puglia, University of Bari, Bari, Italy

The Evaluation Unit of the Puglia Region in Italy: coordinates the Region's Evaluation Plan, which includes the evaluations of interventions financed by national funds, by the ESF, and by the ERDF; directly conducts evaluations and studies preliminary to evaluations; and manages external evaluations. Staffed with professional evaluators and researchers, and embedded in the Region's administrative structure, the Unit strives to improve evaluation quality and use. Its activity is based on the principle that the quality of an evaluation depends just as much on the evaluation question and on the commissioner's ability to provide support and to dialogue with the evaluators as on the evaluators' skills and prowess. Keen on maintaining its autonomy, the Unit believes that internal evaluations (both those directly conducted by internal bodies and those outsourced to external evaluators) have great importance, because they produce specific knowledge and facilitate utilization of evaluation results. Internal evaluators possess knowledge that is rarely acquired by external ones, and are in a unique position to ensure utilization. The paper focuses on the Unit's experience managing external evaluations, i.e., evaluations that Regional offices (most often Structural Funds Managing Authorities) contract out to companies or individual consultants. The Evaluation Unit supports regional offices in requesting evaluations. It also ensures technical management of the evaluations, by constituting and coordinating Steering Groups. The paper contrasts the actual experiences of the Unit with the available literature. All external evaluations of the Region are assisted by a Steering Group. Depending on the evaluation and its features, Steering Groups play various roles: ensuring participation of social partners with widely differing standpoints on the policy of interest, allowing for a forum in which to discuss successive refinements of evaluation questions, or ensuring that evaluators can enter into a dialogue about methods with specialized individuals in evaluations which present technically difficult features. The Evaluation Unit shapes the composition of each Steering Group to adapt to the particular task at hand, with a pragmatic approach. A particular challenge is that of ensuring representation of key stakeholders while keeping the group manageable and functional. Depending on the task at hand, Steering Groups include members of other Regional and Central Evaluation Units, the Authority responsible for gender equality, the Region's offices which are interested in the interventions to be evaluated, and the major stakeholders. Stakeholders include trade unions, employers' associations, and universities. These comprehensive compositions of Steering Groups aim at building a regional network of social actors interested in and competent on evaluation. This, in turn, aims at making possible various forms of utilization of evaluation results. The paper discusses the reasons behind the choices, collects evidence about the functioning of the Steering Groups, and extracts lessons from experience. Keywords: Quality of Evaluations; Management of Evaluations; External Evaluation; Internal Evaluation Units;


O 084

Measuring Results of Irrigation Projects: Lessons from the Adoption of Rapid Irrigation Project Performance Assessment Framework
R. Baoy 1
1

Pilipinas M&E Society, Quezon City, Philippines


This paper discusses some of the knowledge and learning from the adoption of the Rapid Irrigation Project Performance Assessment (RIPPA) framework in an ad-hoc evaluation study of six completed foreign-assisted irrigation projects in the Philippines. Apart from allowing rapid measurement of project performance, the study has proven the effectiveness of the RIPPA framework for assessing outcomes accruing from completed projects, the findings of which could feed into the periodic and more formalized evaluations conducted by funding agencies. Premised on a results-based evaluation framework, the study demonstrated the importance of stakeholder participation in project evaluation through participatory methodologies commonly used in rapid rural appraisal, such as structured problem analysis, focus group discussions and transects, among others.


Moreover, the study has shown that less formal evaluation approaches such as RIPPA are useful in analyzing project issues and identifying solutions to improve project performance. In the course of the study, stakeholders gained valuable insights on how to analyze problems in a logical manner using structured problem analysis, suggest solutions to problems using structured objectives analysis and formulate action plans for improving irrigation system performance. Based on findings from RIPPA, the study recommended measures for addressing post-project completion issues and sustaining project benefits over the long term. Insights on measuring the results of irrigation projects using rapid and participatory methods were drawn at the end of the study. The author is a development management specialist with an agricultural engineering background and over 15 years of experience in planning, monitoring and evaluation of agriculture and rural development projects in the Philippines and Southeast Asia. Keywords: Results-based evaluation; Rapid performance measurement; Stakeholder participation


S4-21 Strand 4

Paper session

Evaluation of local, regional and cross border programs I


O 085

The evaluation of cross-border programmes in Romania: The valorization of expertise from the academic environment
Wednesday, 3 October, 2012
1 5 : 1 5 1 6 : 4 5
I. Horga 1
1

University of Oradea, Institut for Euroregional Studies, Oradea, Romania

This paper aims to present the evolutionary stage of the evaluation process of European Territorial Co-operation, and especially that of the cross-border programmes. In our analysis we start, on the one hand, from the European expertise in the assessment of cross-border programmes, especially those developed through the INTERACT network, associating them with other local enterprises. On the other hand, our analysis takes into account the evaluation of two of the cross-border programmes between Romania and EU countries (the Romania-Bulgaria Cross-Border Co-operation Programme and the Hungary-Romania Cross-Border Co-operation Programme). The Universities of Craiova, Oradea and Timisoara possess the scientific expertise to be involved in the assessment process of the cross-border programmes, in all its phases. Keywords: European Territorial Co-operation; Cross-border programmes; European expertise;

O 086

What kind of evaluation for local development?


F. Pesce 1, B. Dente 2, E. Melloni 2, C. Vasilescu 2
1 2

IRS Istituto per la Ricerca Sociale, Bologna, Italy IRS Istituto per la Ricerca Sociale, Milan, Italy

The paper builds on the main findings of the Study on the contribution of local development in delivering interventions co-financed by the European Regional Development Fund (ERDF) in the periods 2000-06 and 2007-13, which IRS, in collaboration with IGOP, carried out for the European Commission, DG REGIO. Starting from the clarification of a cluster of concepts linked to the notion of local development, in order to strive for a shared and sharper definition of what is and what is not the local development approach (LDA), five regional case studies were carried out, aimed at deepening knowledge of the local development approach in place in the analyzed regions, its characteristics, its evolution over time, its results in tackling social, economic and territorial development problems, and the main mechanisms that condition the success of the LDA in the region. All case studies were based on a description of the socio-economic and political context of the region considered and of the main characteristics of the interventions co-financed by the ERDF in the 2000-06 and 2007-13 programming periods (types of policies promoted, amounts of funds dedicated, type of LDA approaches involved, continuity/changes between the two programming periods, and links between the ERDF interventions and other national/European/international funds). Within the study framework, the paper will pinpoint the main contributions coming from using a Local Development Approach, and the implications for policy planning at the European, national and regional level and for policy implementation at local level. Particular attention will be devoted to the different evaluation tools, methodologies and approaches that could be used in evaluating the effectiveness of the interventions using the LDA, and to the effect of the LDA on territorial governance and on policy and actor integration: i) network analysis of the different typologies of actors involved; ii) analysis of how the LDA works (i.e. which contextual, process and policy design characteristics can be identified as mechanisms able to explain the successes or failures of the approach). Keywords: Local Development; Cohesion policy; Multilevel governance; Policy and actor integration; European Regional Development Fund;
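As an illustration of the first of these evaluation tools, a hedged sketch of what a network analysis of actor typologies might look like in practice. The actors, ties and the use of the networkx library are assumptions made for illustration only and are not drawn from the study.

# Hypothetical actor network for a local development partnership;
# requires the networkx package (pip install networkx).
import networkx as nx

G = nx.Graph()
ties = [
    ("Managing Authority", "Municipality"),
    ("Managing Authority", "Regional Agency"),
    ("Municipality", "SME association"),
    ("Municipality", "Local NGO"),
    ("Regional Agency", "University"),
    ("SME association", "Local NGO"),
]
G.add_edges_from(ties)

# Degree and betweenness centrality suggest which actors broker
# the partnership and how integrated the network is.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for actor in G.nodes:
    print(f"{actor:20s} degree={degree[actor]:.2f} "
          f"betweenness={betweenness[actor]:.2f}")
print("Network density:", round(nx.density(G), 2))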

O 121

Using Peer Review to validate and improve evaluation approaches


C. Michaelis 1, B. Leach 2
1 2

Databuild Research and Solutions, Birmingham, United Kingdom WRAP, Banbury, United Kingdom

WRAP (Waste & Resource Action Programme) is responsible for delivering resource efficiency programmes for the governments of the four nations in the UK (England, Scotland, Wales and Northern Ireland). Evaluation is a crucial aspect of WRAP's work, helping the organisation account to funders and show that money has been spent wisely. WRAP reports publicly on its impacts at the end of each business planning period, which is typically three to four years. Peer review is a widely used and well established technique, particularly in the publication of scholarly articles. The Oxford English Dictionary defines peer review as "evaluation of scientific, academic, or professional work by others working in the same field". WRAP uses peer review of evaluation extensively because it provides a number of benefits, including:
Adding rigour and credibility to the work of the internal evaluation department
Enabling WRAP to validate its methods
Helping to comply with best practice and stay up to date with new thinking and approaches
Compliance with Government Social Research Network guidance.

Databuild specialises in evaluation of government policies and programmes with a particular focus on the environmental and sustainability fields. Databuild has worked with WRAP for several years and has conducted peer reviews of WRAP's overall evaluation methodology and of specific evaluations. There is a range of issues and challenges in providing and using peer reviews. For example, issues for the commissioner include:
The extent to which peer review should be arm's length, or whether there are benefits from closer working with peer reviewers


Identifying an appropriate peer reviewer, and how many there should be
Enabling the reviewer(s) to obtain sufficient understanding while not incurring excessive time or cost
Managing the risk that the peer review(s) may criticise methods and/or conclusions
Deciding when to accept or reject recommendations by the peer reviewer(s)
Deciding whether or not to publish the review
Issues for the reviewer include:
Maintaining independence and resisting client pressure for a positive review


Striking the appropriate balance between the best solution and an appropriate solution. The paper is particularly relevant to the conference theme of ethics, capabilities and professionalism. It will describe the peer review process as undertaken by WRAP and the benefits that have been obtained. It will describe how WRAP has worked with a network of experts to address the challenges and issues above while still ensuring objective and robust reviews. Keywords: Sustainability; Government; Value for money;


S2-13 Strand 2

Paper session

Evaluation use
S2-13
O 088

A novel framework for impact evaluation of technology-enhanced collaborative learning as a model to evaluate smart learning labs
A. Sen 1
1

Monash University Sunway Campus, School of Medicine & Health Science, Bandar Sunway, Malaysia

Wednesday, 3 October, 2012

15:15 - 16:45

Bios: Sen, an anatomist/ophthalmologist by training, is a passionate medical lecturer at Monash University Malaysia. With training in IT applications and as chairman of the campus ITS committee, he has successfully designed and evaluated innovative technology-enhanced learning pedagogies, recognised through the PVC's (2008, 2009), Dean's (2010) and Vice-Chancellor's (2010) Teaching Excellence awards and the Ron Harden Medical Education Innovation award (2011). Sharing his evaluative educational research through presentations and publications, he is presently engaged in evaluating models of next-generation learning labs in a Ministry of Higher Education funded project. Background: Such models are used to enhance competencies in professional courses like medicine, especially in the Asia-Pacific and even in Europe, where a surge of new medical schools has emerged in the past decade. With increasing emphasis on skills acquisition in large cohorts, there has been a shift from teacher-centred learning to constructivist approaches that emphasise active, collaborative, peer and social learning initiatives. For this, creating smart learning spaces with interactive environments, through technology enhancements that support collaboration and aid engagement and active learning, has been found essential. Rationale: While student and teacher feedback have been common educational evaluation strategies, the evaluation of the effectiveness of smart learning spaces has been a challenge because (a) unlike a standardized traditional classroom, smart labs vary in their components and (b) an evaluation framework to analyse such a complex system and its impact on learning is lacking. Objective: This paper aims to provide an evaluative framework for studying knowledge creation and skills acquisition through evaluation of a continuum of active learning techniques within a smart learning lab: low-complexity (technology-enhanced interaction systems; clickers; self/peer formative assessment); moderate-complexity (small group presentations/discussions; peer teaching); and high-complexity (interaction within guided practical collaborative learning) techniques. Methods: Rooted in mixed methodologies, the evaluation framework is derived through a thematically structured case study approach, using triangulation of evidence from research data collected via a variety of ethnographic research instruments suited to the different active learning techniques within a smart learning lab: (a) feedback from critical colleagues; (b) video recording and analysis of collaborative learning interactions between peer students, resources, technology and facilitator through a novel interaction graph (designed by the author) for visual representation of effective interactions mapped to learning domains; (c) semi-structured interviews with course-enrolled students and graduated students for short-term and long-term impacts (respectively) on their learning; and (d) pre- and post-collaborative learning session formative assessment through clickers. Results: The evaluative framework that emerged serves to understand the impact of a collaborative pedagogy-technology-learning space system on co-construction of knowledge, student interactions and learning domains. Implications: The proposed impact evaluation framework thus includes subjective components of student/teacher evaluation, instruments from all levels of active learning techniques and, finally, objective interaction analyses of collaborative learning, making it a comprehensive yet flexible evaluation tool.
With the globalization of collaborative learning technologies, this framework, though developed in South East Asia, could be cross-culturally applicable in Europe as an impact evaluation tool for active learning. Keywords: Impact Evaluation; Evaluation framework; Collaborative learning; Smart learning labs; Technology-Enhanced Active Learning;
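A minimal, hypothetical sketch of how interaction data of the kind described above might be recorded and summarized. The author's actual interaction graph is not reproduced here, so the node types, edge records and counts below are illustrative assumptions only.

# Hypothetical record of observed interactions in one collaborative session:
# (source, target, learning domain). Node and domain labels are invented.
from collections import Counter

interactions = [
    ("student_A", "student_B", "cognitive"),
    ("student_B", "resource_atlas", "cognitive"),
    ("student_A", "facilitator", "cognitive"),
    ("student_C", "clicker_system", "psychomotor"),
    ("student_B", "student_C", "affective"),
    ("facilitator", "student_C", "cognitive"),
]

# How often each participant, resource or technology appears in an interaction.
node_counts = Counter()
for source, target, _domain in interactions:
    node_counts[source] += 1
    node_counts[target] += 1

# Distribution of interactions across learning domains.
domain_counts = Counter(domain for _s, _t, domain in interactions)

print("Most active nodes  :", node_counts.most_common(3))
print("Domain distribution:", dict(domain_counts))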

O 089

New ways to present evaluation findings: multimedia, scorecards and interaction


G. O'Neil 1
1

Owl RE, Commungny, Switzerland

Background: The presentation of evaluation findings has long suffered from a lack of innovation. The tedious PowerPoint and the dense Word report have had a direct impact on evaluation findings not being read or acted upon. However, evaluators are starting to use new and innovative ways of illustrating their results, changing also the nature of the delivery and dissemination of evaluation findings.


Objective: This presentation will provide an overview and practical examples of new and innovative ways of presenting evaluation findings: scorecards, summary sheets, multimedia and video reports, blogs and interactive web pages, amongst others. The presentation will also discuss how these new methods impact on the dissemination and use of evaluation findings. Conclusion: Participants will learn of new and innovative ways of presenting evaluation findings that they will be able to apply in their work. Keywords: Evaluation usage; Evaluation presentation; Multimedia; Evaluation reporting; Evaluation findings;


O 090

Wednesday, 3 October, 2012

Real-time Evaluations: Contributing to system-wide learning and accountability


15:15 - 16:45
R. Polastro 1
1

Fundacion DARA Internacional, Madrid, Spain

Riccardo Polastro is Head of Evaluation at DARA. He has 19 years of experience in humanitarian affairs and development aid, having worked in more than sixty countries. He has carried out single evaluations funded by Danida, DFID, DG ECHO, the EC, the IASC, the ICRC, Norad, OCHA, UNHCR, UNICEF, UNDP, SDC, SIDA and other organizations. Over the last 20 years or so the humanitarian community has introduced a number of initiatives to improve accountability, quality and performance. Codes of conduct, standards, principles, monitoring frameworks and Real-Time Evaluations (RTEs) have all been rolled out, and a new humanitarian evaluation architecture has emerged, in which RTEs are becoming a central pillar. The session will present Real-Time Evaluations and their key role in humanitarian aid. An RTE is a participatory evaluation intended to provide immediate feedback during fieldwork; it is described as a specific tool in disaster management which offers the possibility of contributing to improved learning and accountability within the humanitarian system, bridging the gap between conventional monitoring and evaluation, influencing policy and operational decision making in a timely fashion, and identifying and proposing solutions to operational and organisational problems in the midst of major humanitarian responses. The session will highlight key conditions for the success of RTEs. Link to the article: http://www.odihpn.org/humanitarian-exchange-magazine/issue-52/real-time-evaluations-contributing-to-system-wide-learning-and-accountability Keywords: Accountability; Learning; Real-Time Evaluation; System wide evaluations;

O 315

The Agreement at Completion Point: Joint Management and Government Agreement to Adopt and Implement Evaluation Recommendations within Specified Timeframes
F. Felloni 1, L. Lavizzari 1, A. Muthoo 1
1

IFAD, Independent Office of Evaluation, Roma, Italy

Each evaluation done by IFAD's Independent Office of Evaluation (IOE) is concluded with an Agreement at Completion Point (ACP) between the IFAD Management and the concerned Government. The Agreement at Completion Point is a short document summarising the main evaluation findings and recommendations, which IFAD Management and the Government agree to adopt and implement within specific timeframes. The ACP is signed by IFAD Management (by the Associate Vice President) and the Government (usually by the concerned Minister). IOE's role is to facilitate the ACP process, but it is not a signatory of the ACP. The agreed recommendations in the ACP are carefully tracked and their implementation reported to the Executive Board by the President of IFAD annually in a dedicated report known as the PRISMA, the President's Report on the Implementation Status and Management Actions on evaluation recommendations. IOE reviews the PRISMA and provides its written comments to the Board at the same time as the Board considers the PRISMA. A couple of years ago SADEV (Sweden) did a comparative study of the management response systems in selected multilateral and bilateral aid agencies, and concluded that IFAD's ACP process and instrument was a very good practice, among other reasons because it ensures commitment from both Management and Government to act upon evaluation recommendations. Its participatory process builds ownership among the main partners who have to implement evaluation recommendations. The proposal of IOE for the EES is to present the ACP process and instrument, and to share good practice examples, including how recommendations that IFAD and/or the Government do not agree with are treated. The important role of the Board in ensuring recommendations are implemented will also be underlined with specific case studies and examples. The presentation of the ACP process and instrument will be made by the Director of IOE and will be followed by a Q&A session. If possible, we would try to bring one representative from IFAD Management and one from a concerned Government to share their views on a recently concluded ACP. Keywords: Evaluation Recommendations; Building Ownership for Implementing Evaluation Recommendations;


S3-24 Strand 3

Panel

Building the capacity of beneficiary countries in monitoring and evaluation. Contrasting methods and experience
O 091

Wednesday, 3 October, 2012

15:15 – 16:45

Building the capacity of beneficiary countries in monitoring and evaluation. Contrasting methods and experience
G. Holroyd 1, D. Rider Smith 2, E. Gueye 1
1 2

European Commission, DG Enlargement, Brussels, Belgium Prime Minister's Office, Kampala, Uganda

Rationale: This panel will focus on capacity building of monitoring and evaluation systems in beneficiary countries. One of the agreements from Busan was for developing countries and development co-operation providers to explore together additional initiatives aimed at improving the delivery, measurement, learning and accountability for results. This panel will examine different approaches to achieving this aim, with contrasting experiences presented by the different panellists. Chair: George Holroyd, Seconded National Expert on Evaluation since 2010. Contributors: (1) George Holroyd, Seconded National Expert on Evaluation since 2010: Evaluation Capacity Building, Western Balkans and Turkey. George will describe an initiative taken by the European Commission and the World Bank in the Western Balkans and Turkey. An EC-funded, World Bank-implemented project aims to contribute to the following higher-level goals: development of sustainable institutional capacity for monitoring in selected sectors; beneficiary ownership of M&E systems through high-level demand for evidence-based policy making; and establishment of indicators that are useful for public officials in making decisions on policy/program design and resource allocation. The main modalities to achieve these aims will be (i) intensive, highly focused hands-on training in each country to support indicator development and data-gathering mechanisms in selected sectors, and (ii) the establishment of sectoral communities of practice to promote peer-to-peer learning, exchange of experience, and informal networks among officials from the beneficiary countries. The Enlargement region is significantly different from the developing country context, so the stress of the project is as much on peer learning as it is on the proposed training. (2) David Rider Smith: Working from the demand side: evaluation in Uganda's public service. The strengthening of monitoring and evaluation systems, and commensurate capacities, has come from a strong push to improve the use of evidence in decision-making in Uganda. The Government of Uganda, led by the Office of the Prime Minister, has come at this firmly from the demand side, through establishing bi-annual Cabinet Retreats (President, all Ministers, all Permanent Secretaries) to discuss published Government Performance Reports (see opm.go.ug). Over two days, data and information on performance, spending and explanatory factors from all Government sectors are presented, defended and discussed. The policy ideas are then refined, reviewed, costed and approved by Cabinet for the next financial year. The impact is beginning to be felt, as Ministers are increasingly unwilling to have to defend findings against weak indicators, lack of data or unavailable explanatory information. While this pressure on supply sits more on the monitoring side, it has also provided an opportunity for the introduction of evaluative information into the debate. In 2011, the Office of the Prime Minister established the Government Evaluation Facility to initiate the conduct of public policy and public investment evaluation to strengthen the evidence base for decision making. As of 2012, five major policy evaluations are ongoing, including quasi-experimental impact evaluations, performance and process evaluations. The institutional framework for policy debate on evidence has been constructed, and now the chance is there to strengthen the use of this evidence. (3) Dr. Elhadji Gueye: Supply and demand for evaluation capacity development in Francophone Africa. Keywords: Capacity Building; Monitoring and Evaluation; Different methods;


S2-40 Strand 2

Panel

Tools and methods for evaluating the efficiency of development interventions
O 092

Tools and methods for evaluating the efficiency of development interventions


Wednesday, 3 October, 2012
15:15 – 16:45
M. Palenberg 1, Michaela Zintl 2
1 2

Institute for Development Strategy, München, Germany Federal Ministry for Economic Cooperation and Development, Germany

Presenter: Dr. Markus A. Palenberg. Markus heads the Institute for Development Strategy, an independent research institute located in Munich, Germany (www.devstrat.org). Markus specialized in global program evaluation, evaluation methodology research, and strategy advice for development programs. Rejoinder: Michaela Zintl. Michaela is head of the evaluation division, Federal Ministry for Economic Cooperation and Development, Germany. Session Summary: We present the principal results of a two-year research effort funded by the German Federal Ministry for Economic Cooperation and Development (BMZ) on tools and methods for assessing the efficiency of development interventions. The session is divided into four parts: 1. The motivation for our research is described by documenting the gap between what is expected and what is delivered in terms of efficiency analysis. 2. Existing understanding, definitions and misconceptions for the term efficiency are presented and put into context, ranging from simple transformation rates to more elaborate concepts in welfare economics and utility theory. 3. An overview of different methods for assessing efficiency is provided, highlighting their analytic power, their applicability, and their analysis requirements in terms of data, resources and skills. 4. From the above, four general recommendations for how to close the gap between expectation and delivery of efficiency analysis are derived. Relevance Statement: In evaluations and appraisals of international development projects, programs and more aggregate interventions, the efficiency criterion is of high importance. It lies at the heart of welfare-optimizing allocation of resources and allows benchmarking and improvement of components or entire interventions. This importance of efficiency as evaluation criterion is reflected in national budget codes that mandate efficiency analysis for public expenditures, in the inclusion of efficiency as key criterion in international evaluation standards, and in the frequency with which efficiency-related questions appear in terms of references of appraisals and evaluations. With the ever increasing availability of evaluative information in the networked society, this criterion is likely to increase its prominence further. In spite of its importance, the efficiency criterion is probably assessed less frequently and with less quality than any other evaluation criterion as demonstrated in several studies we have reviewed and in our own review of evaluation reports. This mismatch between expectation and delivery is of concern to actors along the entire aid value chain: from policymakers to ultimate beneficiaries. It is also of concern to aid evaluators that may be incentivized to overstretch themselves at the expense of professional rigor. The research study that is summarized in this session provides assistance in closing this gap by making available a catalog of methods for conducting efficiency analysis and by providing the theoretical basis underlying existing concepts of efficiency. While focusing on international development, most findings of this study are likely to be transferable, at least to some degree, to evaluation of efficiency in other areas of the social sciences. Keywords: Efficiency;


S5-19 Strand 5

Panel

Evaluation Power, Power of Evaluation and Speaking Truth to Power


O 096

Valuing evaluation power, the power of evaluation and its influence: Speaking Truth to Power
Wednesday, 3 October, 2012
15:15 – 16:45
S. Premakanthan 1
1

Symbiotic International Consulting Servi, Ottawa, Canada

Evaluators all over the world have heard the slogan: speak truth to power. It was a topic of discussion at the 6th African Evaluation Conference (AfrEA), January 2012 in Accra, Ghana, and at many other fora. The phrase was coined by the Quakers during the mid-1950s as a call for the United States to stand firm against fascism and other forms of totalitarianism; it is a phrase that seems to unnerve the political right, with reason. The founders of the United States risked their lives in order to speak truth to power, that of King George. It was and is considered courageous, although it is more commonly scorned today. What does this slogan mean for evaluators, the profession and practice? There are many definitions of the term power. The paper defines evaluation power in the context of these definitions and identifies some of the sources of evaluation power. For example, power is defined as a person, group or nation having influence or control over others, as those who hold effective power in a system or situation (a plan vetoed by the powers that be), or as the ability or official capacity to exercise control; authority. How do we value the magnitude of the sources of institutionalized evaluation power vested in (i) governments, through legislation, authority instruments and policies, (ii) philanthropic foundations, (iii) financial institutions, (iv) government aid agencies, and (v) numerous social networks: evaluation societies and associations around the world, including international networks? The paper examines the linkages between the value of evaluation power and how it facilitates the creation of the power of evaluation, defined as the wealth of performance results/evaluation evidence available to influence societal change, speaking truth to power. My research indicates that the supply and demand equation for evaluation products and services has not been explored in a quantitative (value of the worth) approach. This is an exploratory theoretical model. The model attempts to value the supply and demand equation of evaluation (evidence) for informed decisions by taking as an example the evaluation power derived from legislation and policy instruments of the Government of Canada. The resultant evaluation power could be quantified by the ongoing and annual investment to create the evaluation infrastructure and capacity in central agencies and government departments and agencies to produce the evaluation evidence, the power of evaluation. Similarly, we could quantify the worth or value of the use of evaluation evidence in program expenditure management, for example, savings from continuous improvement recommendations, non-renewal of programs, and reallocation of resources and reinvestments based on program and project lifecycle management. My final thought: does the evaluation community need evaluation power brokers or champions to shepherd the truth to power? Keywords: Evaluation Power; Power of Evaluation; Speaking Truth to Power;


S2-35 Strand 2

Panel

Innovative Approaches to Impact Evaluation: Session 1


O 098

Innovative Approaches to Impact Evaluation: Session 1


E. Stern, J. Mayne, K. Forss, B. Befani, N. Stame, R. Davies

Wednesday, 3 October, 2012

17:00 – 18:30

Rationale: For the last year an international team of leading evaluation researchers and practitioners has been working together on a study commissioned by the UKs Department for international Development (DFID) with the aim of broadening the range of Impact evaluation designs and methods. Impact evaluation has been vigorously debated in the evaluation community recently, with advocates of experimental methods often arguing that only their approaches are rigorous and robust. DFID wanted to identify and assess ways of evaluating impact that could be applied to its more complex programmes where it had found that experimental methods and RCTs were not suitable. They were particularly interested in designs that were qualitative, not statistical and theory based that could be demonstrated to be high quality. The study reviewed actual evaluations, established and emergent methods and analysed the attributes of programmes drawing on complexity and organisational theory. The team was supported by advisors drawn from practicing evaluators, social science methodologists and philosophers of science. This will be the first dissemination of a major study that addresses important issues for the evaluation community. It fits well within the conference strand on Evaluation research, methods and practice. Although the study was commissioned to support international development evaluations it took a cross domain perspective and these sessions will be of relevance to all those interested in innovative designs to evaluate the impacts of policies and programmes. Proposers (all members of the study team): Elliot Stern is an evaluation practitioner and researcher based in UK. He edits the journal Evaluation; is visiting Professor at Bristol University and Professor Emeritus at Lancaster University; and is a past President of the EES. He was the team leader for this study. John Mayne practices as an evaluator in Canada. He was previously at the Canadian Treasury Board and the Office of the Comptroller General. He has been developing approaches to Contribution Analysis for many years and is also an expert in Results Based Management. Kim Forss works as an independent evaluation consultant based in Sweden and has co-edited the recently published book Evaluating the Complex. He has been President of the Swedish Evaluation Society and is a Board Member of the European Evaluation Society. Barbara Befani is an evaluation methodologist and consultant with a particular interest in frontier methods and designs in evaluation including mathematical approaches to small-n situations. She has been a methodological advisor to Italian and EU public programmes. Nicoletta Stame is Professor at the University of Roma Sapienza, is past President of the European Evaluation Society and of the Italian Evaluation Association. She has written about evaluation methodology with particular reference to theory based and impact evaluation approaches. Rick Davies is an independent Monitoring and Evaluation Consultant based in Cambridge, UK. His clients are international development aid organisations (multilaterals, bilaterals and INGOs). He has been managing the Monitoring and Evaluation NEWS website since 1997. Abstract: Reframing Impact Evaluation (introduced by Elliot Stern): This session outlines different ways of thinking about Impact Evaluation (IE). We suggest that linking cause and effect and explaining how and why effects occur is at the heart of IE. 
However, there are various approaches to causal inference, so how can we know which to choose in any particular setting? This requires reconciling: the evaluation questions we want answers to; the capacities of different methods and designs, all of which have their strengths and weaknesses; and the kinds of programmes we are trying to evaluate. Contribution analysis and the causal package (introduced by John Mayne): Interventions or programmes rarely lead to results on their own. They combine with other actions, circumstances and contextual factors into causal packages. The logic is similar to mechanism-based theories of change and to how policy analysts think about policy coherence. IE needs to identify the contribution that an intervention makes in any particular package: how far it is necessary and sufficient for the intervention to achieve its objectives. What designs and methods can and can't do (introduced by Barbara Befani): Different designs and methods approach causal inference from different perspectives. Some, for example, look for the causes of an effect, whilst others look for the effects of a cause. Different approaches will be illustrated by comparing counterfactual (comparative) designs, such as experiments, and case-based designs, such as Qualitative Comparative Analysis and Agent-Based Modelling. http://mande.co.uk/
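To give a flavour of the case-based logic, a stylised QCA truth table is shown below; the conditions, outcome and case counts are hypothetical illustrations and are not taken from the DFID study.

Ownership (A)   Donor funding (B)   Impact achieved (Y)   Number of cases
yes             yes                 yes                   4
yes             no                  yes                   2
no              yes                 no                    3
no              no                  no                    2

Read configuration by configuration, impact is observed whenever condition A is present, irrespective of B, which would make A a candidate necessary and sufficient condition to be probed further against theory and case knowledge. This cross-case configurational reasoning is what distinguishes such designs from counterfactual designs built on average treatment effects.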


S4-01 Strand 4

Paper session

Evaluation and employment


O 099

Effective and sustainable reintegration of workers after large-scale redundancies: the evidence from the European Globalisation Adjustment Fund
Wednesday, 3 October, 2012
I. Pavlovaite 1, T. Weber 1
1

17:00 – 18:30

GHK, Birmingham, United Kingdom

The scale of industrial and technological change and the evolution of the patterns of global trade have had a significant impact on European labour markets. As the pace of change has accelerated, it has become necessary for workers to become more adaptable in order to take on different roles, either within their organisation or in the wider labour market. One of the most visible expressions of these processes has been the large-scale mass redundancies occurring across various sectors and occupations in most European countries. The European Globalisation Adjustment Fund (EGF) was established in 2007 with the express intention of mitigating the negative consequences of large-scale redundancies resulting from the impact of globalisation and of helping the redundant workers to re-integrate into the labour market and find new jobs. The Fund is intended to finance active support measures for redundant workers, such as re-training, job search assistance or entrepreneurship promotion. The mid-term evaluation of the Fund's activities was undertaken in 2011, covering the first 15 cases of EGF assistance between January 2007 and June 2009. The evaluation focussed particularly on establishing the quantitative and qualitative outcomes for individuals of participating in these measures, as well as identifying the key explanatory factors behind success and the challenges faced in the reintegration of workers after mass redundancies and large-scale shocks to the local, regional and even national economy. It also sought to benchmark the results achieved against the outcomes of comparable measures in other cases of large-scale redundancies. The paper discusses the key factors facilitating or hindering the effective and sustainable reintegration of workers after large-scale redundancies by considering in turn: key evidence in relation to supply-side factors, such as the socio-economic profile of the assisted workers; key evidence in relation to demand-side factors, such as the economic trends and tendencies in the locality; key evidence regarding the successful mix of measures to support redundant workers; and key evidence in relation to the mix of EGF and nationally funded measures provided. The paper then draws analytical conclusions as to the relative importance of the factors involved and concludes by identifying key policy conclusions for further action. Furthermore, it discusses the challenges of obtaining reliable beneficiary data and establishing comparable benchmarks for active labour market policy measures in such crisis situations. Keywords: Redundancy; Globalisation; Effectiveness; Labour market reintegration;

O 101

Evaluation in youth employment in MENA region and the Arab spring, experience of a Moroccan NGO
A. Bakkali 1
1

Education For Employment (EFE) Morocco, M&E, Casablanca, Morocco

Presenter's Bio: Amine Bakkali has extensive experience in project management and the M&E of development programs. He has participated in several evaluation clinics and workshops in Switzerland, Qatar and Ghana. He is a member of the African Evaluation Association. Prior to joining EFE, he worked as a program manager of entrepreneurship projects in a microfinance institution in Morocco. Mr. Amine Bakkali received his Master of Governance in Human Development from Hassan II University. Objectives sought: 1. To increase awareness about the importance of youth employment evaluation in the context of the Arab Spring. 2. To share the lessons learned and experience in youth employment evaluation of EFE Morocco, an international NGO operating in the MENA region. Brief narrative: To date, there have been revolutions in Tunisia and Egypt; a civil war in Libya resulting in the fall of its government; civil uprisings in Bahrain, Syria and Yemen, the latter resulting in the resignation of the Yemeni prime minister; major protests in Algeria, Iraq, Jordan, Kuwait, Morocco and Oman; and minor protests in Lebanon, Mauritania, Saudi Arabia and Sudan. Numerous factors have led to the protests, including issues such as dictatorship or absolute monarchy, human rights violations, government corruption, economic decline, extreme poverty and unemployment, with a large percentage of educated and unemployed youth within the population. In Morocco, more than 40% of young Moroccans between the ages of 15 and 34 are unemployed. Most alarmingly, 75.3% of university-educated youth living in urban areas have been unable to secure jobs.


EFE-Maroc established a model for youth employment, built on a strong M&E system developed in Morocco, to address the country's youth unemployment crisis. The M&E process is based on a mix of qualitative and quantitative methods, including survey tools, key informant interviews and, when possible, focus group discussions. Survey tools allow for easy comparisons of indicators over time and across sites. Short-answer and open-ended questions provide information that can lead to a better understanding of the motivations, behaviors and perspectives of partners, employers and youth. This information, captured by the tools as feedback from key stakeholders, is used to improve and/or adjust programs.


Recently, EFE-Maroc was competitively selected to receive technical assistance on monitoring and evaluation and impact evaluation. The technical assistance is delivered by the Youth Employment Network through its network of M&E experts and specialists and includes expert consultancies, training, data collection assistance and data collection instruments. Justification: Discussing youth employment issues in the MENA region in the context of the Arab Spring, and how evaluation can play a role in ensuring the effectiveness of programs addressing these issues, is a relevant topic to share and discuss with the evaluation community. Morocco could be a perfect case study for advancing the public interest through the promotion of cross-cultural exchanges.


Royaume du Maroc, Haut Commissariat au Plan, Activité, Emploi et Chômage, 2009, p. 42. Keywords: Arab spring; Youth Employment; Impact Evaluation; MENA region;

O 094

Employment effects of Cohesion programmes in Hungary – a policy simulation approach


T. Tetenyi 1, G. Balas 2, F. Bognar 2, B. Herczeg 2, K. Major 3
1 2

Tetenyi kft., Budapest, Hungary Hetfa Institute, Budapest, Hungary 3 ELTE University, ELTEcon, Budapest, Hungary

Employment effects of structural interventions have been extensively studied and reported in and outside Europe, Hungary included, for decades. The Hungarian Development Agency commissioned a concise evaluation to bring together the available evaluation evidence and give a forecast of the various employment effects of all the structural interventions co-financed or to be co-financed by the European Cohesion and Structural Funds under the current (2007-13) programming period. The ex post / ex ante forecast was to be based on projects actually committed by end-2011 and on the implementation schedule until 2015. Immediate effects (mostly comprising the demand-side effect of fixed investments financed from the Funds, and lock-in effects) and short-term effects (on both the demand and the supply side of the job markets) were summed up in an annual forecast as the difference between the situation with and without the interventions. Longer-term effects were instead integrated over an intervention-relevant horizon to give an aggregate forecast of gains in social capital, based on differences in lifetime earnings. To calculate intermediate and short-term total effects, a simple general equilibrium macro-economic model was built and used to map implementation data into an annual forecast. Some parameters were estimated or calibrated using macro data; others, mostly those describing the impacts of structural interventions, were imported from the Lisbon Assessment Framework and the relevant literature. Special econometric techniques were applied to deal with the economic recession. Most available evaluation evidence, especially that produced in recent years, concerns direct effects, obtained using regression and counterfactual techniques. To establish a link between direct and indirect effects, local job markets were modelled to give a forecast of local employment gains. Thus, total effects are regarded as the resultant of direct, local (deadweight effects, replacement effects, synergies and spatial spillovers) and global (multiplier and cyclical) effects. In estimating local job markets' behaviour, a Durbin model with two-dimensional fixed effects was fitted over a twenty-year panel of local employment data to correct for spatial auto-correlation. The identified model was complemented with parameters accounting for direct effects and yielded a split of direct and local effects on the local markets that could be aggregated over the localities. Using relevant evaluation evidence for the policy forecast needed, however, a systematic approach to collecting, sorting and adapting information from the literature. An extensive literature review yielded a vast body of benchmark parameters and meta-information. Internal validity of the evidence was regarded as the responsibility of the authors, while external validity had to be explicitly assessed. Information regarding the interventions and the (direct or indirect) beneficiaries (reach) was compared to that in the implementation schedule and the project database (with propensity matching and IV techniques; Heckman's insight on marginal treatment effects was applied). Complementary information was requested from the authors (with limited success). Information regarding the context was assessed by experts. The main result of the evaluation was a policy forecast model that can be applied to both policy analysis and policy design (for the next programming period). Credibility and robustness of the model results were tested in an expert panel.
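To make the modelling strategy concrete, a spatial Durbin panel specification of the general kind described above can be written, in a simplified and purely illustrative form that is our rendering rather than the authors' exact model, as

\[
y_{it} = \rho \sum_{j} w_{ij}\, y_{jt} + x_{it}'\beta + \Big(\sum_{j} w_{ij}\, x_{jt}\Big)'\theta + \mu_i + \lambda_t + \varepsilon_{it}
\]

where y_{it} is employment in locality i in year t, w_{ij} are spatial weights linking neighbouring localities, \mu_i and \lambda_t are the two dimensions of fixed effects (locality and year), and \rho and \theta capture spillovers in the outcome and in the covariates respectively. Terms measuring direct programme effects can then be added to x_{it}, so that their coefficients are read off against this spatially corrected baseline.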
Balás Gábor, Bognár Fruzsina, Herczeg Bálint, Major Klára, Tétényi Tamás. Balás Gábor: Economist, managing director of HÉTFA Center for Analyses. As a former director at the Hungarian National Development Agency, he was responsible for the evaluation of EU cohesion policy interventions in Hungary. His main fields of expertise are cohesion policy, fiscal federalism and the efficiency of governmental policies. Bognár Fruzsina: Economist, junior analyst at HÉTFA Center for Analyses; she worked as project leader in research studying the pressure groups of enterprises. Her main interests are institutional economics and industrial organization. Herczeg Bálint: Economist, junior analyst at HÉTFA Center for Analyses, currently a Ph.D. student at the University of Debrecen; the topic of his dissertation is the changes in the channels of monetary transmission. His main research interests lie in monetary economics, the effects of monetary policy and the channels of monetary policy transmission. Major Klára: Ph.D. in Economics, assistant professor at Eötvös Loránd University. She delivers lectures in basic macroeconomics, advanced macroeconomics, international finance, growth theory and the application of numerical methods in macroeconomics using Matlab. Her research interest is mainly related to the application of numerical methods in macroeconomics and growth theory.


Tétényi Tamás: Ph.D. in Economics, lead evaluator. A former banker and researcher, as a former Deputy Secretary of State he dealt with macroeconomic policy and convertibility in the mid-90s. He has been dealing with structural and regional policies for fifteen years in various positions. His main fields of expertise are cohesion policy, evaluation and planning. Keywords: Employment effects; General equilibrium model; Local job markets; Policy simulation; External validity;


O 095

Introduction of regulatory impact assessment in the public administration of countries in transition: between bureaucratic control and neo-pluralism
D. Tsygankov 1
1


Higher School of Economics, Center for Regulatory Impact Assessment, Moscow, Russia


Government regulation inevitably imposes costs on all stakeholders: the state (government officials who apply the rules and/or monitor their implementation) and those directly affected (businesses, citizens, public sector institutions, non-profit organizations). Attitudes towards government regulation have changed dramatically over the last decade. A detailed assessment, scrutiny of alternative regulatory measures and the final choice of a balanced alternative, rather than deregulation, is becoming a trend of modern public policy, and its core is regulatory impact assessment (RIA). Effective RIA also presupposes other tools to improve legislative drafting quality and calculation methods (such as public consultations, business web-panels, plain legal writing, sunset legislation, law termination, evaluation clauses and the standard cost model) and the recognition of open government principles (such as freedom of information, open government data and open public spending). Next, the upgrade of the better regulation concept to the ideology of smart regulation means a decisive step towards so-called open government. For example, according to the memorandum of the European Commission (October 2010), smart regulation consists of three key components: (a) a comprehensive system of regulatory impact assessment at all stages of the policy cycle, from the design of a piece of legislation to ex-post evaluation and simplification of existing legislation; (b) co-operation among the institutions of the executive, the legislature and the regulatory bodies of the EU member states; and (c) participation mechanisms for stakeholders and citizens, including the use of Web 2.0 applications (the Internet information portal Your Voice in Europe). This revolution in minds gradually forms a networked, cooperative system of decision-making which reduces costs for its members and enhances mutual trust between government, business and civil society. Countries in transition face a number of known limitations when attempting to implement RIA mechanisms in their public administration. Therefore, open government is for them a long-term aspiration rather than an immediate reality. Governments, with expert support, are trying to formulate implementation road maps that meet national peculiarities and do not destroy the existing system of decision-making. Obviously, bureaucratic control in the short run promises more rapid progress towards the goal: legislation is amended as a result of RIA, RIA units in ministries are being created, and money is allocated for the retraining of civil servants. On the other hand, the partially corrupt and inefficient rank-and-file bureaucracy of these countries is not interested in changing the rules of the game or in openness, and therefore gradually reduces open government procedures to interdepartmental coordination (even if it is electronic). In this case regulatory impact assessments are reduced to formal papers (the so-called box-ticking effect). In the long run, a slower but more effective way is to build a pluralistic, decentralized network of social and bureaucratic forces, supported by the political will of central government and by civic initiatives based on up-to-date expert, network and information technologies. Keywords: Regulatory impact assessment; Open government; Collaborative governance; Neo-pluralism; Countries in transition;


S4-10 Strand 4

Paper session

Evaluability of public policies


O 102

Reducing household food waste in the UK: Evaluation of a complex policy intervention
B. Leach 1, T. Quested 1, C. Michaelis 2, A. Parry 3
1

Wednesday, 3 October, 2012

17:00 – 18:30

Waste & Resources Action Programme (WRAP), Research and Evaluation, Banbury, United Kingdom Databuild, Birmingham, United Kingdom 3 Waste & Resources Action Programme (WRAP), Consumer Food Waste Prevention Programme, Banbury, United Kingdom
2

Dr Barbara Leach, Head of Research and Evaluation at WRAP. Barbara has more than 20 years' experience in research and evaluation in the waste and resources field, working both for government and in the private sector. The amount of food wasted across Europe is a concern both to national governments and to EU institutions. The Commission Communication Roadmap to a Resource Efficient Europe (European Commission 2011) lists food waste as a priority area. More recently the European Parliament adopted a resolution calling for action to halve food waste by 2025, asking the Commission to implement a coordinated strategy of EU-wide and national measures as a matter of urgency. WRAP (Waste & Resources Action Programme) is the UK's main policy delivery body on waste and resources. One of its programmes aims to reduce household food waste by working with retailers, the retail supply chain, local authorities and community organisations. The work is both technical, e.g. working with designers on new packaging formats that prolong food life, and behavioural, e.g. working with retailers on campaigns and in-store publicity to influence their customers' behaviour. As a publicly-funded body, WRAP must provide evidence on what has been achieved. Reducing household food waste is a complex undertaking for government because the production of food waste is the result of a complex interplay of attitudes and behaviours set within the context of different local food waste collection schemes and a highly politicised policy context, alongside strong economic drivers in terms of the rising price of food. Positioned within the context of UK Government guidance on evaluation (HM Treasury 2011), this paper will set out how the design of an evaluation of WRAP's innovative work was influenced by the complex networks that exist within the policy arena. It will explain how WRAP developed a theory-based evaluation framework and how an impact assessment was carried out. This involved a series of challenging methodological developments including: establishing a theory of change; deriving quantitative evidence from secondary sources; developing a qualitative approach to assessing attribution using contribution analysis (Mayne 1999); and using peer review effectively. The paper will set out how using a highly innovative mix of methods, including econometric modelling and operational research techniques as well as standard social science approaches, has contributed to the work of WRAP's food waste reduction programme and how it might contribute to the evaluation of it in the future. The paper will be of interest to evaluators working in similar fields who face methodological challenges due to the networked nature of society. These considerations will be structured according to the conference themes of new concepts, new challenges and new solutions. It fits within conference theme 2 or 5. References: European Commission (2011) Roadmap to a Resource Efficient Europe, COM(2011) 571 final. Mayne, J. (1999) Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly. http://publications.gc.ca/collections/Collection/FA3-31-1999E.pdf HM Treasury (2011) The Magenta Book: Guidance on Evaluation. London: HM Treasury. Keywords: Evaluation methods; Impact evaluation; Theory of change; Multi-disciplinary; Contribution analysis;

O 103

The evaluation of National Rural Network Programmes: methodological challenges and suggested solutions
A. Sanopoulos 1, J. Tvrdonova 1
1

European Evaluation Network for Rural Development, Helpdesk, Brussels, Belgium

Background: Within and beyond the context of the European evaluation architecture, the assessment of networks poses specific methodological challenges: What is the added value of a network? How can its progress towards objectives be measured? How can its impacts be assessed? This paper focuses on the challenges linked to the evaluation of National Rural Network Programmes in the programming period 2007-2013. According to European Council Regulation 1698/2005, articles 66 and 68, each EU Member State is obliged to establish a national rural network of the organisations and administrations involved in rural development, in order to prepare and conduct various activities to support the implementation of rural development programmes. Italy, Germany, Spain and Portugal took the option of supporting the
establishment and operation of their national rural networks with a dedicated programme. Similarly to other programmes supported by the European Agricultural Fund for Rural Development (EAFRD), these so-called National Rural Network Programmes have to be subject to a full-scale evaluation process. The objectives of this paper are (a) to highlight the role, the added value and the possible impacts of national rural network programmes as an instrument to foster rural policy interventions and to enhance governance in the rural areas of multi-regional EU Member States, and (b) to discuss possible approaches to the assessment of National Rural Network Programme impacts. The paper will specifically focus on the evaluation approach of three National Rural Network Programmes, the Spanish, Italian and German, which differ among themselves in terms of their budget, implementation structures and intervention logic design. Two types of rural network programme impacts will be further considered: 1. those directly related to EU rural development objectives (improving the competitiveness of agriculture and forestry, improving the environment and countryside, and improving the quality of life and encouraging diversification in rural areas);

2. those related to the strengthening of networks among rural actors, independently of their relation to the rural development programmes, and to the enhancement of governance in rural areas as such.


Keywords: Rural network programmes; Intervention logic; Impact; evaluation;

O 104

Assessment of the use of financial resources of the health system in Colombia to design a preventive policy
R. Penaloza 1, A. M. Rios 1, M. Garcia 1
1

Pontificia Universidad Javeriana, Distrito Capital, Bogotá, Colombia

Penaloza, Enrique, Investigator at the Center for Development Projects, Javeriana University, Bogotá, Colombia. epenaloz@javeriana.edu.co. As a Javeriana University researcher, he leads the research group on health policy and health economics. His work has focused on assessments of the public policy of the Colombian health system, with experience in the design of public policy evaluations of hospital restructuring and of the financial resources of the system. The Social Security System in Health in Colombia was ranked first in financial equity in 2000. This honourable place was achieved thanks to the solidarity built into its fiscal architecture, because the system transfers resources and the contributions of people with higher incomes to the most vulnerable. However, the system is nowadays experiencing the biggest financial crisis in its history because, according to the National Government, resources are insufficient to run it. This has led to different actions, such as the formulation of Acts 1122 of 2007 and 1438 of 2011, which focused on proposals aimed at finding new financial resources for the sector. Another hypothesis regarding the current crisis in the sector suggests that there are sufficient resources and that, therefore, present circumstances are attributable to the inadequate financial handling of the resources rather than to a shortage of them. This is evidenced by publicly known cases of corruption, which are the result of an inefficient system of inspection, monitoring and control. This highlights the need for preventive control tools in order to avoid the misuse of financial resources in the health sector. The main objective of the research was to analyze the flow and use of the financial resources of the Social Security System in Health, identifying the difficulties in handling them, in order to build a preventive policy that would prevent their misuse and thus ensure the access of the Colombian population to health services. Thus, it seeks to achieve proper management of financial resources by strengthening the identification, analysis and evaluation of the processes that oversee the management of those risks. To achieve this goal we designed a system for tracking and monitoring through the construction of a risk map, which aims to provide a tool for acting preventively against the possible misuse of resources. Starting from this map, we propose a preventive policy that provides a structure for monitoring and control, with an action plan setting out the measures to be developed to prevent the risks of misuse of the financial resources of the system, and a methodology for following up, monitoring and evaluating the preventive measures to control the actions proposed by the policy. Keywords: Financial resources for health; Colombian health system; Policy design based on evaluations; Evaluation of the use of financial resources in health;


S2-46 Strand 2

Panel

Agency And Evaluative Culture: Contributions Of Feminist Evaluation


O 030

Agency And Evaluative Culture: Contributions Of Feminist Evaluation


V. Mukherjee 6, S. Sharma 1, R. Sudarshan 1, R. Khanna 2, A. Pradhan 2, N. Sardeshpande 3, E. Mendez 4, Y. Atmavilas 5

Wednesday, 3 October, 2012

17:00 – 18:30

Institute of Social Studies Trust, New Delhi, India SAHAJ- Society for Health Alternatives, Vadodara, India 3 Tata Institute of Social Sciences, Mumbai, India 4 International Development Research Centre, Evaluation Unit, New Delhi, India 5 Administrative Staff College of India, Gender Studies, Hyderabad, India 6 Youth Sexuality, Reproductive Health and Rights Initiative, Ford Foundation New Delhi, India
2

Presenters' bios: Anagha Pradhan, health researcher with SAHAJ. Ethel Mendez, Independent Consultant, former Research Awardee with the Evaluation Unit of the International Development Research Centre (IDRC) in New Delhi. Nilangi Sardeshpande, Associate Coordinator at SATHI and a Ph.D. scholar at the Tata Institute of Social Sciences, Mumbai. Ratna M. Sudarshan, Adviser and former Director at the Institute of Social Studies Trust (ISST), New Delhi. Renu Khanna, founder trustee of SAHAJ-Society for Health Alternatives, Vadodara, India. Shubh Sharma, Research Associate with ISST. Yamini Atmavilas, Chair and Associate Professor, Gender Studies, at the Administrative Staff College of India. Rationale: There is heightened interest among the international development community in promoting equity in their interventions. This interest is also manifest in the evaluation community, as more evaluators are asked to look at equity when evaluating projects and programs. Feminist evaluation has emerged as a lens that can help bring forth inequities and injustice by, among other things, paying close attention to power relations, seeking transformation through the evaluation process, and looking critically at whose voice is heard and whose isn't in the evaluation process. However, there is little theorizing and evidence to support or discredit the contributions of feminist thinking to evaluation. Objective: This panel seeks to illustrate the re-prioritization that happens when a feminist lens is applied to evaluation. It will present different cases from South Asia to investigate the contributions and areas of concern of applying feminist thinking to evaluation, particularly by examining the role of agency, the wider and deeper impacts of participatory and collaborative approaches elicited by feminist thinking, and its role in enhancing a culture of learning, or an evaluative culture, within implementing organizations. Panel description: Framing the discussion, Ethel Mendez's paper starts from the understanding that good quality evaluation should consider elements of equity and justice. Her paper explains some of the tenets of feminist evaluation and applies them to the standards of the Joint Committee on Standards for Educational Evaluation (JCSEE) to suggest a feminist framework for assessing the quality of evaluations. She draws on evaluation reports from South Asia to exemplify how a feminist lens adds value to the assessment of evaluation quality. Yamini Atmavilas discusses how considerations of agency, or an individual's capacity for self-determination realized through decision and action (Messer-Davidow 1995), in evaluation planning and implementation can generate different narratives and results when evaluating impact. Her paper draws on evaluations of government and non-government programs aiming to reduce the vulnerabilities of girls in India. Ratna Sudarshan and Shubh Sharma discuss the value added by, and the challenges of, using a collaborative evaluation framework in the evaluation of a women-focused development intervention working with women in very poor and geographically dispersed settlements across several states in India, with the accompanying socio-cultural differences. Despite the challenges, the paper argues that the approach allowed for reflexivity, helped reveal occasional gender blindness, and enhanced the use of findings in the implementing organization.
The fourth paper, by Renu Khanna, Anagha Pradhan and Nilangi Sardeshpande, examines whether building evaluative thinking within organisations through participatory processes is influenced by the values and perspectives (for example, gender and rights) on which programmes are based. The paper will explore strategies for incorporating a gender perspective into evaluative thinking and the challenges contained therein. Case studies of capacity building around evaluations within two organisations are discussed to elicit lessons. Keywords: Feminist Evaluation; Collaborative Evaluation;


S1-11 Strand 1

Paper session

Evaluation of international partnerships and collaborative networks


O 106

Evaluation of International Commitments: the Busan Partnership for Effective Development Co-operation


Wednesday, 3 October, 2012
17:00 – 18:30
D. Svoboda 1
1

Civic Association Development Worldwide, Praha 2, Czech Republic

International commitments like the Millennium Development Goals, the Paris Declaration on Aid Effectiveness, the Accra Agenda for Action or, most recently, the Busan Partnership for Effective Development Co-operation usually have the ambition of changing development paradigms and bringing better development results. These proclamations are usually called outcome documents; but do they really change the behaviour of development actors, and do they really contribute to sustainable benefits for the target groups in developing and transition countries? Global consensus endorsed by key actors can be a proxy indicator of success where there is real ownership and commitment to follow the agreed principles. However, no document in itself, but only improved mutual cooperation, behaviour and practices, can show whether such a consensus (or compromise) can really make a difference. Unfortunately, the Theory of Change of such proclamations is commonly missing, the targets are usually very vague and non-binding, the indicators, if any, are often for the means and activities only, and the key assumptions (like real ownership) are not made explicit. The evaluation community should focus more on these policy instruments, as they should determine new approaches and significantly influence the system and results of Official Development Cooperation. The paper aims at identifying appropriate outcome (and impact) indicators for these international commitments, and at proposing reasonable evaluation mechanisms, methods and approaches. The Busan Partnership for Effective Development Co-operation is used as an example. The most important message of the Busan Partnership is a promise to shift thinking from a focus on aid effectiveness (aid delivery) to a focus on development effectiveness, that is, considering real sustainable benefits for the target groups, for the people (in the Busan wording, sustainable and transparent results for all citizens). The individual paragraphs in the Busan declaration then only describe some preconditions and assumptions for the true application of the commitments and for reaching a common vision of a better world (nevertheless, the endorsed vision is not clearly communicated in the declaration). There are at least three critical assumptions for having any success after Busan: 1) True partnership among all development actors, which should deal with aspects such as: open democratic dialogue; transparency and predictability; mutual and shared accountability. 2) Democratic ownership and rights-based approaches, which should include: participatory approaches, with country needs and people and their fundamental rights at the centre of all (not only development) policies and actions; reducing donor-driven conditionalities; innovative funding schemes, including funding for (testing) innovations. 3) Genuine commitments to increasing development effectiveness, that is, a focus on the people's pillar and on sustainable benefits for the target groups instead of the prevailing focus on eligible activities and easily measurable outputs; this should include: a change in assessing change (in distinguishing successes from failures); participatory evaluation methods; and accountability for long-term results and towards target groups. The paper and the related pre-conference workshop will propose possible outcome indicators for the key aspects above and discuss appropriate evaluation questions and methods as well as the effective use of evaluation results.
Keywords: International commitments; Theory of Change; Outcome indicators; Evaluation approaches; Development Effectiveness;


O 107

Global Innovation through Science & Technology: Fostering International and Regional Engagement and Networking
L. Mikhailova Ph.D. 1
1

CRDF Global, Director of Evaluation, Arlington VA, USA


The creation of social networking without borders has strongly affected how international programs that support innovation through science and technology are shaped and designed, adding a wide range of new opportunities for networking and collaboration. These developments have had direct impacts on evaluation at programmatic and policy levels and have stimulated the development of new evaluation methods and tools as well as new science and technology (S&T) indicators to measure engagement in innovation programs. This presentation will focus on the complex and diverse nature of the evaluation methods, techniques and procedures selected to measure the successes of the Global Innovation through Science & Technology (GIST) program, which is implemented by CRDF Global and supported by the U.S. Department of State's Bureau of Oceans and International Environmental and Scientific Affairs. GIST is a network of program initiatives that engages 43 countries in Africa, the Middle East, and Central and Southeast Asia to build and sustain S&T excellence in those regions through locally established programs that facilitate the networks and leverage the discoveries of the global scientific community in providing innovative ways to improve quality of life and economic prosperity. GIST is a multifaceted, multi-track program that encourages regional networking and engagement by linking innovative ideas, people and financial resources. The program facilitates new skillset development and capacity building by providing expert business training and mentorship while promoting entrepreneurial growth and technology transfer initiatives. These initiatives connect innovators and their ideas with seed capital for market-ready technologies that support commercialization and the creation of networked societies. The GIST program tracks include Startup Boot Camps to strengthen business development; webinars to link technology commercialization experts with aspiring entrepreneurs around the world; and the GIST Social Networking Platform to discuss innovative business tools and mentorship, to build partnerships that foster new venture creation, and to exchange ideas through popular social networking platforms. Furthermore, GIST sponsors Incentive-Based Competitions to highlight innovative technology ideas for the community via a YouTube-based idea pitch competition, and University-Industry Linkage Activities to institutionalize connections through industry outreach partnerships. This presentation will provide an evaluation framework for measuring program results at multiple levels and will walk through selected methodologies and approaches to data collection, which include Kirkpatrick's Four-Level Evaluation Model to assess professional skillset development and the application of new skills and knowledge, Social Network Analysis (SNA) to measure established linkages and networking, and other evaluation techniques and tools applied to measure engagement and the sustainability of network development in the context of the GIST activities. The presentation will provide highlights of evaluation findings from selected GIST programming and will facilitate a discussion about the wide range of S&T metrics that can be developed through the use of social media and other novel data collection methods to measure the building of networked societies.
The presentation will be based on participatory, expertise-based, and theory-driven approaches in order to encourage attendees' contributions of their diverse, cross-cultural perspectives and opinions.
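For readers unfamiliar with the Social Network Analysis mentioned above, a minimal sketch of the kind of linkage metrics SNA typically yields is given below; the collaboration edges are invented for illustration and are not GIST data.

```python
# Minimal SNA sketch: counting and characterising "established linkages"
# from a hypothetical list of collaborations (not actual GIST records).
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

G = nx.Graph()
G.add_edges_from(edges)

print("participants:", G.number_of_nodes())
print("linkages:", G.number_of_edges())
print("network density:", round(nx.density(G), 2))       # share of possible ties realised
print("degree centrality:", nx.degree_centrality(G))     # who brokers the most connections
print("connected clusters:", len(list(nx.connected_components(G))))
```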

Wednesday, 3 October, 2012

17:00 - 18:30

O 108

An Analysis of the Performance Measurement System Applied to the Network of Independent International Agricultural Centers
S. Immonen 1, L. J. Cooksy 2
1 FAO, OEKD, Rome, Italy 2 University of Delaware, Newark DE 19716, USA

In the past few decades there has been a general trend to employ performance measurement systems in the public sector with the aim of enforcing accountability and at the same time enhancing the efficiency and effectiveness of operations. Performance measurement systems for research, including universities and national research organizations, are a special case. This paper analyses six years of experience with a performance measurement system applied to a group of 15 international agricultural research centers. The centers and their international donors form the Consultative Group on International Agricultural Research. Research in the centers spans all areas of agriculture, but the centers have a shared mission of enhancing agricultural development. The centers operate in partnership with hundreds of national, regional and international organizations. The performance measurement system was initiated by donors to the centers. Indicators were grouped into those reflecting research results and those indicating the potential to perform (institutional and financial health). Experiences over six years showed that: (i) there were large year-to-year fluctuations more likely related to the measurement and adjustments in indicators than to actual performance; (ii) annual performance appraisals and rewarding were not justified due to large year-to-year fluctuations; (iii) using the indicator information for resource allocation influenced performance reporting and emphasized ranking between centers rather than incentivizing them to work in a more collaborative and networked fashion; and (iv) performance measurement information was not used efficiently in other evaluations. The challenges included: (i) the inability to capture through annual indicators essential elements of research performance such as quality, relevance, impact, partnerships, data management and capacity enhancement; (ii) designing indicators equally applicable across very different centers with different research mandates; (iii) coming to agreement on benchmarks and indicator targets; and (iv) the interpretation and use of results by donors on an annual basis. Using the international agricultural research centers' experiences as an example, the paper examines lessons that can be drawn from the objectives, expectations and results of monitoring the performance of autonomous, yet networked research organizations through a single uniform performance measurement system. It discusses the suitability of performance measurement as a complement to other research evaluation, the role of research management and donors in monitoring and performance management, and the implications of basing resource allocation on performance measures. The paper relates to the conference theme by specifically noting the challenges and opportunities of evaluation and performance measurement when dealing with a network of complex institutions in different parts of the world and engaged in different kinds of activities, yet sharing the common goals of attaining sustainable food security and reducing poverty (cgiar.org). Keywords: Performance measurement; Performance indicators; Agricultural research; Research for development;
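As a hedged illustration of finding (i), the sketch below shows one simple way to flag indicators whose year-to-year swings are large relative to their mean; the centers and scores are invented, not actual CGIAR data.

```python
# Illustrative sketch (not the CGIAR system itself): a high coefficient of
# variation suggests annual rating and reward decisions rest on noisy signals.
import pandas as pd

scores = pd.DataFrame({
    "center": ["A"] * 4 + ["B"] * 4,
    "year":   [2006, 2007, 2008, 2009] * 2,
    "score":  [0.72, 0.55, 0.81, 0.60, 0.40, 0.42, 0.39, 0.43],
})

cv = (scores.groupby("center")["score"]
            .agg(lambda s: s.std() / s.mean()))   # coefficient of variation per center
print(cv)
```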


S4-25 Strand 4

Paper session

Evaluation of income support, credit and insurance interventions I


O 109

The evaluation of the service voucher system in Belgium (2005-2011)


D. Valsamis 1

Wednesday, 3 October, 2012

IDEA Consult, Employment and social policy, Brussels, Belgium

17:00 - 18:30

The subsidized service voucher system was launched in 2004 by the Belgian federal government to encourage both demand and supply for domestic services (cleaning, laundry and ironing). The system aims to create new jobs, especially for low-skilled workers. Secondly, the system provides incentives to transform undeclared work into regular jobs in a sector where moonlighting is frequent. Last but not least, the system facilitates the work-life balance of users, as it becomes easier and more affordable for individuals to outsource domestic work. Since the introduction of the system in 2004, the service vouchers have been evaluated seven times by IDEA Consult, at the request of the Belgian Federal Ministry of Employment. These evaluations focus on different aspects: the effect of the service voucher system on the employment of target groups; the overall cost of the measure for the government; the quality of service voucher employment; the impact of service vouchers on the employment of users, etc. Multiple methods have been used in each evaluation, based on the annual priorities, e.g. an analysis of administrative data, interviews and surveys amongst workers, companies and users, a financial analysis of the measure, and an impact assessment of different measures to improve the system. The following general conclusions can be drawn from the different evaluations of the service voucher system: The service voucher system is an important generator of new jobs for target groups (older workers, the low-skilled and foreigners). In 2010, 136,915 workers were employed in the system, which represents 3.6% of the population at work. The service voucher system has allowed undeclared work to be transformed into regular jobs in the cleaning sector. The strict regulation of the system has guaranteed a decent quality of work in the service voucher system. The service voucher system is well rooted in the habits of users. In 2010, 760,705 people used service vouchers, which represents 9.1% of the Belgian population. Moreover, despite several price increases, the number of users is increasing year by year. The use of service vouchers facilitates the work-life balance of users but also has an important impact on the employment rates of users. The service vouchers represent an important cost for the government: in 2010, the system cost EUR 1.4 billion. However, due to the creation of additional jobs, the system also generates returns for the government (e.g. savings in unemployment benefits, surpluses in social contributions and income taxes, etc.). Our estimations showed that these direct and indirect returns reduce the cost of the measure for the government by 50%. In a context of demographic change and of a need to increase the working hours of persons of active age, the Belgian service voucher system is a good practice that could be transposed to other European countries. Moreover, the Belgian service voucher system is the only subsidized system for domestic services that has been evaluated in such a systematic way. The conclusions of these evaluations might therefore be very interesting for other European countries. Keywords: Service voucher; Evaluation; Domestic services; Undeclared work; Work-life balance;
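A back-of-the-envelope sketch of the cost logic reported above (a gross cost of EUR 1.4 billion in 2010, offset by roughly 50% in direct and indirect returns); the per-worker figure is derived here purely for illustration and is not a result reported by the evaluations.

```python
# Illustrative arithmetic only, using the headline figures from the abstract.
gross_cost = 1.4e9               # EUR, 2010 budgetary cost of the scheme
estimated_return_share = 0.50    # direct + indirect returns, per the evaluations
workers_2010 = 136_915           # workers employed in the system in 2010

net_cost = gross_cost * (1 - estimated_return_share)
print(f"net cost to government: EUR {net_cost / 1e9:.2f} billion")       # ~0.70 billion
print(f"illustrative net cost per worker: EUR {net_cost / workers_2010:,.0f}")
```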

O 110

Evaluation of EIB Intermediated Lending to SMEs in the EU from 2005 to 2011


U. H. Brunnhuber 1
1

European Investment Bank, Operations Evaluation, Luxembourg, Luxembourg

The paper will present preliminary findings of the still ongoing evaluation of support to small and medium-sized enterprises (SMEs) via Financial Intermediaries (FIs) by the European Investment Bank (EIB, the Bank). The EIB is the policy bank of the European Union (EU). Its mission is to support EU policies (www.eib.org). Policy makers tend to take it for granted that supporting SMEs is good for economic growth and employment creation. After all, it is argued, SMEs are the backbone of the European Union's economy. According to the European Central Bank (ECB), 99.8% of firms in the euro area are SMEs. Altogether, they account for 70% of employment, making a significant contribution in terms of skills and job creation. SMEs are also a driving force of Europe's growth, accounting for 60% of its turnover, generating innovation and enhancing competitiveness. In addition, they can be a key factor of local and social integration. Many, if not all, IFIs have programmes and products in place to support SMEs, striving inter alia to improve their access to finance. The EIB likewise supports SMEs. Its tools are based on the principle of intermediation through financial intermediaries. This ongoing evaluation assesses how the Bank, utilising two specific products, global loans and, since 2008, loans for SMEs, is trying to reach out to SMEs across the EU-27. The evaluation intends to reflect on the implementation of the EIB's strategies with respect to SMEs in the context of the financial crisis and to provide insights into the benefits accruing to SMEs. The paper discusses the structuring and emerging findings of this still ongoing evaluation. In particular it highlights the dynamic nature of the evolving policy context in light of the financial crisis. It also covers how the evaluation attempts to reach out to SMEs through systematic site visits and through a large-scale SME survey. The evaluation is based on a representative sample of 20 operations out of ca. 500 financed by the EIB over the period 2005 to 2011. Through these 20 operations, to date some 15,000 SMEs have received EIB funding

through 18 FIs in 11 EU countries. Systematic site visits as well as a concurrent large scale SME survey across these beneficiary SMEs are conducted. This survey is being carried out by a professional surveying firm. As the evaluation is on-going and has not yet been presented to the EIB Board of Directors, the paper focuses on the rationale behind its structuring, aspects of methodology, and emerging findings. Keywords: Small and medium-sized Enterprises (SMEs); Global loans; Loans for SMEs; Financial intermediation;

O 111

Comparing Program Theories and actual developments. The case of Citizens' Income in Naples
R. Lumino 1
1

University Federico II of Naples, Gino Germani, Naples, Italy

Wednesday, 3 October, 2012

The paper presents the main findings of my doctoral research, concerning the evaluation of a local minimum income scheme known as Citizens' Income. Introduced in Campania from 2006 to 2010 as part of a temporary and experimental project, it was a means-tested measure addressed to low-income families. The Citizens' Income comprised the provision of a fixed allowance together with inclusion in social and non-mandatory activation programs. The paper focuses on the evaluation of the social programs in the Naples area. These actions included: counselling and support to beneficiaries to access existing social and health care services, professional and education counselling, and activities for promoting empowerment. The analysis is based on Theory-Based Evaluation (TBE). It requires surfacing the assumptions on which the program is based in considerable detail: what activities are being conducted, what effect each particular activity will have, what the program does next, what the expected response is, what happens next, and so on, through to the expected outcomes. The evaluation then follows each step in the sequence in order to assess whether the expected mini-steps are actually experienced or not. It helps in knowing not only what the outcomes of a program are but also how and why those outcomes appear or fail to appear. TBE provides information about the mechanisms linking program activities and the achievement (or non-achievement) of expected results, and makes it possible to identify the weaknesses of the program and/or its implementation. Therefore, it suggests directions of transformation for the improvement of program planning as well as for the development of prospective, different and more effective strategies. A major aspect of the evaluation project is the adoption of a participatory approach, based on the active involvement of different stakeholders (program designers, practitioners and beneficiaries). The participatory approach is a key element for understanding the different languages used by the wide range of actors involved, with the different nuances of meaning, beliefs and cultural backgrounds at stake. The research study was conducted over three years, following the various stages of implementation of the program. Hunting causal mechanisms has required a solid understanding of the decision-making context as well as deep knowledge of the program's inputs and a critical review of desired program goals. We have built Program Theories by an inductive process, using different data sources: official documents produced by the municipal administration, observation, regular meetings for discussion and coordination with practitioners and program designers, and focus groups with various stakeholders. Next, we have compared the expectations generated by the Program Theories with empirical monitoring data and open-ended interviews with beneficiaries, to give body and voice to the Program Theories and to reflect on the gaps between Program Theories and actual developments. Particular emphasis will be given in the paper to understanding the mechanisms by which the given program produces its effects. Keywords: Causal mechanisms; Theory Based Evaluation; Social work;
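As a purely illustrative sketch of the TBE logic described above, the snippet below lays out programme-theory mini-steps next to whether each was observed; the step names and findings are invented and are not results of the Naples study.

```python
# Hypothetical programme theory: (expected mini-step, observed?) pairs.
program_theory = [
    ("beneficiary signs a personalised inclusion agreement", True),
    ("counselling session within one month of enrolment",    True),
    ("referral to training or social/health services",       True),
    ("beneficiary takes up the referred activity",           False),
    ("improved employability / social inclusion",            False),
]

for step, observed in program_theory:
    print(f"{'observed' if observed else 'NOT observed':>12}: {step}")

# The first step that fails to materialise points to where the causal chain
# breaks down and where theory or implementation needs revisiting.
first_break = next((s for s, ok in program_theory if not ok), None)
print("chain breaks at:", first_break)
```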

17:00 - 18:30


S5-27 Strand 5

Panel

Joint Evaluation of Dutch Development NGOs


O 112

Joint Evaluation of Dutch Development NGOs


W. Rijneveld 1, R. Rutten 2, K. Chambille 3, D. De Groot 4, Y. Es 5, P. Das 6, R. Van Zorge 7, I. Guijt 8, H. Huyse 9
1 Woord en Daad, Result Management and Learning, Gorinchem, Netherlands 2 Cordaid, Den Haag, Netherlands 3 Hivos, Den Haag, Netherlands 4 ICCO, Utrecht, Netherlands 5 Oxfam Novib, Den Haag, Netherlands 6 ZOA, Apeldoorn, Netherlands 7 RutgersWPF, Utrecht, Netherlands 8 Learning by Design, Randwijk, Netherlands 9 HIVA KU Leuven, Leuven, Belgium

Wednesday, 3 October, 2012

17:00 - 18:30

Karel Chambille, Peter Das, Yvonne Es, Dieneke de Groot, Mirjam Locadia, Wouter Rijneveld, Rens Rutten, Ruth van Zorge, Irene Guijt, Huib Huyse. Introduction: From 2011 to 2015, 74 Dutch development NGOs organized in 19 alliances receive 1.9 billion euros of Dutch development cooperation funds (co-financing framework MFS2). The relatively strict evaluation requirements were hard for individual organizations to meet. Therefore, the 19 alliances designed one overall evaluation proposal that covers the whole portfolio. Setup: The commissioning of the evaluation needed to be done independently from the organizations that took the initiative for this joint evaluation. A steering group consisting of independent international experts was formed. This group takes decisions about Terms of Reference and evaluation reports. It uses the expertise of the Netherlands Organization for Scientific Research (NWO) to manage the evaluation research. The steering group is advised by a group of independent advisors. The NGOs formed a foundation that relates to the steering group and to the Dutch government on behalf of the NGOs. Content: The evaluation design distinguishes four levels of results: 1) civil society at large, 2) capacities of partner organizations, 3) changes in the livelihoods of people (related to the MDGs), and 4) international lobby and advocacy. The first three groups of results are evaluated in eight countries by evaluation teams (one for each country) that have the expertise to evaluate these in a robust and coherent way. This will enable the teams also to research the relations between changes in organizational capacities, changes at population level and changes in civil society at large. A synthesis team has the task of harmonizing the approaches of the eight country teams in order to allow comparative analysis. The evaluation requirements demanded that a difference-in-differences approach be used. Sampling: Sampling of the eight countries was done through a process of random selection under the following conditions: each of the Dutch alliances is present in at least one country and different types of countries are included. Out of a total of 453 projects in these countries that included the strategy of direct poverty alleviation (not only civil society strengthening and lobby and advocacy), a random sample of 60 projects was selected where field research is done. Organizational capacities are researched in a randomly selected sample of 63 partner NGOs. The country studies are done in 2012 and will be repeated in 2014. The civil society part makes use of the Civil Society Index developed by Civicus. The partner NGOs' capacities part makes use of the five core capabilities approach developed by ECDPM. Panel contributions: 1. Rens Rutten, structure, design and contents of the evaluation. 2. Irene Guijt, outsider perspective with considerations on methodology: challenges to validity, relation with intervention theories, possibilities and limitations of quasi-experimental design. 3. Huib Huyse, outsider perspective with considerations and challenges for learning at individual, organizational and sector level. 4. Karel Chambille, perspective of usage by the Dutch NGOs and partner organizations. Discussion led by Wouter Rijneveld. These persons form the Internal Reference Group: evaluation managers from the NGOs involved, who took the initiative for this evaluation. Keywords: Joint Evaluation; Large scale evaluation; Multiple level evaluation; Development NGOs; Civil society;
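A minimal difference-in-differences sketch, under assumptions the abstract only implies (a supported group, a comparison group, a 2012 baseline and a 2014 follow-up); the outcome figures are invented and the joint evaluation's actual estimators are of course richer.

```python
# Illustrative 2x2 difference-in-differences calculation on hypothetical means.
import pandas as pd

data = pd.DataFrame({
    "group":   ["treated", "treated", "comparison", "comparison"],
    "period":  ["baseline", "followup", "baseline", "followup"],
    "outcome": [10.0, 14.0, 9.5, 11.0],
})

m = data.pivot(index="group", columns="period", values="outcome")
change_treated    = m.loc["treated", "followup"]    - m.loc["treated", "baseline"]
change_comparison = m.loc["comparison", "followup"] - m.loc["comparison", "baseline"]
did = change_treated - change_comparison   # effect under the parallel-trends assumption
print("difference-in-differences estimate:", did)   # 4.0 - 1.5 = 2.5
```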


S1-14 Strand 1

Paper session

Network effects on evaluation and organization II


O 113

Virtual Networks and their Potential Contribution to Dissemination of Monitoring and Evaluation Knowledge and Results of Social Policies in Brazil
Wednesday, 3 October, 2012
M. P. Joppert 1
1

17:00 - 18:30

Brazilian Evaluation Agency / Brazilian Monitoring and Evaluation Network, Brasilia/DF, Brazil

Concise Bios: Civil Engineer (USP, Brazil, 1987) and Master in Public Administration (IUL, Lisbon, 2010). After 21 years working as a project manager, I am today involved in the M&E area, working as an independent consultant and volunteer in national, regional and international organizations and networks. Resume: A big challenge faced by areas dedicated to supporting monitoring and evaluation in public organisations is to disseminate their knowledge and increase the use of their results. In highly complex policies, such as those in social policy areas, with so many interfaces and geographically dispersed, it is even more difficult. Some technological instruments can support these organisations, and one of the most useful is virtual networks. The paper will present a comparative study of virtual networks used as capacity-building instruments in order to analyze the reasons why some of them fail. We will look especially at RENMAS, the National Network for Monitoring Social Assistance Services in Brazil, created in November 2008. This network operates in the Department of Evaluation and Information Management of the Brazilian Ministry of Social Development and the Fight Against Hunger (SAGI/MDS), which also has the task of supporting all the areas of the ministry in monitoring and evaluating their programs, projects and actions. The social assistance programs and services have a wide diversity of stakeholders: public managers in the three spheres of government (federal, state and municipal), private actors and NGOs, spread over 5,500 municipalities. RENMAS was created as an interaction and cooperation environment among public managers from the three spheres: a collective learning space focusing on the improvement of monitoring activities. Experience shows that networks can be used as effective tools for capacity-building, connection, cooperation and collaboration among members of specific communities. However, certain characteristic attributes should be present to ensure their success and sustainability. These attributes were identified in the specific literature and, based on that, 11 networks were compared. The conclusion is that many virtual capacity-building networks are created but only a part of them is sustainable, and the key success factors are: a clear definition of their mission and vision; a clear definition of their target audience; a clear definition of their governance (leadership and organisational structure); the involvement of key stakeholders; the adoption of a good communication platform to guarantee adhesion, participation and integration between members; the existence of funding; and their use as capacity-building instruments as well as articulation instruments between stakeholders. Even though these attributes are difficult to achieve simultaneously, especially in public organisations, the analysis of successful networks shows that in general they met these key factors, and some recommendations were made to SAGI/MDS to improve the use of RENMAS. In addition, the paper will hopefully help those organisations that aim to use virtual networks to disseminate monitoring and evaluation findings and knowledge and to build M&E capacities among their stakeholders to carry out successful initiatives. Keywords: Virtual networks; Capacity-building; Social policies; Monitoring and evaluation;

O 114

The Rise of Superstars - Evaluating the network and career effects of artists' participation in the documenta
G. M. Hellstern 1
1

University of Kassel, Department of Business and Economics, Kassel, Germany

Lately the number of large contemporary art exhibitions has increased around the world, as has evaluation research on the economic impacts of those events. The rising research interest is partly due to the hope of finding and estimating the direct and indirect economic effects of the event for a city or region (Towse 2011, Frey 2011, Seaman 2011) and, more recently, of determining their potential contribution to the creative economy as a major growth industry (Caves 2000, Florida 2002, UNCTAD 2008, UNESCO 2009, EU 2009). But we rarely find evaluation of the impact of such events on the careers of the artists. What are the benefits for an artist of being selected? How important is the previous network of relations? Past research on artist labour markets (Alper 2006) and art prices (Mandel 2009), impressive as it is, has so far contributed little to the determinants of potential careers and networks of artists. As part of our evaluation of the documenta between 1992 and 2012 (Hellstern 2010, 2012), we conducted an evaluation of the influence of artists' participation in the documenta on their future careers and networks. The study relates participation in the documenta to rankings, prices and network extensions in the years following the documenta. The results of the evaluation for a time span of twenty years indicate a strong positive relation to the event, although the influence seems to be declining in the more recent events. Keywords: Contemporary art; Art festivals; Art evaluation methods; Network and longitudinal analysis;


O 115

Networked management model: sustaining operation of an evaluation network without an executive office
N. Kosheleva 1
1

Process Consulting Company, Moscow, Russia


Wednesday, 3 October, 2012

17:00 - 18:30

The International Program Evaluation Network (IPEN) was established in 2000 to promote the profession of evaluation in the Commonwealth of Independent States (CIS) region. IPEN was founded by several organizations, mostly NGOs, from several CIS countries. At that time, national legislation in the CIS countries did not (and still does not) recognize such an entity as a regional organization. The operation of foreign organizations in all countries was restricted. Given these challenges, IPEN adopted a model of networked management in which the founding organizations assumed shared responsibility for managing IPEN's operation. For example, IPEN has implemented a number of projects supported with grants from international donor organizations. In each case grant management was carried out by one of the founding organizations. The decision on which of the organizations will manage a grant is made by the IPEN Board. The Board also establishes a project committee to oversee the project implementation. There were also several instances when IPEN partnered with non-founding NGOs interested in the promotion of evaluation to run IPEN regional conferences. The networked management model adopted by IPEN has allowed the network to reach all areas of the CIS region. This model may be useful for other emerging evaluation networks, both regional and national, as it minimizes operational costs and builds a sense of shared ownership in the network. The presenter has served on the IPEN Board since 2009, and since September 2011 as Deputy Chair of the Board. She has been working in the field of evaluation since 1996. Keywords: Evaluation network; Operational management; Networked management model;


S5-14 Strand 5

Paper session

The interaction of evaluation, research and innovation I


O 116

Rise of the Games Industry - Impact Assessment of Public RDI Activities


P. Pesonen 1

Wednesday, 3 October, 2012

Tekes, Helsinki, Finland

17:00 - 18:30

The games industry in Finland has been growing significantly during the last few years. It has introduced several successful games, like Habbo Hotel, Alan Wake and Angry Birds. Lately, the industry has received a lot of publicity and has even been described as a new pillar of the Finnish economy. The industry has attracted venture capital and other private funding. The turnover and the number of companies have grown steadily even during the economic downturn. Technology development has been rapid and completely new markets for social and mobile games have opened. At the same time, competition is intense, the lifecycles of the products are short and for each success there are tens of failed games. What has been the role of RDI, and especially publicly financed RDI by an innovation agency, in the evolution of the sector? Recently, we have carried out three separate studies of the subject. However, they have some common elements; for example, the games industry is a cross-cutting theme in all of them. The first study was the evaluation of three public RDI programmes in software, mobile solutions and gaming. The first programme was carried out in 2000-03, so the evaluation also captured a longer-term perspective to study achieved impact. The methods used were document analysis, interviews, a survey, a workshop, case studies and statistical analysis. In another study, SfinPact, the most important innovations in the sector were identified. The SfinPact study utilized a unique database consisting of nearly 5,000 recognised innovations. The main drivers and the role of a public funding agency (Tekes) over time were analysed. The third study concentrated on innovation capacity-building and on what impact Tekes has had and could have in the future. The study looked at innovation systems at the national level and made comparisons with successful nations. On the other hand, it looked at the innovation support needed from the viewpoint of individual organisations. In this presentation, my aim is to synthesize the findings from these three studies. Keywords: Innovation; Games industry; Evaluation;

O 117

Efficiency evaluation of the Brain Korea 21 program


S. C. Byeon 1, S. J. Kim 1, Y. S. Ko 1
1

KISTEP (Korea Institute of Science and Technology Evaluation and Planning), HRST Policy Division, Seoul, Republic of South Korea

In the late 1990s, the Korean government, in response to concern over the relatively low standing of the nation's universities and researchers, launched the Brain Korea 21 (BK21) Program. The BK21 program seeks to nurture globally competitive research universities and to breed high-quality research manpower in Korea. It provides fellowship funding to graduate students, post-doctoral fellows, and contract-based research professors who belong to research groups at top universities. In Phase II, which began in 2006 and is scheduled to run through 2012, BK21 allocates about US$ 260 million a year. In this paper, the efficiency of the BK21 program is analyzed. In order to find out whether the objectives of the program are being efficiently accomplished, a survey was carried out targeting professors and students participating in the BK21 program. Based on the survey results, the paper reviews the quantitative efficiency by drawing up key indicators to measure the efficiency of the program, and reviews the qualitative efficiency by analyzing the consistency between contributiveness and importance. It is found that the quantitative efficiency of the BK21 program has substantially improved. That is, the quantity and quality of papers published by graduate students have been greatly enhanced relative to the funding provided by the BK21 program. Such improvements result from a more prevalent academic atmosphere for graduate students and from support for doctoral courses being expanded more than for master's courses. In addition, the qualitative efficiency of the BK21 program is analyzed. Specifically, students recognize much higher efficiency of the BK21 program than professors do. Furthermore, students in the liberal arts or social sciences departments perceive higher efficiency than those in the engineering or natural sciences departments, while the professors show the opposite pattern. In this context, sub-programs should be diversified in order to increase the level of satisfaction for professors and students, and for the natural sciences or engineering departments and the liberal arts or social sciences departments alike. For example, as an education program for the capacity building of students, the share of individual support should be expanded, especially in the liberal arts and social sciences departments. Moreover, in order to increase the research performance of both professors and students, it is necessary to increase the number of students in doctoral courses, post-docs, and research assistants. Keywords: Program evaluation; Efficiency evaluation;
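A hedged illustration, not KISTEP's actual indicator set, of one way the two efficiency readings described above could be operationalised; the publication counts and survey means are invented, while the annual funding figure comes from the abstract.

```python
# Quantitative efficiency: output relative to funding (hypothetical counts).
funding_musd = 260.0                       # annual BK21 Phase II allocation, per the abstract
papers_before, papers_after = 4200, 6300   # invented publication counts
print(f"papers per US$ million: {papers_after / funding_musd:.1f}")
print(f"growth in output: {papers_after / papers_before - 1:.0%}")

# Qualitative efficiency: consistency between perceived importance and perceived
# contribution of each sub-programme (hypothetical survey means on a 1-5 scale).
survey = {"fellowships": (4.6, 4.1), "post-docs": (4.2, 3.3), "research professors": (3.8, 3.7)}
for item, (importance, contribution) in survey.items():
    print(item, "importance-contribution gap:", round(importance - contribution, 1))
```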


O 118

A framework for analysing the societal impact of research and innovation


P. Pesonen 2, T. Raivio 1, K. Viljamaa 3

1 Gaia Consulting Oy, Helsinki, Finland 2 Tekes - Finnish Funding Agency for Technology and Innovation, Strategic Intelligence, Helsinki, Finland 3 Ramboll Management Consulting Oy, Helsinki, Finland

The interest in assessing the impact of research and innovation has been continuously increasing due to the need to understand the role of innovation in the competitiveness and renewal of economies. The research has been motivated by the need to find evidence on the impacts of public spending on research, development and innovation, as well as to link research and innovation policy measures with broader objectives in society. For a long time, Tekes (the Finnish Funding Agency for Technology and Innovation), the Academy of Finland and the Finnish Research and Innovation Council, as well as several consulting think tanks, have been developing a unified model of impact chains from RDI inputs to societal impacts in four impact areas: Economy and economic renewal, Environment, Well-being, and Skills and Culture. These impact areas cover the main societal challenges and opportunities that can be seen as societal objectives in Finland. The model describes the impact chains by linking inputs, activities, outputs and impacts in each impact area. Rather than being a complete explanation of innovation, the model aims at a better understanding of the impact chain. In this paper we present the experiences and outcomes from an exercise that aimed at operationalising the model. In this exercise, the phenomena related to the impact chains, as well as the indicators describing the state of those phenomena in a quantitative way, were identified in a series of broad-based stakeholder workshops. The work was supported by an international benchmarking exercise and a concrete implementation plan. Besides selecting existing indicators, new indicators were also proposed. From the evaluation perspective, impact indicators are of utmost importance since they define how we measure impact. Indicator-based impact measurement clarifies strategic goals, creates transparency, and improves communication in the operating environment and in financing. Simplified models and indicator selection also require an understanding of the limitations of the approach. Central challenges encountered here were causality in the innovation chain, the attribution of specific impacts to specific inputs, international linkages, the time scale of indicators, and the breakdown of impacts to particular socio-economic targets. On an international level, there are very few indicator activities that genuinely link socio-economic impact factors to research and innovation activities. For some representative examples, see OECD, 2007 and Statistics Canada, 1998. The work carried out here seems to be the first attempt to operationalize such a model. Notes: Authors in alphabetical order. Also Adjunct Professor, Aalto University. See: Better results, more value. A framework for analysing the societal impact of research and innovation. Tekes review 288/2011. OECD, 2007. Science, Technology and Innovation Indicators in a Changing World: Responding to Policy Needs. Statistics Canada, 1998. Science and Technology Activities and Impacts: A framework for a Statistical Information System. Keywords: Impact assessment; Indicator; Impact model;
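As a sketch only, the four-level impact-chain logic described in the paper could be represented as data along the following lines; the impact area name comes from the abstract, while the inputs, activities, outputs and indicator names are assumptions for illustration.

```python
# Illustrative data structure for an impact chain (inputs -> activities ->
# outputs -> impact indicators) in one of the four impact areas.
from dataclasses import dataclass, field

@dataclass
class ImpactChain:
    impact_area: str
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    impact_indicators: list = field(default_factory=list)

economy = ImpactChain(
    impact_area="Economy and economic renewal",
    inputs=["RDI funding", "researcher time"],
    activities=["firm-research collaboration projects"],
    outputs=["new products and processes"],
    impact_indicators=["value added of innovating firms", "export share of new products"],
)
print(economy.impact_area, "-", len(economy.impact_indicators), "indicators")
```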

Wednesday, 3 October, 2012

17:00 - 18:30


S2-15 Strand 2

Paper session

Improving evaluation practice


O 119

Customer satisfaction survey of evaluation services as a means towards professional evaluation and ensuring evaluation independence and credibility
Wednesday, 3 October, 2012
C. Bugnion de Moreta 1
1

17:00 - 18:30

Subur Consulting S. L. , Sitges, Spain

Bios: Economist with twenty-five years of work experience and sixteen years of evaluation practice, director of Subur Consulting S. L., with a track record of some sixty evaluations undertaken for donors, UN agencies, international organisations, NGOs and private sector companies. Specific interest in participatory evaluation processes leading to positive change and in training on M&E. Rationale: to show how client evaluation of evaluation services can be a tool for networking, contributing to evaluation practice, transparency and improved evaluation quality. Objectives: to show that feedback from customers having contracted evaluation services, using a number of recognised quality criteria for evaluations, contributes to transparent evaluation practice and can be a useful source of shared information for networking. In 2003 we systematically introduced into all our contracts a request to have our evaluation services evaluated by our customers once the evaluation was completed and the final report accepted. To date, of 43 evaluations undertaken, we have received 27 survey forms duly completed (62.8%) and requested additional feedback in 3 cases (7%) but did not receive any response. In an additional 13 cases (30.2%) we did not think that the TOR or the management of the evaluation were conducive to warranting a response from the client in terms of customer satisfaction (e.g. compliance and mandatory evaluations with no buy-in or commitment to results from the client, or lack of ownership of the evaluation). This customer satisfaction system is both a way of ensuring commitment to the evaluation work and appropriation by the client (since a number of questions have to do with the results and use of the evaluation services by the client), and a source of references for other potential evaluation clients. In line with the commitment to transparency, all the evaluation reports that were declared public have been placed on our website, or a link to download the reports appears on the website. This concerns 27 evaluation reports (62.8%), while in 11 cases (25.6%) clients declared the reports to be internal, and in five cases (11.6%) the public posting of reports was not part of the assignment objectives and therefore a Not-Applicable rating has been given. Each customer satisfaction survey is available and can be consulted online on our website, in addition to the table containing the average and individual ratings obtained. We are committed to showing the results obtained from the survey no matter how low they are, and it is understood that there will always be a range of results depending on the nature and type of evaluation undertaken. At the same time this system could be replicated by those evaluation professionals who are also committed to transparency and willing to share and disseminate evaluation practice. Details available at www.suburconsulting.es Keywords: Customer satisfaction; Transparency; Accountability; Information dissemination; Evaluation practice;

O 120

Quality evaluations as seen from the perspective of an external evaluator


C. Bugnion de Moreta 1
1

Subur Consulting S. L., Sitges, Spain

Bios: Economist with twenty-five years of work experience and sixteen years of evaluation practice, director of Subur Consulting S. L., with a track record of some sixty evaluations undertaken for donors, UN agencies, international organisations, NGOs and private sector companies. Specific interest in participatory evaluation processes leading to positive change and in training on M&E. Rationale: Based on a review of 59 evaluations undertaken since 1995 for a range of UN agencies, donors, NGOs and international organisations, present a summary of the evaluation practice in these 59 cases and discuss ways to further improve evaluation management and practice. Objectives: using seven different criteria to appraise the quality of the evaluations undertaken, review and discuss possible improvements towards ensuring quality evaluations. I undertook from 1995 until 2011 a total of 59 evaluation missions. I have reviewed these evaluations and rated them against the following seven criteria: 1. Discussion of TOR with evaluator (Yes or No) 2. Adequacy of TOR and objectives (Yes or No) 3. Independence of evaluation (Yes, No, average)

4. Quality control mechanism level (low, average, good) 5. Use of evaluation (Yes, No, N/A) 6. Management response (Yes, No, N/A) 7. Evaluation placed in public domain (Yes, No, N/A). In order to protect clients' identity and confidentiality, the data have been protected and only the results of the ratings are mentioned. For statistics, a separate count is made for UN agencies, donors, NGOs, international organisations and other clients. The purpose is to present some of the most common shortfalls in evaluation practice, discuss ways forward to contribute to improved practice methods, and discuss how to fill the gaps. The ratings across the 59 evaluations were as follows:

TOR discussed: Yes 15 (25.4%), No 44 (74.6%)
TOR adequate: Yes 44 (74.6%), No 15 (25.4%)
Independence: Yes 49 (83.1%), average 6 (10.2%), No 4 (6.8%)
Quality control level: low 27 (45.8%), average 26 (44.1%), high 6 (10.2%)
Use of evaluation: Yes 28 (47.5%), No 5 (8.5%), N/A 26 (44.1%)
Management response: Yes 12 (20.3%), No 11 (18.6%), N/A 36 (61.0%)
Public domain: Yes 31 (52.5%), No 21 (35.6%), N/A 7 (11.9%)

A possible way forward based on these results (and others to be discussed) is that the EES could consider providing peer review services for evaluations, so that neutral and external experts can provide quality control regarding the evaluation products. While most strategic and large evaluations are now directed by a management board or steering committee, instead of a single person acting as evaluation manager, the submission of expert advice from people not related to the object being evaluated or to the institution is likely to provide a fair appraisal based solely on technical grounds, while also enhancing the level of quality control of the evaluation. A group of peer reviewers could be identified from among the EES members to provide such a service. Keywords: Quality control; Independence; Peer review; Transparency;


S2-36 Strand 2

Panel

Innovative Approaches to Impact Evaluation: Session 2


O 122

Innovative Approaches to Impact Evaluation: Session 2


E. Stern, J. Mayne, K. Forss, B. Befani, N. Stame, R. Davies

Thursday, 4 October, 2012

9:30 - 11:00

Rationale: For the last year an international team of leading evaluation researchers and practitioners has been working together on a study commissioned by the UK's Department for International Development (DFID) with the aim of broadening the range of impact evaluation designs and methods. Impact evaluation has been vigorously debated in the evaluation community recently, with advocates of experimental methods often arguing that only their approaches are rigorous and robust. DFID wanted to identify and assess ways of evaluating impact that could be applied to its more complex programmes, where it had found that experimental methods and RCTs were not suitable. They were particularly interested in designs that were qualitative rather than statistical, and theory-based, and that could be demonstrated to be of high quality. The study reviewed actual evaluations and established and emergent methods, and analysed the attributes of programmes drawing on complexity and organisational theory. The team was supported by advisors drawn from practicing evaluators, social science methodologists and philosophers of science. This will be the first dissemination of a major study that addresses important issues for the evaluation community. It fits well within the conference strand on Evaluation research, methods and practice. Although the study was commissioned to support international development evaluations, it took a cross-domain perspective, and these sessions will be of relevance to all those interested in innovative designs to evaluate the impacts of policies and programmes. Proposers (all members of the study team): Elliot Stern is an evaluation practitioner and researcher based in the UK. He edits the journal Evaluation; is visiting Professor at Bristol University and Professor Emeritus at Lancaster University; and is a past President of the EES. He was the team leader for this study. John Mayne practices as an evaluator in Canada. He was previously at the Canadian Treasury Board and the Office of the Comptroller General. He has been developing approaches to Contribution Analysis for many years and is also an expert in Results Based Management. Kim Forss works as an independent evaluation consultant based in Sweden and has co-edited the recently published book Evaluating the Complex. He has been President of the Swedish Evaluation Society and is a Board Member of the European Evaluation Society. Barbara Befani is an evaluation methodologist and consultant with a particular interest in frontier methods and designs in evaluation, including mathematical approaches to small-n situations. She has been a methodological advisor to Italian and EU public programmes. Nicoletta Stame is Professor at the University of Roma Sapienza, and is past President of the European Evaluation Society and of the Italian Evaluation Association. She has written about evaluation methodology with particular reference to theory-based and impact evaluation approaches. Rick Davies is an independent Monitoring and Evaluation Consultant based in Cambridge, UK. His clients are international development aid organisations (multilaterals, bilaterals and INGOs). He has been managing the Monitoring and Evaluation NEWS website since 1997. Abstract: What difference does complexity make when evaluating for impacts? (introduced by Kim Forss): Evaluators and policy makers commonly describe programmes as complex. Real programmes that are described as complex can have various attributes.
They may overlap with other programmes; have unpredictable trajectories; extend over a long time-scale; or be non-standard e.g. locally customised. Different attributes that make up what we call complexity have implications for how the impacts of programmes can be evaluated. This will be illustrated through specific programme examples. Assuring the quality of IE designs and methods (introduced by Rick Davies): Policy makers need to be confident that the evaluations they rely on are of high quality. Some approaches to quality assurance in research and evaluation emphasise the process i.e. the way an evaluation is conducted; whilst other approaches emphasise technical aspects of methods and their use. At a high level of generality common standards can be applied across all designs and methods likely to be used in IE. However evaluation settings differ and the best mix of qualities or standards may have to be assessed in each case. Learning from IE in complex settings (introduced by Nicoletta Stame): Advocates of IE want to learn lessons, to generalise to future policy settings. This is also the underlying assumption of Evidence Based Policy. There are limits to this kind of learning in most IE settings. This is partly a matter of what scientists call external validity. It is also a consequence of the extent of complexity and how indeterminate outcomes are likely to be. Learning through evaluation strategies that are participatory and formative is one way forward. Looking to learn lessons about mechanisms may be another. http://mande.co.uk/


S2-37 Strand 2

Panel

What is excellent? The challenge of evaluating research


O 123

What is excellent? The challenge of evaluating research


A. Etherington 1, C. Coryn 2, K. Hobson 2, D. Schroeter 2, P. Mateu 2, S. Singh 3
1 IDRC, Evaluation Unit, Ottawa, Ontario, Canada 2 Western Michigan University, IDPE, Kalamazoo, USA 3 Amaltas, New Delhi, India

Thursday, 4 October, 2012

9:30 - 11:00

Research can strengthen policy, programs, and governance by encouraging open inquiry and debate, empowering people with new knowledge, and enlarging the array of solutions available. Funders and citizens want funding for research to go to organizations with the most promise to deliver excellent ideas, understandings, and solutions. Identifying and attaining excellence requires evaluating research. Existing tools for evaluating research have largely been designed for academic research and tend to focus on evaluating proposals or reports. The majority of these approaches rely on simple metrics, such as whether the research was published in a reputable journal or has been cited. These approaches are increasingly being criticized as conservative, arbitrary, and political and as stifling innovation and risk-taking in research. Perhaps this is nowhere more problematic than in international development research. Excellence here is likely not based on whether or not the research appeared in a top journal, but rather on whether it: (a) grapples with central problems, (b) is done with rigour and credibility, and (c) has findings that can be used to make decisions. For this, existing research frameworks are inadequate. The panel explores the landscape of research evaluation and suggests opportunities for creating frameworks that can identify research that is underexplored, ground-breaking, or influential in other ways. The panel flags the need for frameworks that include the processes and outcomes of research, in different and changing contexts. The panel asks: 1. What is current practice in evaluating research excellence? 2. What does research excellence mean in an international development research context? 3. What approaches are suited to evaluating excellence in international development research? Kristin Hobson and Pedro Mateu present trends in evaluating research, based on a document review and interviews with key stakeholders. The summary will focus on (a) the purpose of research excellence frameworks, (b) existing frameworks and the contexts in which they can be applied, and (c) strengths and weaknesses of the existing frameworks. Chris Coryn and Daniela Schroeter build on the summary of trends and existing frameworks for assessing research excellence in terms of their usefulness for (a) organizational and public accountability purposes and (b) project- and program-level learning and improvement. In doing so, they discuss several alternative frameworks that are flexible enough to accommodate a variety of users and uses, with a particular emphasis on evaluating research excellence in the context of international development. Suneeta Singh presents perspectives on the evaluation of research excellence from the global South based on surveys and key informant interviews. She draws on the views and practice of individual researchers and leaders of research organizations. In addition to summarizing the current state of evaluation of Southern research, she discusses the limitations of current practice and proposes new frameworks for evaluating research excellence. Amy Etherington is panel chair and discussant. This panel is complemented by the panel Evaluating research excellence for evidence based policy, and forms the first part of a two-part discussion. Keywords: Research evaluation; International development; Evaluation frameworks; Evaluating research excellence;


S4-04 Strand 4

Paper session

The impact of values and dispositions on evaluation approaches


O 124

Knowledge production and realist evaluation in social services: a case on adult social work
Thursday, 4 October, 2012
P. Saikkonen 1

9:30 - 11:00

National Institute for Health and Welfare, MEKA/FinSoc, Helsinki, Finland

The relation between politics and evaluation is a topic under discussion. The argument here is that politics has an effect on knowledge production; it defines the lines along which evaluation research proceeds. Knowledge production is an essential element, as the solutions to problems depend on how the problems are defined and how they are presented to decision-makers. The concept of knowledge production describes the various ways of formulating knowledge in scientific research as well as in everyday practices. The question of knowledge production is even more fundamental in the era of information technology and large databases, inasmuch as the need for information is apparent in the process of evaluating welfare services. Furthermore, professional, institutional, political and cultural connections have an influence on knowledge production. The aim of the paper is to ponder the potential impacts of knowledge production on the effectiveness evaluation of adult social work. The paper leans on the experiences of a project of the Finnish National Institute for Health and Welfare. The project has studied and developed measures for the effectiveness evaluation of adult social work with local partners. The results of the project are reported elsewhere; however, the project offers an opportunity to discuss different ways of organizing knowledge production in adult social work and the effects of organizations on evaluation. The paper will illuminate the knowledge practices and knowledge production related to these cases. Furthermore, the objective is to combine these concepts with the framework of realist evaluation. At least in the field of environmental policy, technical knowledge often holds a dominant position at the expense of other types of knowledge in decision-making at the municipal level. Furthermore, in welfare services quantitative measures may carry more weight than qualitative approaches. In addition, previous studies have shown that the knowledge of laymen is often ignored in decision-making, and this might be partly because of research and its conventions. Realist evaluation strives to reduce complexity by showing what works for whom, in what circumstances, in what respects and how. The framework of realist evaluation might be a useful approach in the evaluation of welfare services, yet there are challenges in bringing the relevant knowledge into decision-making. Thus it is necessary to clarify the links between knowledge production, practices and decision-making. Keywords: Knowledge production; Public policy; Adult social work; Realist evaluation;

O 125

Evaluations in the field of homelessness: a comparison between evaluations from the US and the EU-countries
V. Denvall 1
1

Lund University, Lund, Sweden

This presentation aims at comparing evaluations from the US and the EU in a decisive welfare area. In the European evaluation community it is often claimed that research from the US dominates. This is a common saying among evaluators, but few seem to have investigated it further. I suggest that the field of homelessness gives an opportunity to carefully examine the methods chosen by evaluators, the design of the evaluations, strategies and assessment criteria, and the impact of the evaluations. For the sake of cogency, a comparative approach will be used in the analysis. We have few analyses in the evaluation field of how methods and designs are linked to values and criteria, and of the extent to which evaluators on both sides of the Atlantic apply the same criteria in the same field. Homelessness as a political and social domain has a number of features that make it particularly illustrative for this conference, where we want to explore a European model in evaluation. In addition to being a wicked problem, it offers several analytical avenues. How do these characteristics affect the evaluation of programs and projects aspiring to combat homelessness? Dissimilar pictures and solutions regarding homelessness in the US and in EU countries will likely affect the recommendations given as a result of performed evaluations. The homelessness problem will be evaluated at different levels depending on ontological models: structural and/or individualistic, or a focus on pathways into and out of homelessness. According to a recently published review of research, there is a substantial lack of consensus on definitions and on the ways in which homelessness should be fought (Busch-Geertsema, V. et al (2010) Homelessness and homeless policies in Europe: lessons from research. Brussels: Feantsa). For this study, the empirical base is a sample of the most cited evaluations of homelessness programs published in professional journals between 1996 and 2010. Most of these evaluations are from the US. The analysis is not yet completed, but the data show that US evaluations more often seem to cover large-scale national programs and use quantitative methods, whereas EU evaluations more often use smaller sample sizes and qualitative methods. In both the US and the EU, evaluations of homelessness programs seldom use analytical approaches. Keywords: Comparative analysis; Homelessness; Methods; Criteria;


O 126

Measuring the Contribution of Volunteerism to Development: A United Nations Volunteers (UNV) Approach
Dieudonné Mouafo, PhD 1
1

Chief of UNV Evaluation Unit, Hermann-Ehlers-Strasse 10, 53113 Bonn, Germany

S4-04

The lack of standard measurement methodologies to assess the contribution of volunteerism to development and peace undermines the understanding of the scope, benefits and potential of volunteerism (State of the World's Volunteerism Report, 2011). Measuring the impact and contribution of volunteerism is useful for effective policy-making, for resource mobilization at this time of scarce financial resources, for aid effectiveness, or simply for better recognition of volunteers' work, upon which the sustainability of several development initiatives depends. Taking the measure of volunteering means looking beyond the numbers in order to effectively incorporate volunteerism into mainstream policies and programmes for peace and development, a challenge that evaluation should help to address. So far, the majority of studies on measuring the contribution of volunteerism do not include the volunteers' perspective. Volunteer-involving organizations' efforts to measure the contribution of volunteerism usually focus on the management of volunteer assignments. Global studies that attempted to measure the impact of volunteerism, such as the Gallup World Poll, the World Values Survey, the Johns Hopkins Comparative Non-profit Sector Project, and the CIVICUS Civil Society Index, have led to very different findings because of their different measurement approaches and definitions of volunteerism. Surveys on volunteering by the Red Cross and Red Crescent Movement (IFRC) mix quantitative and qualitative data; they are periodic and only cover selected fields and developing countries where data is available. A few studies (Calvo, 2008; Haski-Leventhal, 2009; Cohen, 2009; ICNL, 2009; Handy et al., 2010) have looked beyond economic data to research the nature and motivations of volunteers, their impact on beneficiaries, and the role of religion, policies and legislation, but they rely mainly on case studies. This paper presents a methodology developed by the United Nations Volunteers (UNV) to help its staff, and development practitioners as well, to get the evidence they need to demonstrate that volunteerism contributes to peace and development. It consists of a handbook, Assessing the Contribution of Volunteering to Development: A Participatory Methodology. The handbook, which was published in 2011, includes a set of tools for use in participatory workshops. This participatory methodology supports volunteer-involving organizations in obtaining answers to six basic questions, ranging from results achieved to stakeholder perceptions of volunteerism to lessons learned. The fundamental principle underlying the assessment approach in the handbook is that it should be a bottom-up process which draws on the experiences and perceptions of volunteers themselves, their partners and the intended beneficiaries of volunteering placements and programmes. The methodology does not necessarily aim to produce an impact assessment, but rather to promote an analysis of the results and contributions of volunteering to short- and long-term development goals. It provides opportunities for volunteers and their stakeholders to engage in various ways. Keywords: Volunteerism; Measurement; Contribution; Development; Methodology;



S2-18 Strand 2

Paper session

Integrating ethics in evaluation


O 127

Equity-focused developmental evaluation using critical systems thinking


M. Reynolds 1
1

Open University, Communication and Systems, Milton Keynes, United Kingdom


In the networked society, evaluative questions about access to resources (who gets what?) ought not to be seen in isolation from related questions of power (who owns what?). They also ought not to be seen in isolation from questions of knowledge and expertise (who does what?). Moreover, these questions relate to important questions regarding legitimacy (who gets affected by what some people get?). Such questions are often more easily avoided in a 'normal' evaluation for fear of the ethics and politics involved in addressing them. They may also not be easy to grasp or work with in terms of an approach to evaluating an intervention. Systems thinking offers a complement that supports a more political and ethical use of existing evaluation tools. The value of systems thinking is not in providing yet another set of methodological tools for dealing with complex situations of change and uncertainty. Increasingly, systems thinking is regarded as a type of interdisciplinary and transdisciplinary literacy, not only for making sense of complex and conflictual situations but for constructing ways of improving them. Moreover, systems thinking is a literacy founded upon ethical traditions of what's good (a consequentialist ethic), what's right (a deontological ethic), and what type of behaviour is required to enact goodness and rightness (a virtue-based ethic). So how might an equity-focused evaluation be supported by systems thinking? In this paper I examine pro-equity developmental evaluation using a general heuristic of systems thinking in practice (Reynolds, 2011). The heuristic framework comprises three entities: real-world messes, stakeholders associated with such messes, and systems as constructs or tools for dealing with such messes. Three corresponding activities are associated with each entity: understanding interrelationships, engaging with multiple perspectives, and reflecting on the limitations of our bounded frameworks (Williams and Iman, 2007; Reynolds, 2008a; Williams, 2011). Critical Systems Heuristics (CSH) is introduced as a systems approach with tools particularly relevant for drawing out ethical and political issues. Drawing on a significantly updated version co-authored by Reynolds and the originator of CSH, Werner Ulrich (Ulrich and Reynolds, 2010), CSH tools are explored as part of a systems thinking in practice framework for evaluating complex situations from different stakeholder perspectives. The situation under evaluation (e.g. a purposeful activity like a report, programme or project) is framed as a reference system in CSH by a toolbox comprising four sets of questions evaluating (1) built-in values, (2) power structures, (3) expert assumptions, and (4) the moral basis on which an intervention operates, as considered from the perspective of both intended beneficiaries and victims. The paper describes how CSH, and the underpinning methodological process of systems thinking in practice associated with boundary critique, contributes to Michael Patton's ideas on developmental evaluation. As a guide to the use of the heuristic framework, reference will be made to a complex evaluand: the 60-year Narmada Valley Development Programme in India. Keywords: Systems thinking; Equity; Developmental evaluation; Critical systems heuristics;

O 128

Sensitivity and Ethics of Conducting Research on Risky and Sexual Behaviour among Adolescents
P. Leah Wilfreda 1
1

University of Bohol, Graduate Studies and College of Education, Tagbilaran City, Philippines

This paper raises some methodological issues in conducting evaluation on sensitive topics among adolescents in specific contexts. Young people's needs vary tremendously depending on their stage of life (puberty, adolescence, and early adulthood) and on the context in which they live (PRB, 2000). Youth undergo a transition to adulthood; hence several needs emerge (e.g. reproductive health needs) as a result of physiological and psychological changes. In the midst of these changes, and as programs are implemented to meet their needs, the evaluation of programs addressing the needs of adolescents requires a closer look, especially at its methodological issues. Adolescence is a modern term meaning a period of life that starts at puberty and ends at the culturally determined entrance to adulthood (social maturity and economic independence). While adolescence is generally a healthy period of life, many young people are exposed to health risks associated with sexual activity, including exposure to STIs, unintended pregnancies, and complications from pregnancy and childbirth. As Erikson (1975) noted in his psychosocial theory of development, adolescence is a time when a teen focuses on the formation of identity and a coherent self-concept, as she faces the task of identity versus role confusion. This is a time when a teen tries to establish herself as an individual, capable of taking care of herself: no longer a child, yet still not an adult (Hurlock, 2005). Furman and Shaffer (2003) describe several developmental tasks faced by adolescents, which include (a) identity development, (b) the transformation of family relationships, (c) the development of close relationships with peers, (d) the development of sexuality, and (e) scholastic achievement and career planning. These tasks involve not only the individual, but also the systems in which she exists (i.e. family, peer group, and school). No doubt the accomplishment of one of these tasks impacts the others, in both positive and negative ways. This paper explores the main insights and issues raised by research in behavioral and educational psychology about the risky behaviors of adolescents, and how to obtain this information through methods and tools that are non-threatening for adolescents. How do researchers, specifically evaluation researchers of programs for youth, identify the prevailing risky behaviors (sexual and non-sexual) among adolescents? How do they gain a deeper understanding of the factors that influence risky behavior among adolescents? What methods are used, and are they empowering for adolescents?


Improving young people's health and well-being is a critical goal in and of itself, with long-term benefits to society as a whole. In particular, the decisions these young people make regarding their lives will make today's youth the critical cohort in determining the future of the world population for years to come. Hence, studies are imperative that not only enable planners and stakeholders to meet the changing and continuing needs of adolescents, especially in relation to risky behaviors, but whose tools are also empowering for the people being studied.


O 129

Participatory monitoring and evaluation in complex communication for development programs: A critical view from Nepal
B. K. Koirala 1, J. Lennie 2, J. Tacchi 2
1 Equal Access Nepal, Monitoring and Evaluation section, Kathmandu, Nepal
2 RMIT University, School of Media and Communication, Melbourne, Australia


This paper critically examines the strengths and limitations of participatory monitoring and evaluation (PM&E) methodology in assessing the impacts of development communication programs made by Equal Access Nepal (EAN). PM&E is used by EAN to more rigorously assess the impacts of its radio programs and to develop more useful and realistic indicators of social change. At the same time, it uses this methodology to undertake research that will improve its programs and outreach work. The methodology, which was based on Ethnographic Action Research (http://ear.findingavoice.org), was developed as part of the Assessing Communication for Social Change (AC4SC) project. This project found that PM&E is useful for assessing social change impacts, including more positive attitudes towards politics, greater community involvement in democratic processes, and the development of life skills among youth. EAN created a community researcher (CR) network to provide continuous feedback that contributes to program improvement and development. This network provides valuable information about local community contexts and issues, and qualitative data on program impacts. EAN is using the Most Significant Change (MSC) technique as a major methodology, triangulated with data from mixed methods such as group discussions, interviews, observations, monitoring reports, and surveys. EAN has also used technologies such as SMS polls and text messages in its M&E work to obtain direct and rapid feedback from listeners. In addition, communicative ecology mapping has been used to understand the complexity of communication flows and access to information in local communities, which highlights inequalities among people. An ongoing meta-evaluation of the AC4SC project highlighted some limitations of PM&E in the development context. A key issue was the conflict between donor requirements for evaluations based on accountability and the focus on improvement and learning in a PM&E approach. PM&E was often a time- and resource-intensive process that produced complex challenges and issues. However, it was effective in encouraging an organizational transition to PM&E systems based on regular communication and feedback loops between M&E staff and other stakeholders. The limited education and variable motivation of the CRs also affected data quality. Although increasing access to new communication technologies has enabled listeners to provide rapid feedback on EAN's programs, high levels of poverty and lack of access in rural areas have prevented more widespread use of these technologies. Lack of effective collaboration and communication with a wide range of stakeholders is a further barrier to achieving shared social change goals. Our research demonstrates the value of EAN's various approaches to establishing PM&E as a system that can share program impacts and feedback for ongoing program improvement and learning. As Estrella and Gaventa (1998) suggest, PM&E is not just a matter of using participatory techniques within a conventional M&E setting; it is about radically rethinking who initiates and undertakes the process and who learns or benefits from the findings. Seeing EAN's implementation of PM&E as part of changing its organizational culture has encouraged continuous learning and improvement, greater appreciation of the value of evaluation, and greater ownership and utilization of its M&E results.
Keywords: Participatory monitoring and evaluation; Impact assessment; Development communication programs; Organizational learning; Nepal;

O 130

Using evaluation to improve participation in adult education


G. Ellis Ruano 1
1

Gellis Communications, Brussels, Belgium

Non-vocational adult learning does not have a strong profile at European or national level in terms of policy prioritisation and the resources allocated to it. However, adult learning can help address the challenges faced by the European Union, such as falling GDP, ageing populations, and increased migration and immigration. In the context of decreasing participation in adult learning (participation has continued to fall, from 9.8 % of the 25-64 year-old population in 2005 to only 9.1 % in 2010), the European Commission's Directorate-General for Education and Culture (DG EAC) commissioned Gellis to carry out an in-depth study on the effectiveness of strategies to raise awareness of, and motivation to participate in, adult learning. A challenge faced by DG EAC was the fragmentation of initiatives across Member States and different areas of education. It was recognised that DG EAC would be most effective by strengthening its position as a key driver of adult learning: acting as a thought leader in providing solutions, encouraging a trickle-down effect of information through stakeholders, and providing stakeholders with tools and means to be more effective in their outreach. The objectives of the study were therefore to: explore how to make adult learning more popular and more accessible for identified target groups, including potential adult learners, policy makers, education providers and social partners; analyse existing initiatives already carried out in terms of awareness raising, primarily at Member State level; and provide recommendations for future activities and propose which existing strategies should be used.


Research conducted included: 1. situational analysis across all European Union Member States, including research on target groups and on existing awareness-raising activities; 2. in-depth interviews with DG EAC staff and key external stakeholders; 3. an online survey disseminated to more than 1500 stakeholders; 4. segmentation and classification of stakeholders; and 5. best practice analysis of examples of existing communication and awareness-raising activities. The methodology for the best practice analysis was as follows: 1. identification of existing awareness-raising activities in the field; 2. development of criteria to rank each activity; 3. ranking and finalisation of the top 15 activities; 4. in-depth interviews with the team behind each best practice; and 5. development of in-depth case studies. This best practice analysis was the basis for the development of a European guide, Strategies for improving participation in and awareness of adult learning, which was the final deliverable of the project. The guide detailed: 1. the role of stakeholders in the field of adult education; 2. in-depth case studies on selected activities in the field of awareness raising of adult learning; and 3. a list of activities that could be built upon and executed by different stakeholders in the field of adult learning. The study was used as the main basis for a DG EAC conference discussion in February 2012, which developed concrete plans for implementing a new European Union resolution at European and national level to emphasise an Agenda for Adult Learning. Keywords: Communications; Adult education; European Commission; Europe wide; Strategy guide;
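The ranking step in this methodology (developing criteria, scoring each activity, keeping a top-15 shortlist) can be illustrated with a small weighted-scoring sketch; the criteria, weights and activity names below are assumptions for illustration, not those used in the study.

```python
# Minimal sketch of criteria-based ranking of awareness-raising activities.
# Criteria, weights and activities are assumed, not taken from the study.

CRITERIA = {"reach": 0.30, "innovation": 0.20,
            "transferability": 0.25, "evidence_of_impact": 0.25}

activities = {
    "Activity A": {"reach": 4, "innovation": 2, "transferability": 4, "evidence_of_impact": 3},
    "Activity B": {"reach": 2, "innovation": 4, "transferability": 3, "evidence_of_impact": 2},
    "Activity C": {"reach": 3, "innovation": 3, "transferability": 4, "evidence_of_impact": 3},
}

def weighted_score(scores):
    """Weighted sum of 0-4 criterion scores for one activity."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

ranked = sorted(activities, key=lambda a: weighted_score(activities[a]), reverse=True)
shortlist = ranked[:15]   # the study retained a top-15 shortlist
print(shortlist)
```

In a real exercise the criteria and weights would be agreed with stakeholders before any scoring takes place.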


S3-26 Strand 3

Panel

Credentialing in Canada: two years later


O 131

Credentialing in Canada: two years later


M. McGuire 1, K. Kuji-Shikatani 2
1 Cathexis Consulting, Toronto ON, Canada
2 Ministry of Education, Toronto ON, Canada


The Canadian Evaluation Society introduced credentialing to the world of evaluation, launching its Professional Designation Program in May 2010. We are now two years into the process, having faced a number of challenges as well as a few exciting moments. Martha McGuire, a credentialed evaluator and current president of the Canadian Evaluation Society, and Keiko Kuji-Shikatani, a credentialed evaluator and vice-president of the Professional Designation Program, will be the presenters. Both have experience in the classroom as well as having presented on numerous panels and at workshops. After giving a very brief history of the credentialing program, we propose to discuss the following areas: 1. benefits to the Canadian Evaluation Society (e.g. membership, profile, emphasis on professional development); 2. challenges faced and how they were addressed (technical, process, implications for professional development); 3. response from evaluators (numbers, value to them); 4. response from those hiring evaluators (wanting credentialed evaluators); 5. implications for professional development. We suggest a round table discussion that will allow attendees an opportunity to pose questions and engage in discussion. The format will be a brief presentation followed by facilitated discussion. We will prepare a slide deck that will be provided as a hand-out as well as used to make the presentation more dynamic. We will also make ourselves available throughout the conference to anyone who wants a more personal discussion regarding credentialing. This could include individuals considering the Canadian credential as well as other evaluation societies that are considering a credentialing program. Keywords: Credentialing; Professional designation;


S2-31 Strand 2

Panel

Performance management and evaluation: love at first sight or marriage of (in)convenience?
O 132

Performance management and evaluation. Love at first sight or marriage of (in)convenience?


S. Bohni Nielsen 1, P. de Lancer Julnes 2, H. P. Hatry 3, R. Lahey 4, A. Johnsen 5


1 Ramboll Management Consulting, Copenhagen, Denmark
2 University of Baltimore, School of Public & International Affairs, Baltimore, USA
3 Urban Institute, Washington DC, USA
4 REL Solutions Inc, Ottawa, Canada
5 Oslo University College, Faculty of Social Science, Oslo, Norway

This panel session will address an ongoing discussion in the field of evaluation: the fact that complementarity between evaluation and performance management makes theoretical sense, but has failed to inform practice to a significant extent. It is worth noting that some observers within the evaluation community have even talked about estrangement between practitioners in evaluation and performance management (Blalock, 1999). Indeed, some evaluators have shown considerable skepticism towards performance measurement and management altogether (e.g. Davies, 1999; Greene, 1999; Perrin, 1998; van Thiel & Leeuw, 2002). Yet others have argued that evaluators should and must engage in both evaluation and performance management practices (Bohni Nielsen & Ejler, 2008; Hunter, 2006; Newcomer & Scheirer, 2001; Mayne & Rist, 2006). And over the past few years there has been a growing acceptance that the two forms of knowledge production are complementary (de Lancer Julnes, 2006; Kusek & Rist, 2004; Office of the Auditor General, 2000; Stame, 2006). Nonetheless, so far little attention has been paid to specifying the ways in which complementarity could be put into practice. Rist (2006) proposed three forms of complementarity: informational, sequential, and organizational. Bohni Nielsen & Ejler (2008) proposed that there is also methodological complementarity. Newcomer & Scheirer (2001) have also proposed various evaluation tools and processes which may usefully be adopted in a performance management framework. These conceptualizations suggest ways of looking at how organizations do, or do not, use monitoring and evaluation data in a coherent system. This panel, comprised of thought leaders from Europe and North America, will explore this complementarity further from a conceptual point of view and make ample use of empirical examples from Europe, Canada, and the US. Panelists Steffen Bohni Nielsen, Patria de Lancer Julnes, Harry Hatry, Åge Johnsen and Robert Lahey hold extensive practical experience with performance management and evaluation and have all published widely in peer-reviewed journals and books. They will all be contributors to an upcoming issue of New Directions for Evaluation (scheduled for publication in spring 2013, with Steffen Bohni Nielsen as editor) on the same topic. Steffen Bohni Nielsen will chair the session and outline forms of complementarity. Patria de Lancer Julnes will outline complementarities drawing on American cases from federal, state, regional and local levels. Harry Hatry will outline complementarities drawing on American cases from federal, state, regional and local levels. Robert Lahey will outline complementarities drawing on Canadian cases from the federal level. Åge Johnsen will outline complementarities drawing on Norwegian cases from state and local levels. Keywords: Performance management; Results-based management; Performance measurement; Performance monitoring;


S5-16 Strand 5

Paper session

The role of evaluation in civil society II


O 133

Fear or safety? The short and long term impacts of community policing on perceptions of crime
I. Ramage 1, K. Ramage 1, K. Nilsen 1, P. A. Lao 1, J. P. Nicewinter 1
1

Domrei Research and Consulting, Phnom Penh, Cambodia


Prevention and policing of crime at the local level has become a cornerstone of criminal justice policies in many countries. Working closely with communities affected by crime, community policing is rooted in the belief that local problems are best solved by local solutions. Perceived benefits of community policing include a drop in local crime, decreased fear of crime, improved relations between communities and the police, and increased local autonomy and community participation. Whereas findings from previous studies suggest the effect of community policing on crime is mixed, the effect of community policing on the perception of crime is positive. These are important findings, as fear of crime can be a considerable problem in itself: it limits activity, keeps residents in their homes, and contributes to empty streets, which can in turn lead to more crime. Since 2007, the Royal Government of Cambodia, with support from AusAID, has implemented a community policing initiative in three provinces to improve safety by focusing on decentralization and synchronization of government authorities, and by encouraging civil society dialogue and engagement to improve policing in the community. To measure the effect of this community policing project on perceptions and fear of crime, cross-sectional surveys with an impact evaluation design were conducted every year between 2007 and 2011. Each survey included 1200 randomly selected households from areas subject to community policing interventions (treatment) and areas with no interventions (control). Limited government data on reported crime was also collected to compare trends. We analyzed results on the perception of crime for 2007-2009 and 2007-2011 to assess whether perceptions about crime changed between the short and the long term. We find that community policing had a positive effect on the perception of crime in the short term (2007-2009) and little effect in the longer term (2007-2011). We therefore propose that community policing may have an immediate positive effect on perceptions about crime shortly after it has been introduced, perhaps due to a high level of sensitization and enthusiasm for the new initiative in the community, but that this enthusiasm wanes over the long term, perhaps because expected results fail to materialize (comparing actual crime rates across treatment and control areas), leaving only a heightened sensitization to crime. Recognizing the limitations of the study design, we propose that initiatives such as community policing need to be measured in both the short and the long term to fully assess impact.
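To illustrate the kind of comparison such a repeated cross-section treatment/control design implies, a minimal sketch in Python (with assumed variable names and toy response data, not the Cambodian survey data) might look like this:

```python
# Sketch: comparing changes in perceived safety between treatment
# (community policing) and control areas over two horizons.
# Data layout and figures are assumed for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def did(treat_before, treat_after, control_before, control_after):
    """Difference-in-differences on the share reporting feeling safe."""
    return (mean(treat_after) - mean(treat_before)) - \
           (mean(control_after) - mean(control_before))

# 1 = respondent reports feeling safe in the village, 0 = does not.
treatment_2007 = [0, 1, 0, 0, 1, 0, 1, 0]
treatment_2009 = [1, 1, 1, 0, 1, 1, 1, 0]
treatment_2011 = [0, 1, 1, 0, 1, 0, 1, 0]
control_2007   = [0, 1, 0, 0, 1, 0, 0, 0]
control_2009   = [0, 1, 0, 1, 1, 0, 0, 0]
control_2011   = [0, 1, 0, 1, 1, 0, 1, 0]

print("Short-term effect (2007-2009):",
      did(treatment_2007, treatment_2009, control_2007, control_2009))
print("Long-term effect  (2007-2011):",
      did(treatment_2007, treatment_2011, control_2007, control_2011))
```

A published analysis would of course also need survey weights, standard errors and covariate adjustment; the sketch only shows the basic short-term versus long-term contrast.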

O 135

Statutory Reporting for NGOs: Seeing a Gap, Setting a Standard, Saving a Sector
E. Goetsch 1
1

Centre for Social Impact, Johannesburg, Republic of South Africa

Society relies on auditors to track organisations that make promises to donors when getting their money and owe delivery to communities when spending the money. A neglected but promising area of M&E in every country is the annual return that legislation requires of NGOs. Handled correctly, setting and owning a new standard that showcases the social value of M&E offers the national association more income, higher standing and wider reach. This paper describes the new standard and its adoption in South Africa and Africa. NGO transparency and accountability in South Africa is, as elsewhere, a mix of private and public reporting. Private reports are in-depth but ad hoc and arbitrary in their timing and approach; they are owned by donors, who focus on their projects. Public reports consist of the annual return required by legislation, with a defined format focusing on the organisation's finances and a narrative section for everything else. Between them the public should see the truth, the whole truth and nothing but the truth. In practice, donors have little with which to make their next funding decisions. The previous format asked too few questions to indicate which NGOs are successful and honest, and few donors read the reports anyway. Though South Africa has some 100 000 registered and 70 000 active NGOs, there is little money in the annual return market. Practitioners follow the money, and donors fund ad hoc project audits. This mix of regular but unfunded and superficial public reporting and deeper but narrow and proprietary private reporting has unintended consequences: practitioners experience income uncertainty in a winners-take-all environment, information in the public domain is scrappy and self-serving, donors lack the information to choose upfront between projects, and NGOs can use M&E for marketing only, avoid M&E where possible, and manipulate its terms. Income insecurity tempts practitioners into a fawning dependence on clients at the risk of their independence and integrity, into putting competition for clients before cooperation over quality, and into seeking public office to get rather than give. The impact on NPOs is negative: honesty is not rewarded, non-delivery is not punished and disillusioned donors exit the sector. A revised standard for returns that asks the searching questions donors universally ask corrects this. The paper suggests a format that gives an overall score to the NGO and quantifies its objectives, capacity, sustainability, impact and productivity. It also tracks the entire cash flow. The rationale and formulae are opened for debate.


The paper shows the 10 steps for national adoption: the legislation establishing the national association as the certification and training authority, the code of conduct for donors and NGOs, and the spreadsheets for auditors. It also offers the Audit Levy, which solves the sustainability problem of the national association. It outlines how social networking, as a technology and a community, can supply NGOs (in Africa) with online mentors, trainers and advisers (in Europe) and help them satisfy the audit standard and improve their sustainability and impact. The project is underway in South Africa with SAMEA and in Africa with AfrEA.


Keywords: Statutory reporting; Standard; Reporting technology; National association; Industry promotion;
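As a rough illustration of the kind of composite scoring such a return format implies, the sketch below combines component scores for objectives, capacity, sustainability, impact and productivity into one overall score; the weights and figures are assumptions for illustration, not the standard proposed in the paper.

```python
# Sketch: composite NGO score from component scores on a 0-100 scale.
# Weights and example scores are assumptions, not the proposed standard.

WEIGHTS = {
    "objectives": 0.15, "capacity": 0.15, "sustainability": 0.20,
    "impact": 0.30, "productivity": 0.20,
}

def overall_score(component_scores):
    """Weighted average of 0-100 component scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * component_scores[k] for k in WEIGHTS)

example_ngo = {"objectives": 70, "capacity": 55, "sustainability": 60,
               "impact": 65, "productivity": 80}
print("Overall score:", overall_score(example_ngo))
```

The interesting debate the paper invites is precisely over which components and weights such a formula should use.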

O 136

Understanding the influence of independent civil society monitoring at the district level: a case study of Ghana
M. Gildemyn 1


Institute of Development Policy and Management, Antwerp, Belgium

In the past decade an increasing number of civil society organizations (CSOs) have engaged in independent monitoring and evaluation (M&E) of government programs and policies. In developing countries this type of independent M&E initiative mainly emerged within the context of the aid reform agenda, but more recently such initiatives have been proliferating under the banner of social accountability or transparency and accountability initiatives. Most CSOs rely on a range of monitoring tools, such as community score cards and public expenditure tracking surveys, among others, to monitor and assess certain programs and policies and to hold government officials accountable. In addition, CSOs are complementing the monitoring with advocacy and communication strategies to increase the impact of their work. Some large-scale studies (for example McGee & Gaventa, 2010) have attempted to evaluate the outcomes and impacts of such initiatives and to identify contributing factors. However, little is known about the underlying mechanisms through which the initiatives achieve their desired outcomes. The current paper aims to shed light on some of these underlying mechanisms by focusing on the work of the Ghanaian non-governmental organization SEND-Ghana. For almost a decade, SEND-Ghana has been monitoring several pro-poor government programs and policies, such as the National Health Insurance, which is taken as an example in the current study. To carry out the monitoring, SEND-Ghana relies on a decentralized network of CSOs and citizen monitoring committees in each district, which together constitute the participatory monitoring and evaluation (PM&E) network. This network structure contributes in a crucial way to the credibility and legitimacy of SEND-Ghana as an independent monitor. The research uses a case-study design and combines multiple data sources (interviews, participant observation and documents) gathered during recent fieldwork in Ghana. For the analysis of the data, a previously developed theoretical framework (Gildemyn, 2011) is used that draws on Mark & Henry's (2004) theory of evaluation use and influence. The framework makes it possible to uncover some of the underlying influence mechanisms that explain the outcomes of SEND-Ghana's monitoring of the National Health Insurance at the district level. These mechanisms are not only triggered by the M&E findings, but also by the network structure through which SEND-Ghana operates. In addition, the different dialogue spaces that form an integral part of SEND-Ghana's monitoring activities initiate and further enhance some of the underlying mechanisms that will be discussed in the paper. Keywords: Civil Society Organizations; Monitoring and evaluation use and influence; Ghana;


S2-16 Strand 2

Paper session

Innovative methodologies in development evaluation


O 137

Use of case-studies in international development evaluation


P. Julie 1, E. Sirtori 1, V. Silvia 1
1

CSIL Centre for Industrial Studies, Development and Evaluation Unit, Milano, Italy


Ex-post evaluation of investment projects is approached by international and national organisations in different ways. The World Bank regularly collects data and indicators about the performance of its portfolio of completed projects, and publishes an annual independent evaluation report. In the World Bank approach, evaluators give scores to some dimensions of performance and discuss regularities by cross-checking some project characteristics. Some econometrics have also been tried, based on standard CBA indicators. Some time ago, the World Bank itself had the opportunity to learn from a completely different approach, presented by Albert Hirschman in his influential book Development Projects Observed. In this book, Hirschman studied eleven case studies of World Bank projects. Rather than verifying whether history would match forecast, he purposely looked for interesting deviations from expectations. An example of Hirschman's approach is the study of the behavioural response of management and institutions to unexpected shocks, in order to determine the degree of resilience of a given project. Recently, the European Commission DG Regional Policy has launched an ex-post evaluation that aims at learning lessons from in-depth case studies of a small number of major projects approved in the 1994-1999 programming period. This evaluation study was an opportunity to test an innovative methodology that combines a quantitative assessment, based on a new interpretation of ex-post Cost-Benefit Analysis, with a qualitative evaluation of response mechanisms to shocks along different impact dimensions. These impact dimensions include a direct growth effect, shifts in endogenous economic dynamics related for example to increased human capital, changes in institutional quality, social and territorial cohesion, effects on environmental sustainability and, lastly, social happiness. Examples of the factors that may explain project performance include the project's appropriateness to the context, the forecasting capacity of investors and promoters, the governance structure, the project design and the behavioural response to unexpected events. Each of the case studies is structured as a project history, where the core of the exercise is an attempt to assess how the project is now able to respond to future challenges, based on how it has evolved over the last twenty years. The peculiarity of this evaluation approach, which draws on Hirschman's work, lies in the possibility of disentangling the mechanics of development by adopting the subtle evolutionary perspective that a structured case history can offer. Given the variety of contexts addressed, it is expected that the ten case studies selected in the transport and environmental sectors in Greece, Italy, Ireland, Spain and Portugal will reveal how robust this new approach is. Keywords: Case studies; Evaluation; Development;
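For the quantitative side, the ex-post CBA indicators mentioned here can be recomputed from an observed net-benefit stream; the following sketch (with an assumed discount rate and invented cash flows, not figures from the case studies) shows the basic calculation of a net present value and an internal rate of return:

```python
# Sketch: ex-post CBA indicators from an observed net-benefit stream.
# Cash flows and discount rate are assumed for illustration only.

def npv(rate, flows):
    """Net present value of a stream of net benefits, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Year 0 investment followed by observed annual net benefits (EUR million).
net_benefits = [-120, 8, 12, 15, 18, 20, 22, 22, 21, 20, 19]

print("NPV at 5% discount rate:", round(npv(0.05, net_benefits), 1))
print("Ex-post rate of return:", round(irr(net_benefits), 3))
```

The qualitative evaluation of response mechanisms described in the abstract would then sit alongside such indicators rather than replace them.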

O 138

Evaluating influencing efforts in the International Development sector: challenges and opportunities for bi-lateral agencies
B. Dillon 1
1

DFID, Evaluation, London, United Kingdom

For the international development sector, evaluating the dimension of influence in diverse development efforts such as campaigning, diplomacy, and demonstration is a relatively new focus. The recent impetus for evaluation in this area stems from an ever more competitive global environment for development co-operation, combined with prolonged fiscal constraint in the West and consequent greater public demand for transparency, value for money and results. This has re-focused donor attention on the importance of influence as a key instrument for delivering development outcomes. Furthermore, advances in communications technology now provide readily available, affordable tools which are revolutionizing measurement capability. These are opening up new possibilities for the collection of objectively verifiable data on the process and progress of influencing efforts. These changes are increasing pressure on the international development sector to produce methodologically robust, credible evaluations. In turn, these will build an evidence base to demonstrate which influencing efforts have been successful, why, where and in what way. To date the sector has responded with research and guidance for practitioners; however, this remains a challenging area for international development operations. This paper argues that whilst there are technical challenges, the evaluation of influencing efforts offers opportunities for improving the impact of development interventions, and is increasingly becoming a necessary part of the development effectiveness toolkit. The development sector should strengthen its capacity in this area, drawing on learning from sectors with longer experience in this field of measurement. The methodology includes analysis of the place of evaluating influence in the current global context of development co-operation, a brief review of theories underpinning this type of evaluation, and learning from a range of sectors. It also includes analysis of the key technical features which distinguish the evaluation of influencing efforts from other sorts of evaluation, and the particular challenges these pose for a bi-lateral aid agency embracing this type of evaluation. This is demonstrated with reference to DFID. The paper draws on developmental evaluation theory (Patton 2011), complexity theory (Snowden and Boone 2007), public relations (Grunig 2001; Sheldrake 2011), practitioner toolkits (e.g. UNICEF 2011, BOND 2011) and research on policy influence (e.g. ODI RAPID, 3ie).


This topic responds directly to the conference theme, as the networked society has significantly changed the way influence is transmitted, and advances in communications technology present both new challenges and new solutions to the monitoring and evaluation of influencing efforts. It is a live and growing topic of interest to development practitioners, academics and evaluators. Whilst the evaluation of influence is often studied in the world of public relations and marketing, this paper contributes to building knowledge about evaluating influence from a relatively new perspective: the international development sector.


Keywords: Evaluation; International development; Influencing efforts; Challenges; Opportunities;

O 139

Challenges in evaluating budget support: A review of existing studies


G. Dijkstra 1, A. De Kemp 2
1 Erasmus University Rotterdam, Public Administration, Rotterdam, Netherlands
2 Ministry of Foreign Affairs, Policy and Operations Evaluation Department, The Hague, Netherlands


Rigorous impact evaluations have increasingly become the standard for assessing aid and development effectiveness. RCTs and quasi-experimental designs are plausible evaluation methods for aid projects or specific development interventions. At the same time, however, aid practices evolved in the direction of the joint provision of non-earmarked funds to governments with the aim of promoting development at the national level: budget support. Against this background, the aims of the paper are twofold. First, to outline the challenges that evaluating budget support entails as compared to evaluations of other aid modalities, and second to review and assess the extent to which existing evaluations and studies of budget support have addressed these challenges. Evaluating budget support entails at least four challenges. A first problem is to define the appropriate counterfactual: is it a situation without budget support or one in which donors provide other modalities? Second, and given that the aims of budget support are defined at country level, the need for a rigorous counterfactual is difficult to meet; cross-country quantitative analysis could help but suffers from several practical problems: lack of reliable data, smallness of the intervention relative to other factors that influence the outcome variable(s), small number and extensive heterogeneity of the group of countries that received budget support. Third, the intervention theory of budget support is not unambiguous. While evaluations of earlier modalities of programme aid took into account that programme aid has two inputs, money and the policy dialogue, budget support not only has these same two inputs but also, and to an increasing extent, two objectives: reducing poverty and improving governance. Fourth, budget support is usually provided as a joint effort of several donors while these donors at the same time often maintain their own procedures and priorities. It will be shown that although the first of these challenges is not entirely new because it was already an issue in evaluations of other forms of programme aid, the second and third of these challenges are new and are clearly related to specific aspects of the network society. The main part of the paper assesses existing studies and evaluations of budget support on how they have dealt with these challenges: i) which counterfactual did they apply? ii) did they apply a rigorous counterfactual for assessing impact? iii) did they take into account that budget support not only has two inputs but also two objectives and did they assess possible trade-offs between these objectives? iv) did they take into account the effects of the harmonization or lack of harmonization of donors on outcomes and impact of budget support? The studies reviewed include both country case studies and cross-country quantitative studies. The paper concludes that although a lot can be learned from these studies, most studies fall short on at least one of these criteria, and some on more than one. The last part of the paper discusses alternative methodologies for evaluating budget support, suggesting that a combination of quantitative and qualitative methods is most promising. Keywords: Evaluation; Budget support; Methodology;


S2-22 Strand 2

Paper session

New or improved evaluation approaches I


O 140

Meta analysis within the context of development evaluation practices


O. Varela 1
1

World Vision International Global Center, Global Knowledge Management, Panama, Republic of Panama


Systematic information gathering, as part of evaluation initiatives assessing long-term development programmes or humanitarian interventions in multiple less developed countries, constitutes a generally acknowledged challenge. These initiatives are further challenged when international development and humanitarian organizations try to develop global impact measurement systems for measuring contributions to highly desirable, yet complex and culturally diverse, societal conditions such as children's well-being. A high degree of standardization in systems and processes is commonly advised in order to build and run global impact measurement of child well-being related interventions. The Millennium Declaration and its derived Millennium Development Goals created new resources and favorable conditions for evaluation research, methods, and practices. During the last decade, new opportunities emerged to take advantage of a promising new information environment and evolving global social networks to help assess long-term development programmes and humanitarian interventions. As a result, important progress has been made on the development of objectively verifiable indicators of child well-being. Long-term data collection by United Nations agencies, the World Bank, and many others has produced comprehensive data sets and general agreement on how key well-being outcomes could be measured. Furthermore, international NGOs such as World Vision have implemented processes for flexible indicator selection within a broad global framework of child well-being, using a compendium of indicators organized by child well-being outcomes. In a multi-country and multi-layered organization such as World Vision, developing a standardized process is complex, yet desirable, in order to report effectively on contributions to the well-being of children at the global level. The compendium is being used by diverse programmes across World Vision's partnership to support the development of an evidence base for measuring the organization's contribution to the well-being of children. The paper elaborates on how these developments have created favorable conditions for meta-analysis to strengthen evaluation methods and practices. The paper provides insights into how the questions addressed by meta-analysis, within the context of development evaluation practices, have grown more complex and how its techniques have evolved, offering potential support to global impact measurement systems of child well-being through a reasonable degree of standardization and flexibility. The author argues that, because of the wide variation in focus and scope of program evaluations within World Vision, meta-analysis summaries will better identify the prevalence of certain effects (such as child well-being targets) and the strength of relationships between those effects and certain explanatory variables (such as DME quality). Keywords: Meta-analysis; Evaluation; Methods; Techniques;
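To make the meta-analytic step concrete, a minimal sketch of pooling programme-level effect sizes with a random-effects (DerSimonian-Laird) model is shown below; the effect sizes and variances are invented for illustration and are not World Vision data.

```python
# Sketch: random-effects pooling of programme-level effect sizes
# (DerSimonian-Laird). Effect sizes and variances are invented.

def dersimonian_laird(effects, variances):
    """Return (pooled effect, tau^2) for a random-effects meta-analysis."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Standardised mean differences from hypothetical programme evaluations
# of a child well-being outcome, with their sampling variances.
effects   = [0.05, 0.45, 0.10, 0.38, 0.20]
variances = [0.010, 0.020, 0.015, 0.012, 0.025]

pooled, tau2 = dersimonian_laird(effects, variances)
print("Pooled effect:", round(pooled, 3), "| between-study variance:", round(tau2, 4))
```

Relating effect sizes to explanatory variables such as DME quality would then be a meta-regression built on the same weighting logic.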

O 141

Workers' experience of presence as a tool for evaluation: a case example from a child welfare organization
V. Koskela 1
1

Lappeenranta University of Technology, Lahti School of Innovation, Lahti, Finland

An important part of interaction in an organization is the worker's personal ability to be present: with a workmate, with a client, or in her or his own work. Unfortunately, presence is not always there. The stress of everyday life undermines the ability to be present and to concentrate on the moment: people hurry to already be in the next moment, the next place, the next project and the next holiday. At the same time, as society urges workers to be effective, competent and ready for change, it surrounds them with a busy mind that makes them lose their way to being present in the moment. This is a paradox that needs to be solved one way or another. This article extends debates on how consciousness of presence can be encouraged as a means of evaluating one's own work. The study focuses on experiences of presence in one public sector organization in Finland. How do workers describe their individual experiences of presence at work? How do these experiences vary from each other, and what are they like? Do these experiences of presence (EP) have something to do with innovativeness, change and the development of the organization? And, above all, is it possible for workers to use these experiences as key tools for evaluating their own work? This paper is a case description of a one-year project of presence workshops in a child welfare organization in Finland: how the workshops were run, what happened there, and what was learnt about the experiences. Workers' individual capacity to be present in the moment (concentrating only on the present moment and its multidimensional dynamics) has not been researched much, if at all. For one reason or another, little attention has been paid to how workers observe and understand their capability to be present in their work during the day. Experiences of presence belong to the field of tacit knowledge, which, because of its invisible character, is quite a challenging field to study. Otto Scharmer's Theory U (Theory U: Leading from the Future as It Emerges. The Social Technology of Presencing, 2009) is one of the earlier organizational management publications that has pointed out the meaning of the experience of presence as a learning factor. That is why Scharmer's concept of presence is one of the main theoretical paths of this article. Methodologically, the paper is based on a phenomenological case study. Workers of the case organization describe, through different creative methods, what their state of presence is like and how they could use their experiences as important tools for observing, changing, developing and evaluating their work. Keywords: Experience of presence; Innovativeness; Creative method; Tacit knowledge;

O 142

Improving Evaluation With Old Technology: Evidence for the Validity of The Rapid Assessment of Teacher Effectiveness (RATE)
J. Gargani 1, M. Strong 2
1 Gargani + Company, Berkeley, USA
2 UC Santa Cruz, Santa Cruz, USA

Teacher evaluation is an important and controversial subject in the US: there is a growing sense that it is needed, yet little agreement about how to do it. Recent large-scale efforts have used technology to improve the evaluation of teachers. For example, the Gates Foundation invested $45 million in the Measures of Effective Teaching (MET) Study to validate existing observational and survey measures of teacher effectiveness. They employed new technologies, such as specialized 360-degree cameras that push video of classroom instruction directly onto the web, and new web-based applications for training, calibrating, and monitoring the raters who scored the videos. High-tech solutions are exciting, but do they improve teacher evaluation and, through its use, classroom instruction? We report on our ongoing work to develop and validate a new, simple observational measure, the Rapid Assessment of Teacher Effectiveness (RATE). It is designed to predict the extent to which elementary school teachers will promote math achievement in their current classes. Educators can employ RATE early in the school year and use the resulting scores to provide support and allocate resources to the teachers who need them the most. RATE depends on technology, but much of it is old, inexpensive, and undeniably unsexy. This is an often overlooked design strategy, and one that we believe will help RATE be effective. In our presentation, we discuss the need for RATE, the manner in which it has been developed, the role technology plays, new empirical findings of RATE's effectiveness, cross-cultural studies that corroborate our findings, and how our results compare with those of the MET study. Some of the evidence we present comes from a recent article by the presenters published in the Journal of Teacher Education (2011, Vol. 62, No. 4), and some comes from newly completed experiments. Keywords: Education; Teacher Evaluation; MET Study; The RATE Project; Technology;
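The predictive use described here, scoring teachers early in the year and checking how well those scores anticipate later class achievement, boils down to a simple predictive-validity correlation; the sketch below illustrates the computation with invented scores and gains (not RATE or MET data):

```python
# Sketch: predictive-validity check as a Pearson correlation between
# early-year observational scores and end-of-year class math gains.
# All figures are invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

early_scores = [2.1, 3.4, 2.8, 3.9, 1.7, 3.1, 2.5, 3.6]        # observational ratings
math_gains   = [0.05, 0.21, 0.12, 0.25, 0.02, 0.18, 0.09, 0.20]  # class-level gains

print("Predictive correlation:", round(pearson(early_scores, math_gains), 2))
```

A validation study would add uncertainty estimates and, ideally, out-of-sample prediction, but the core question is whether the early scores carry useful information about later outcomes.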


O 143

Developmental evaluation in Education: Mission Impossible?


A. Borek 1, T. Kasprzak 2
1 Jagiellonian University, Kraków, Poland
2 Jagiellonian University and Educational Research Institute, Kraków and Warsaw, Poland

Researching quality is a challenge for every education system, and in many countries evaluation is used to this end. However, can evaluation not only measure, but also strengthen existing quality or create new quality? We would like to focus on the process and results of implementing internal and external evaluation in schools and institutions across Poland, within the framework of the pedagogical supervision reform. This process has a legal basis. The reform aims at creating a supervision system which genuinely supports school development and makes it easier to run educational policy in Poland. The project has been carried out since 2009, and by 2015, 800 evaluators, 23 600 principals of schools and other educational institutions, and 3 000 teachers will have been prepared for the evaluation process. Considering the scale of the reform, its systemic nature, the use of a standardized research approach across the whole country, and especially the introduction of the Internet platform through which external evaluation is carried out, it is possible to draw conclusions valid for the whole country (the platform now contains responses from 100 000 students, 60 000 parents and 50 000 teachers). Helen Simons stressed that evaluation is an invitation to development. Evaluation in the Polish system of education was designed as developmental evaluation. From our perspective (we are both co-authors of the system of external evaluation, and we prepare teachers and school principals for internal evaluation), it is important to what degree evaluation meets its assumed goal, which is the development of individual schools and of the whole system. Development should proceed through taking decisions based on data, but also through strengthening democracy and the empowerment of schools, teachers, students, parents and other members of the school community. At the level of evaluation guidelines, democracy and empowerment are pursued through, among other things, the selection of people taking part in the research (all the actors of school life), the procedures and tools (which are transparent), the preparation of evaluators and of the schools themselves for evaluation, discussions on the results of evaluation, and the availability of evaluation reports (on the publicly accessible project webpage). After three years of experience in the systemic introduction of evaluation, three questions arise: Who accepts the invitation to development, who does not, and why? To what extent does evaluation really strengthen democracy in Polish schools? How does it affect processes of empowerment of the school community (students, parents, teachers)? These questions are important because there is a risk that, on a nationwide scale, the system of evaluation will concentrate on theatrical performances or passive adaptation, the reproduction of certain patterns of behaviour, unreflective adjustments by actors, and the assimilation of terms which are meaningless to users, rather than on true development. These considerations will be illustrated by the results of the first three thousand four hundred evaluations carried out in Polish schools and institutions, by the results of meta-evaluation, and by the results of ex-post evaluation. Agnieszka Borek and Tomasz Kasprzak are sociologists, evaluators, experts and coaches in the programme for improving pedagogical supervision in Poland, and authors and co-authors of many evaluation projects in education. Keywords: External and internal evaluation; Development; Empowerment;


S2-20 Strand 2

Paper session

Meta evaluation
O 144

A methodological framework for Meta-evaluation of evaluations of local climate change adaptation initiatives in Senegal
M. Lomena-Gelis 1
1

Polytechnic University of Catalonia, UNESCO Chair of Sustainability, Dakar/Barcelona, Senegal. Lomgelis2@gmail.com; thesemetaevaluationsn@gmail.com

Thursday, 4 October 2012, 9:30–11:00

This article is embedded in the author's PhD research, which will contribute to the theory and practice of evaluation in Senegal through the meta-evaluation of evaluations of local climate change adaptation initiatives. Building on an earlier analysis of evaluation practice in Senegal, presented at the 6th AfrEA Conference in January 2012, this paper establishes the methodological framework to analyze the conception, process, results and utilization of this type of evaluation and presents a preliminary version of a tailored meta-evaluation checklist. Meta-evaluation (hereafter MEv) is commonly defined as the evaluation of evaluations. Its focus is how evaluations are done, not just their results or findings. More than forty years after Michael Scriven coined the term, few investigations have evaluations as their main object of study, and there is still confusion with terms like meta-analysis, synthesis of evaluation results and systematic review. Meta-evaluation can be applied to individual evaluations, to a set of them, and even to a whole evaluation system in certain circumstances. MEv has been extensively used to foster the improvement of the quality of individual evaluations, frequently focusing on their methodological rigour and the robustness of the evidence. This paper explores another use of MEv: the MEv of a set of evaluations, which can guide the management of the evaluation function and practice within an institution or in a substantive policy sector. The article to be presented at the 10th EES Conference outlines the adaptation of the MEv methodology to real evaluations conducted over the past ten years in order to explore the evaluation practice of the local climate change adaptation sector in Senegal. This will help clarify the evaluation function in this policy sector. The paper starts by introducing the concept and describing the methodology used to elaborate the MEv checklist proposed for this research. Afterwards, some frequently misunderstood concepts are presented in order to better distinguish meta-evaluation. Different types of MEv are explained, along with some practicalities of the recommended procedures for MEv, emphasizing the type chosen for the research: a summative, ex-post, external MEv of the conception, process, results and utilization of evaluations. In order to craft a MEv checklist for evaluations of local climate change adaptation initiatives in Senegal, the article explores the grey literature on MEv in the field of international development aid and the standards for MEv or for evaluation quality assessment proposed in key academic articles. More than 20 meta-evaluative exercises of development aid covering the past ten years are analyzed, capturing their objectives, the standards used and their hypotheses. The standards for MEv commonly recommended by the literature and the major evaluation associations are then summarized. Finally, these two streams of information are used to tailor a MEv checklist, bearing in mind the epistemological perspective of MEv endorsed by the article and the context of the Senegalese evaluation system and practice explored in the author's earlier article. A preliminary version of the checklist, including sources of information and guiding meta-evaluation questions, is proposed as a conclusion. Keywords: Meta-evaluation; Local climate change adaptation evaluation; Evaluation practice; Senegal;
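For illustration only (this is not the author's checklist): a minimal sketch of how a tailored meta-evaluation checklist of the kind described above might be represented and applied, organised by the four dimensions named in the abstract (conception, process, results, utilization). Every criterion text, guiding question, source and the 0–2 scoring scale is a hypothetical placeholder.

```python
# Illustrative sketch of a meta-evaluation (MEv) checklist. The four dimensions
# follow the abstract; all criteria, guiding questions, sources and the 0-2
# scoring scale are hypothetical placeholders, not the author's instrument.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    text: str                                     # what the meta-evaluator checks
    guiding_question: str                         # question asked of the evaluation
    sources: list = field(default_factory=list)   # where evidence is sought

CHECKLIST = {
    "conception": [
        Criterion("Evaluation purpose and intended users are stated",
                  "Does the ToR/report identify purpose and users?",
                  ["terms of reference", "report introduction"]),
    ],
    "process": [
        Criterion("Stakeholders were consulted during data collection",
                  "Which stakeholder groups were interviewed or surveyed?",
                  ["methodology section", "annexed interview lists"]),
    ],
    "results": [
        Criterion("Findings are supported by the evidence presented",
                  "Can each finding be traced to data in the report?",
                  ["findings chapter"]),
    ],
    "utilization": [
        Criterion("A management response or follow-up plan exists",
                  "Is there evidence the recommendations were used?",
                  ["management response", "follow-up interviews"]),
    ],
}

def score_evaluation(ratings):
    """Aggregate 0-2 ratings (0 = not met, 1 = partly, 2 = met) per dimension."""
    return {dim: sum(ratings.get((dim, i), 0) for i in range(len(crits))) /
                 (2 * len(crits))
            for dim, crits in CHECKLIST.items()}

# Example: one evaluation rated on the single placeholder criterion per dimension.
print(score_evaluation({("conception", 0): 2, ("process", 0): 1,
                        ("results", 0): 2, ("utilization", 0): 0}))
```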

O 145

Evaluation of international and non-governmental organisations' communication activities: a 15-year systematic review
G. O'Neil 1
1

Owl RE, Commugny, Switzerland

The purpose of this paper is to understand how intergovernmental organisations and international non-governmental organisations have evaluated their communication activities and to what extent they have adhered to principles of evaluation methodology over a 15-year period (1995–2010). 46 evaluation reports and nine guidelines from 26 organisations were coded on type of evaluation design and conformity with six methodology principles. Most evaluations were compliant with principle 1 (defining communication objectives), principle 2 (combining evaluation methods), principle 4 (focusing on outcomes over outputs) and principle 5 (evaluating for continued improvement). Compliance was lowest with principle 3 (using a rigorous design) and principle 6 (linking to organisational goals). Further, the review found that evaluation has not been widespread, undertaken rigorously or integrated as part of communication activities. Implications of these findings for evaluation design and process, the design of communication activities and the institutionalisation of evaluation are discussed. Keywords: Communication; Campaigns; Intergovernmental organisations; Non-governmental organisations; Evaluation methodology;
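As a hedged illustration of the kind of coding exercise described above (not the author's actual coding scheme or data): a short sketch that tallies, for a set of coded reports, the share complying with each of the six principles. The principle labels follow the abstract; the report records are invented.

```python
# Sketch of tallying compliance of coded evaluation reports against six
# methodology principles. The principle labels follow the abstract; the
# report records below are invented for illustration only.

PRINCIPLES = [
    "P1 defined communication objectives",
    "P2 combined evaluation methods",
    "P3 used a rigorous design",
    "P4 focused on outcomes over outputs",
    "P5 evaluated for continued improvement",
    "P6 linked to organisational goals",
]

# Each coded report records which principles it was judged to comply with.
coded_reports = [
    {"org": "IGO-A",  "complies": {"P1", "P2", "P4", "P5"}},
    {"org": "INGO-B", "complies": {"P1", "P4"}},
    {"org": "IGO-C",  "complies": {"P1", "P2", "P3", "P4", "P5", "P6"}},
]

def compliance_rates(reports, principles):
    """Return the proportion of reports complying with each principle."""
    n = len(reports)
    return {p: sum(p.split()[0] in r["complies"] for r in reports) / n
            for p in principles}

for principle, rate in compliance_rates(coded_reports, PRINCIPLES).items():
    print(f"{principle}: {rate:.0%}")
```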


O 146

The evaluation and meta-evaluation design of the Presidio di Qualità di Ateneo (PARQ)
A. Nuzzaci 1
1


University of L'Aquila, L'Aquila, Italy. Antonella Nuzzaci was responsible for the Presidio di Ateneo per la Qualità (PARQ) at the University of Valle d'Aosta and has been at the University of L'Aquila since 1 December 2011.

The purpose of this paper is to describe the process of quality analysis carried out when the Presidio di Ateneo per la Qualità (PARQ), a structure for internal quality control required by Italian law, was established at the University of Valle d'Aosta in April 2008, chaired by a teacher appointed by the Academic Senate. The contribution shows how the Presidio developed a coherent plan for the evaluation of teaching performance and student services, together with a process of annual review of the rules for conducting and promoting activities and for validating assessment tools. The PARQ's main function is to assess the quality and efficiency of teaching, developing initiatives and actions to promote an increasing convergence of results and of students' learning time with European standards. To do this, it also aims to promote within the University of Valle d'Aosta a better understanding of the goals and guidelines set by the Bologna Process, reflected in the revision of the laws required by Ministerial Decree 270/04 and its implementing decrees, and in that sense to undertake actions for monitoring and evaluating educational planning procedures and the timing of data collection, and to arrange the design and calibration of the instruments used. The unequal distribution of complexity across the Faculty in producing an acceptable evaluation of teaching and service performance led us to draw up a draft evaluation aimed at identifying significant descriptors and indicators that could account for these elements of complexity. This process is still ongoing. The evaluation committee reviewed the documents produced by different bodies at various levels, including those concerned with evaluation, which allowed the creation of a first evaluation model, of which this contribution gives an account. This required the adoption of a quality assurance model, which entailed the selection of dimensions and specific criteria aimed at the continuous improvement of teaching practices at different levels and at improving quality itself, attempting, as far as possible, to remove the obstacles that prevent good teaching and that often arise from misplaced priorities within procedural and institutional policy. The PARQ model, taking into account the size and characteristics of the University of Valle d'Aosta, ensures that the evaluation system is periodically and systematically reviewed for the positive or negative impacts it produces, guided by a process of re-evaluation of the system. It includes a meta-evaluation system as well as an evaluation system, consisting of a systematic review of evaluations to determine the quality of their processes and results; it is this, in particular, that will be described in the contribution. In this direction, knowledge of the literature on the quality of evaluation, which derives from meta-evaluation and multiple evaluation, was used to focus on central aspects of decision-making processes and to identify the strengths and weaknesses of the Presidio's evaluation capacity. The paper shows how a meaningful experience of locally constructing a system of internal quality necessarily induces reflection on its relationship with external evaluation, in order to promote and improve overall performance and increase the satisfaction of certain standards.
The contribution focuses on the characteristics of the Presidio, designed to give substance to the process of transformation of a small university and to study what affects the production and promotion of educational quality, in line with the construction of an Italian and European system of higher education and the Bologna Declarations.


O 147

Meta-Evaluation and Evaluation-Synthesis to Enhance Evaluation Knowledge and Use


A. Caspari 1
1

University of Applied Sciences Frankfurt a.M., Social Work and Health, Frankfurt am Main, Germany

While there are no uniform definitions of the concepts of meta-evaluation and evaluation-synthesis, the former is usually used in the sense of evaluation quality review and the latter in the sense of a synthesis of evaluation findings. They thus have different purposes: meta-evaluations focus on the quality of one or, more commonly, several evaluations themselves, analysing the extent to which the evaluations prove to be methodologically reliable, with the aim of identifying potential for systematic improvements in the implementation of future evaluations. In contrast, evaluation-syntheses focus on analysing the findings and results of the evaluations in order to systematically identify cross-project impact factors, i.e. factors of success or failure, with the aim of identifying potential for systematic improvements in the implementation of future projects. However, although both instruments can lead to valuable information, they are not common tools in development cooperation, not even at larger development organizations with an in-house evaluation unit systematically implementing a significant number of evaluations. The presentation will demonstrate that meta-evaluation and evaluation-synthesis can generate useful additional information leading to enhanced evaluation knowledge and use, by the example of two studies carried out in 2010 and 2011 on behalf of the German Agency for International Cooperation (GIZ) and financed by the Federal Ministry for Economic Cooperation and Development (BMZ). The studies covered a meta-evaluation as well as an evaluation-synthesis of 15 (2010) and 22 (2011) final evaluation reports of Human Capacity Development Programs of the former Capacity Building International (InWEnt). For the meta-evaluation, a set of about 65 criteria was used, built on the basis of the Standards for Evaluation of the German Evaluation Society (DeGEval), which in turn are based on the Joint Committee Standards on Evaluation. In addition, the Quality Standards for Evaluation Reports of the BMZ were used, which in turn refer to the OECD/DAC standards. The checklist was supplemented by specific additional criteria focussing on the type of research design used, taking into account the international debate on rigorous/quality impact evaluation. In contrast, the evaluation-synthesis was an iterative process of text analysis: a set of criteria (codes) was defined initially, based on common hypotheses about success and failure factors. In the course of the first analysis loop of the evaluation reports (done with MaxQDA), further criteria were ascertained exploratively and added to subsequent analysis loops. In 2011 it was possible to predefine a comprehensive set of codes based on the findings of 2010. This resulted in a full record of text passages for each criterion, which were used for analysis and synthesis. The presentation will describe the processes and methods used for the meta-evaluation and the evaluation-synthesis. It will further show, by highlighting selected results, that implementing such studies annually is of even greater advantage, as comparisons over time lead to additional knowledge. Keywords: Meta-evaluation; Evaluation-synthesis; Evaluation methods and practice; Development co-operation; Evaluation knowledge;
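To illustrate the code-and-collect logic of the evaluation-synthesis described above (the actual studies used MaxQDA with a much larger, iteratively extended code set), here is a minimal keyword-based sketch; the codes, keywords and sample passages are hypothetical.

```python
# Minimal sketch of one coding loop of an evaluation-synthesis: text passages
# from evaluation reports are attached to predefined codes, and passages that
# match no code are flagged as candidates for new codes in the next loop.
# Codes, keywords and sample passages are hypothetical placeholders.

CODES = {
    "ownership": ["partner ownership", "local ownership"],
    "staff_turnover": ["staff turnover", "rotation of personnel"],
    "follow_up_funding": ["follow-up funding", "exit strategy"],
}

passages = [
    "Strong local ownership by the partner ministry sustained results.",
    "High staff turnover in the implementing unit delayed activities.",
    "Alumni networks were not maintained after the programme ended.",
]

def code_passages(passages, codes):
    coded = {name: [] for name in codes}
    uncoded = []                      # candidates for new codes in the next loop
    for text in passages:
        hit = False
        for name, keywords in codes.items():
            if any(k in text.lower() for k in keywords):
                coded[name].append(text)
                hit = True
        if not hit:
            uncoded.append(text)
    return coded, uncoded

coded, uncoded = code_passages(passages, CODES)
print(coded)    # full record of passages per success/failure factor
print(uncoded)  # e.g. the alumni-network passage suggests adding a new code
```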

S3-02 Strand 3

Paper session

Gender and Evaluation: Approaches and Practices II


O 148

Gender budgeting: the northern Uganda perspective


B. Kachero 1
1

Office of the Prime Minister, Monitoring and Evaluation, Kampala, Uganda

Thursday, 4 October 2012, 9:30–11:00

The twentieth resolution of the Fourth High Level Forum on Aid Effectiveness, held in Busan last year, is to accelerate efforts to achieve gender equality and the empowerment of women through development programmes grounded in country priorities. This is in line with the Government of Uganda's National Development Plan 2010/11–2014/15, under which all Districts are required to address gender and equality issues in budgeting and implementation across the critical sectors. This paper establishes the extent to which Districts are addressing key gender and equity concerns in Northern Uganda, given the 20 years of insurgency the region has experienced. The study utilized secondary data for the two Financial Years 2009/10 and 2010/11, focusing on the education, health and agriculture sectors; SPSS was used for the analysis. 70 % of women were not engaged in the budgeting process, the education sector registered a 51 % reduction in spending in FY2010/11, and the proportion of women engaged in modernised agricultural activities is still low. 70 % of health facilities did not have the required number of health officers. Inadequate funding and capacity is a challenge for gender budgeting, hence the need to draw up implementable strategies focusing on sensitization, funding and monitoring of gender interventions across all sectors. Disaggregation of data by gender is still a challenge for many Ministries, Departments and Agencies, and this has led to an unclear connection between sectoral budgets and intended outputs. Keywords: Gender budgeting; Development plans;
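By way of illustration only (the figures below are invented placeholders, not the study's SPSS dataset): the kind of year-on-year sector comparison reported above reduces to simple percentage-change arithmetic, sketched here.

```python
# Illustrative only: percentage change in sector allocations between two
# financial years. All figures are invented placeholders.

allocations = {                       # district budget by sector (currency units)
    "education":   {"FY2009/10": 1000, "FY2010/11": 490},
    "health":      {"FY2009/10": 800,  "FY2010/11": 820},
    "agriculture": {"FY2009/10": 300,  "FY2010/11": 310},
}

def pct_change(sector):
    base = allocations[sector]["FY2009/10"]
    current = allocations[sector]["FY2010/11"]
    return 100 * (current - base) / base

for sector in allocations:
    print(f"{sector}: {pct_change(sector):+.0f} % change in FY2010/11")
# e.g. education: -51 %, mirroring the kind of reduction reported above
```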

O 149

Developing Rural Women's Understanding of Social Accountability


T. Issaka Herman 1
1

IOCE/AfrEA, Ouagadougou, Burkina Faso

This presentation will focus on the different, and changing, views of social accountability in one West African nation, Burkina Faso. In the rural areas of Burkina Faso, social accountability is a concept unknown to the majority of citizens, and many in key decision-making circles prefer it that way. This presentation will discuss the values that influence the different views of social accountability in Burkina Faso and how these values, central to evaluation and to citizen empowerment, differ between urban and rural areas and between educated and illiterate citizens. These views, in turn, affect citizens' expectations of municipal governments. Social accountability can, however, become an empowerment tool. I will describe a program developed by the National Democratic Institute for International Affairs to change non-elected women's understandings of social accountability in 21 rural municipalities, and the assessments and analyses around this program that shed light on the meanings of social accountability. Keywords: Rural Women; Social Accountability;

O 150

Does women's participation in development initiatives make a difference in their lives? Evidence from three provinces in Afghanistan
C. R. Echavez 1
1

Afghanistan Research & Evaluation Unit Gender, Kabul, Afghanistan

The research specifically explored women's participation in the National Solidarity Programme (NSP)'s Community Development Councils (CDCs) as well as in NGO-initiated groups for microfinance under the Microfinance Investment Support Facility for Afghanistan (MISFA). It examined the effects these forms of women's participation are having on gender roles and relations within the family and local community. It further explored what has motivated and enabled women to participate in these different programmes and what has limited their participation. The study covers three provinces. A total of 453 participants were covered in the study, including 96 who were also involved in second-round interviews. The main method used to collect data was the semi-structured in-depth interview, supplemented by focus group discussions, informal conversations and observations. The voices of the participants and informants, especially women, were allowed to be heard, rather than those of the researchers. Learning sessions at various levels (community, facilitating partners, provincial and project management levels), which served to collect feedback and validate the data gathered and information processed by AREU researchers, were conducted to ensure the validity of findings. Thorough feedback and validation processes were carried out long before the final study was completed. Furthermore, engaging with stakeholders over the whole course of the research process is one way to ensure ownership on the part of programme managers and implementers, thus encouraging them to use the research findings in improving their programmes. At the community level, reporting back to the people on findings from interviews and focus group discussions addresses the ethical issues of research by maximising benefits for the people involved in the study. The study returned the information gathered for people to use at their own level and in their own lives. The community were also asked what lessons they had learned from participating in either NSP or MFI development initiatives. This

generated a substantial amount of discussion as residents reflected on what they could do to improve project implementation and how to address constraints that they themselves identified in the research. The study revealed that women's participation in the NSP and MFI could result in both unity and conflict: active participation for some women, but drop-out and disenchantment for others. However, even those who were disenchanted due to accountability and transparency issues noted that the CDC had offered women opportunities not previously available. The CDC created a safe space for women to come together and discuss issues, problems and solutions, and the women involved perceived this particular change as a milestone in their lives.


The changes that happened at the individual, family and community levels were brought about by a number of factors other than the introduction of the NSP and MFI; however, the shura and group loans provided the platform for them to emerge and become noticeable. However small they may appear, women themselves also saw many of the changes they experienced as giant steps, providing a starting point from which to negotiate for their greater empowerment. Community initiatives such as the activities initiated by the NSP and MFI therefore need to be sustained. The multifaceted nature of empowerment means that MFIs and the NSP can only contribute to this process. Changes in gender relations will require a convergence of different activities in different spheres of Afghan society, and microfinance and social development programmes have the potential to be among these contributing activities. Empowerment outcomes associated only with women's participation are not guaranteed. Existing family dynamics and women's power within the family, as well as the quality and processes of individual MFI programmes, are among the factors that can support change in gender relations. Keywords: Gender; Women empowerment; Participation;


O 151

Evaluation of a Workplace Program for Women's Advancement


P. Nanda 1, A. Mishra 1, S. Walia 1, K. Bopanna 1
1

International Center for Research on Women, Social and Economic Development, New Delhi, India

Bio: Priya, Anurag and Sunayana are leading members of the Social and Economic Development Group at the Asia Regional Office of the International Centre for Research on Women (ICRW). They are involved in research, measurement and evaluation of policy and programmatic work on issues related to gender equality and poverty reduction, focusing on economic empowerment and health issues. Rationale for entry in paper session: due to the unique nature of this impact evaluation and the small number of workplace program evaluations, we would like to participate in the paper session. Objectives: to share our experience of (1) understanding and adapting to the challenges of creating a global evaluation framework for a program in diverse settings, and (2) standardizing indicators for measurement across program sites for both monitoring and evaluation. Corporations are increasingly investing in the social capital of their workforce across the globe. Gap Inc., a leading apparel company, has pioneered a program for female garment workers (FGWs) working in select vendor factories in some Asian countries. The Personal Advancement and Career Enhancement (P.A.C.E.) program is a workplace training program for FGWs that aims at positively impacting their lives by providing them with foundational skills and support to help them advance in their workplace and personal lives. From the outset, Gap Inc. has recognized the significance of evaluating the P.A.C.E. program and its impact. As the global evaluation partner on this program, ICRW has evaluated P.A.C.E. in India and Cambodia, and is now undertaking evaluations in more countries. This impact evaluation is unique and innovative because it mirrors the program logic of dual impact by measuring impact in the personal/social and work lives of FGWs, using indicators that are critical to assessing the impact of a skills-based program for women. The indicators are measured on indices created (from correlated questions) for each domain, i.e. self-esteem, self-efficacy, work efficacy, work environment, financial efficacy, communication and gender. With globally applicable and locally measurable indicators, this evaluation is adaptable to cultural variations and to time and resource constraints. Notably, the evaluation approach and tools have been adapted and tested across various program sites in India, Cambodia, Bangladesh and Vietnam, and will be used in China, Indonesia and Sri Lanka. The evaluation follows a pre-post design with a mixed-method assessment using quantitative surveys in combination with qualitative information from in-depth interviews with FGWs and supervisors, and factory database information on attendance, retention and promotion. Additionally, a Global Evaluation Framework (GEF) has been developed with a set of tested, measurable, global indicators that will be tracked systematically in each program site, irrespective of whether an impact evaluation will be conducted there or not. Critically, this evaluation strategy is forward-looking: to accommodate the rapid expansion of P.A.C.E. that is planned for the coming years, it relies on occasional impact evaluation studies in specific cases alongside the systematic collection of GEF data across all sites. Keywords: Training program; Impact evaluation;
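A minimal sketch, with invented item names and scores, of the index-and-compare logic described above (domain indices built from correlated survey questions, compared pre- and post-programme). It is not ICRW's actual instrument, and no particular weighting or scaling of the real Global Evaluation Framework is implied.

```python
# Sketch of the pre/post index logic: each domain index is the mean of its
# (correlated) survey items, and programme change is the post-minus-pre
# difference per domain. Domain names follow the abstract; item identifiers
# and scores are invented placeholders.

DOMAINS = {
    "self_esteem":        ["se_q1", "se_q2", "se_q3"],
    "work_efficacy":      ["we_q1", "we_q2"],
    "financial_efficacy": ["fe_q1", "fe_q2"],
}

def domain_scores(responses):
    """responses: {item_id: value on a 1-5 scale} for one respondent."""
    return {d: sum(responses[i] for i in items) / len(items)
            for d, items in DOMAINS.items()}

pre  = {"se_q1": 2, "se_q2": 3, "se_q3": 2, "we_q1": 3, "we_q2": 2,
        "fe_q1": 1, "fe_q2": 2}
post = {"se_q1": 4, "se_q2": 4, "se_q3": 3, "we_q1": 4, "we_q2": 4,
        "fe_q1": 3, "fe_q2": 3}

change = {d: domain_scores(post)[d] - domain_scores(pre)[d] for d in DOMAINS}
print(change)   # positive values indicate movement in the intended direction
```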


S1-07 Strand 1

Paper session

M&E systems and real time evaluation II


O 152

Lessons from the Health Impact Assessment (HIA) strategy of Quebec: how networks enhance organisational change
P. Smits 1, J. L. Denis 1, M. F. Duranceau 1, L. Jobin 2, C. Druet 2, J. Préval 3
1 2

Thursday, 4 October 2012, 9:30–11:00

ENAP, Montréal (Québec), Canada; 2 Ministère de la Santé et des Services sociaux, Québec, Canada; 3 ENAP, Montréal, Canada


Health Impact Assessment (HIA) is an evaluation procedure to ensure that all levels of government consider the potential impact of their decisions on the health and well-being of the population. We conducted an evaluation of the impact that HIA, and other health-oriented network practices of the Quebec government, have on learning processes and on capacity building within organizations. We used multiple case studies with embedded units of analysis. In each case, we conducted semi-structured interviews and collected documentation. The analysis is based on Nonaka's theory of organizational learning. The results highlight four mechanisms occurring (or not) in ministries: socialization, where exchanges around HIA occur; externalization, where supports and structures are put in place; combination, where HIA gets integrated into previous procedures and processes; and internalization, whereby HIA becomes taken for granted. The findings also emphasize the importance of the coordination unit and of networks in the development of HIA and in reinforcing cooperating ministries. The paper will dwell on new thinking around networks' increased influence on policy/decision making. Keywords: Health evaluation; Network;

O 153

Ushahidi Haiti Project evaluation: Evaluating the use of crowdsourced information from crisis affected people for emergency response
N. Morrow 1, N. Mock 2
1 2

Tulane University, Public Health Law Social Work, New Orleans, USA; 2 Tulane University, Public Health, New Orleans, USA

Crisis mapping is a new technique that provides crowdsourced information dynamically through a map and graphic aggregator during and after crisis events. It combines advances in mobile computing, social media and internet-based data aggregation, visualization and mapping. The Ushahidi Haiti Project was a volunteer effort that endeavored to bring together information about the needs of earthquake-affected people from new media sources such as Facebook, blogs, and Twitter. Affected people were also encouraged to send text messages describing their needs to a local phone number in Haiti. These messages were then classified and dynamically mapped. This paper presents the results of an independent evaluation of the Ushahidi Haiti Project. The evaluation's central focus on the use of the maps and reports for emergency response showed mixed results. Questions of efficiency also showed room for improvement in many processes. Nonetheless, this innovative approach to crisis information was relevant to the emergency response community and will no doubt be a feature of future emergency response efforts. Keywords: Humanitarian Response; Social Media; Innovative Methods;
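As an illustration of the classify-and-map step described above (not the Ushahidi platform's actual code or category scheme): a small sketch that assigns incoming messages to need categories by keyword and aggregates them into map-grid counts. Categories, keywords, messages and coordinates are invented; the real Ushahidi Haiti workflow also relied heavily on volunteer translation and manual classification.

```python
# Sketch of the classify-and-aggregate step behind a crisis map: free-text
# messages are tagged with need categories by keyword and counted per rough
# lat/lon grid cell for mapping. All categories, keywords, messages and
# coordinates below are invented placeholders.

CATEGORIES = {
    "water":   ["water", "thirst"],
    "medical": ["injured", "medicine", "doctor"],
    "shelter": ["tent", "shelter", "collapsed"],
}

messages = [
    {"text": "Family trapped, two injured, need doctor", "lat": 18.54, "lon": -72.34},
    {"text": "No water in camp since yesterday",          "lat": 18.55, "lon": -72.33},
    {"text": "House collapsed, need tent",                "lat": 18.54, "lon": -72.34},
]

def classify(text):
    """Return the list of matching need categories, or 'other' if none match."""
    text = text.lower()
    return [c for c, kws in CATEGORIES.items()
            if any(k in text for k in kws)] or ["other"]

def grid_counts(msgs, cell=0.01):
    """Count categorised messages per lat/lon grid cell for mapping."""
    counts = {}
    for m in msgs:
        cell_key = (round(m["lat"] / cell) * cell, round(m["lon"] / cell) * cell)
        for cat in classify(m["text"]):
            counts[(cell_key, cat)] = counts.get((cell_key, cat), 0) + 1
    return counts

print(grid_counts(messages))
```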

O 154

Harnessing the Power of Real-Time Data: Learning lessons from Concern Worldwide
K. Matturi 1
1

Concern Worldwide, Strategy Advocacy and Learning, Dublin, Ireland

The nature of the work that International Non-Governmental Organisation (INGO) staff undertake makes it difficult for them to allocate the time needed to monitor and evaluate their work. The demands of donors with respect to project implementation, budget oversight and so on are often so relentless that little time is set aside for Monitoring and Evaluation (M&E). The latter is often seen at best as a luxury and at worst as an unwanted top-down accountability mechanism. However, the ongoing financial austerity facing a number of countries has brought the need for demonstrating evidence-based results centre stage within the international aid sector. There is a growing call for results-based management, whereby development actors are asked to be accountable for and demonstrate the achievement of measurable results (Paris Declaration 2005, Busan 2011). One of the ways Concern Worldwide has responded to this push for evidence has been to invest in digital technology as a means of measuring the impact of its overseas work. Founded in Ireland in 1968, Concern Worldwide is Ireland's leading international humanitarian organisation, dedicated to ending extreme hunger and transforming the lives of the world's poorest people. In 2010, Concern received a grant from Accenture's Global Giving Project to fund interventions around Conservation Agriculture (CA) in Malawi and Zambia over a three-year time frame. From the start, M&E was seen as key to measuring the results of the intervention. In implementing the CA interventions it was decided to use digital data gathering (DDG) technologies, involving the use of electronic devices to collect monitoring data. This process has led to a number of advantageous developments. Firstly, farmers were able to access data in real time, allowing them to make informed decisions. Secondly, there was a huge reduction in the amount of time spent inputting data.

In 2010, Concern developed a humanitarian programme in response to the 2009/2010 Niger drought and food crisis. Households in targeted villages received monthly cash transfers as part of a social protection programme. One-third of targeted villages received a monthly cash transfer via a mobile money transfer system (called zap), one-third received manual cash transfers, and the remaining one-third received manual cash transfers plus a mobile phone. An impact evaluation showed that the zap delivery mechanism strongly reduced the variable distribution costs for Concern, as well as programme recipients' costs of obtaining the cash transfer.


Keywords: Technology; Digital Data Gathering; Results; NGO;



S2-21 Strand 2

Paper session

Multinational evaluation
O 156

Is a similar Theory of Change suitable for the evaluation of Biodiversity Conservation through certified coffee plantations in different countries?
C. Vela 1
1

self employed, Quito, Ecuador

Thursday, 4 October 2012, 9:30–11:00

Objective: Certified coffee grown under forest cover is considered an important strategy for biodiversity conservation. A comparative analysis was carried out of evaluations of this kind of project in El Salvador and Ecuador, looking for similarities and seeking to answer whether a similar Theory of Change (TOC) could be applied. Methodology: The evaluations were done separately in El Salvador and Ecuador. In El Salvador the evaluation applied the Objectives to Results methodology (designed for the GEF EO) with a participative design of the TOC, whereas in Ecuador the evaluation followed the project's mid-term and final evaluation procedures, focusing on compliance with general project objectives and predefined indicators. Nevertheless, because of the similar underlying logic of pursuing biodiversity conservation through coffee plantations, a further comparison, taking into account the differences of Ecuador's context, was carried out to analyse whether the TOC applied in El Salvador would be equally suitable in Ecuador. Both evaluations included field visits to farms and interviews with many stakeholders such as producers, certifiers and authorities. Results: Despite information gaps, the evaluation helped to identify strengths and weaknesses of a strategy applied for over a decade in different countries within different social, economic and ecological contexts. This sort of analysis could be useful for decision makers and evaluators, considering the substantial budgets invested through bilateral and multilateral donations and loans in this kind of strategy. Keywords: Theory of Change; Biodiversity Conservation; Certified Coffee; Comparative Case Studies;

O 087

Assessing the relevance and effectiveness of ERDF support to regions with specific geographical features
B. Giordano 1, P. Van Bunnen 1, M. van Overbeke 1
1

ADE S.A, Louvain-La-Neuve, Belgium

This paper discusses some of the main findings from a 12-month study carried out on behalf of DG Regional Policy on the relevance and effectiveness of ERDF and Cohesion Fund support to regions with specific geographical features: islands, mountainous regions and sparsely populated regions. This work is particularly timely given the policy discussions surrounding territorial cohesion across the EU. Focusing on the 2000–06 and 2007–13 programming periods, the paper first provides a review of some of the theoretical debates about the ways in which these territories can be defined, as well as the different challenges they face and the respective policy approaches that have been developed. What seems to matter more for a region's economy is not belonging to one (or several) of the three territorial types per se, but the intensity and mix of the inherent characteristics it is exposed to (e.g. remoteness, accessibility, small local markets, transport costs). Second, a brief summary of the analysis of ERDF interventions in fifteen selected NUTS2 regions is discussed, based on desk research drawing on programme data and documents for the two programming periods. Third, the main focus of the paper is to explore the findings generated from six case studies carried out in various regions across the EU. The analysis of the case studies confirms that issues such as remoteness and accessibility are common to all regions, but that a very important non-geographic challenge faces all of them: demographic change. The last section then summarises the key policy conclusions, arguing that ERDF is an appropriate tool for the development of regions with specific geographical features and that the existing framework provides the necessary funding, flexibilities and focus for effective economic development projects to be developed. The paper stresses, however, that certain improvements could be made to enhance the ways in which ERDF can be utilised in the three types of territory in the next programming period, 2014–2020. Keywords: Regional development; Regions with specific geographical features; ERDF and territorial cohesion;


S2-34 Strand 2

Panel

Theories in evaluation
O 158

Theories in evaluation
F. L. Leeuw 1, E. Vedung 2, S. Donaldson 3, G. Henry 4
1 2 3

Thursday, 4 October 2012, 11:15–12:45

University of Maastricht, Maastricht, Netherlands; 2 University of Uppsala, Uppsala, Sweden; 3 Claremont Graduate University, Los Angeles, USA; 4 University of North Carolina at Chapel Hill, North Carolina, USA


In December 2011, EES organized a session on logic and vision in evaluation. A large part of the discussion was dedicated to the role theories play in evaluation: what do we mean when we talk about theories and evaluation? How do theories of evaluation relate to (explanatory) theories from disciplines, interdisciplines and transdisciplines? How do theories of evaluation relate to policy, program and intervention theories? What is needed to label something a theory (of evaluation)? This session plans to discuss these and related questions. Keywords: Theories; Evaluation; Transdiscipline; Validity; Relevance;


S3-22 Strand 3

Panel

Evaluation for equitable development


O 159

Evaluation for equitable development results


M. Segone 1, M. Bustelo 2, B. Sanz 3, H. Lundgren 4, N. York 5
1 2 3

Thursday, 4 October 2012, 11:15–12:45

UNICEF, New York, USA European Evaluation Society, Madrid, Spain UN Evaluation Group and UN Women, New York, USA 4 OECD/DAC Evaluation Network, Paris, France 5 DFID, London, United Kingdom

Chair:

Marco Segone (UNICEF Evaluation Office; Co-chair UNEG Task Force on National Evaluation Capacities; former IOCE Vice President)

Panellists: Belen Sanz (Chief, Evaluation Office, UN Women; Chair, UNEG); Hans Lundgren (Manager, OECD/DAC Evaluation Network); Maria Bustelo (President, European Evaluation Society); Nick York (Chief Professional Officer for Evaluation, DFID). When world leaders adopted the Millennium Declaration in 2000, they produced an unprecedented international compact, a historic pledge to create a more peaceful, tolerant and equitable world in which the special needs of children, women and those who are worst-off can be met. The Millennium Development Goals (MDGs) are a practical manifestation of the Declaration's aspiration to reduce inequity in human development among nations and peoples by 2015. The past decade has witnessed considerable progress towards the goals of reducing poverty and hunger, combating disease and child mortality, promoting gender equality, expanding education, ensuring safe drinking water and basic sanitation, and building a global partnership for development. But with the MDG deadline only a few years away, it is becoming ever clearer that reaching the poorest and most marginalized communities within countries is pivotal to the realization of the goals. A focus on equity in national public policies and programmes has now become a moral imperative. In September 2010, UNICEF launched the publication Narrowing the Gap, making the argument for why it is important to achieve the MDGs with equity. UN Women and the United Nations Evaluation Group have been working on the development of guidelines and tools to support evaluators in addressing human rights and gender equality dimensions in evaluation practice. This panel aims to contribute to the international debate on how the evaluation function can contribute to achieving equitable development results by conceptualizing, designing, implementing and using evaluations focused on human rights, gender equality and equity. The panellists will address the following questions: Why should evaluation be sensitive to human rights, equity and gender equality? What are the implications (methodological, political, developmental) of evaluating pro-equity policies and programmes? What evaluation questions and methods are appropriate for evaluating policies and programmes whose objectives are to narrow the gap between the best-off and worst-off populations? Keywords: Equity; Equality; Human rights; Development;


S4-12 Strand 4

Paper session

Evaluation of innovation policies and innovative programmes


O 160

Using evaluation to facilitate sustainable transport


G. Ellis Ruano 1
1

Gellis Communications, Brussels, Belgium

Thursday, 4 October 2012, 11:15–12:45

The European Union is committed to fighting road congestion and optimising existing freight transport networks and capacities, thereby contributing to the sector's economic, social and environmentally sustainable development. The European Commission's Executive Agency for Competitiveness and Innovation (EACI) therefore established the Marco Polo (MP) funding Programme, aimed at European shippers, transporters and logistics operators committed to finding new, more sustainable ways to transport goods. However, in 2008 it was determined that MP did not receive sufficient requests for funding, resulting in its budget being underspent. The Programme also suffered from a lack of visibility. Communications was seen as vital to redressing both these issues. The EACI decided to conduct an evaluation of MP's communications programme, which would: assess the information and communication needs of key MP stakeholders; appraise the effectiveness of previous and current MP communication activities and determine their alignment with expected outcomes; and analyse communications shortcomings when it came to reaching target groups. The broad objective of the evaluation was to determine a comprehensive strategy to promote and increase the visibility of the programme and its projects. Gellis adopted a multi-pronged model to evaluate the problems with MP's communications strategy and to gauge the needs and opinions of MP's key stakeholders. The research phase included examination of communication outputs related to the MP Programme and the use of qualitative/quantitative indicators to formulate judgements. Due to the lack of initial benchmarks against which indicators could be compared, the evaluation was structured using a Research Matrix. The Research Matrix not only evaluated the data collected, but also established benchmark measurements for comparison during future evaluations. The Research Matrix considered the issues at hand; the judgement criteria to determine whether issues were a success or failure; key performance indicators; and sources of verification. Moreover, the Matrix also allowed for a reconstruction of the high-level SMART objectives behind the programme, which were previously non-existent. These objectives were constantly updated and redefined during the research process in order to keep them feasible. Research conducted included: expert review of existing communication tools and activities; review of programme documents; media analysis; consultations with EACI staff; online surveys of beneficiaries and dissemination points; focus groups; and interviews with journalists. The research phase revealed that, at programme level, the programme's goals and messaging were not in sync with the beneficiaries' motivations for applying for funding. At the communications level, the strategy lacked SMART objectives and a comprehensive classification of stakeholders, and showed limited timeliness in the production and execution of outputs, an unfocused media strategy and inefficient promotion methodologies. At the operational level, the stance of EACI staff towards communications lacked focus and drive. Indicators that highlight the evaluation's success include: implementation of the proposed Communications Plan developed by the consultant; a change in attitudes towards communications on the part of the EACI; and improved effectiveness of the programme (2009: 70 proposals worth €224.1 m against a budget of €61.3 m; 2010: 101 proposals worth €235 m against a budget of €64 m).

Keywords: Communications; Europe wide; Sustainable transport; European Commission; Strategy development;


O 161

Evaluating innovative, multi-site, early childhood interventions on a shoestring budget. Priorities and predicaments.
S. Bohni Nielsen 1, T. Hejgaard 2
1


Ramboll Management Consulting, Copenhagen, Denmark; 2 Danish National Board of Health, Copenhagen, Denmark

In 2010 the Danish National Board of Health (NBH) announced the launch of a program for two innovative early childhood interventions, ICDP Health (ICDP-H) and A Good Beginning Together (AGB-T). The aim of the program was to strengthen children's well-being in at-risk families. The target population for ICDP-H is at-risk families with children aged two to five years old; for AGB-T it is families from pregnancy (week 32) until the child is three years of age. For each intervention, criteria to define at-risk families were developed. Both interventions were group-based (mother and father) with 6–11 group sessions. ICDP-H is a short-term intervention (2–6 months), whereas AGB-T is more extensive and long-term (3 years). Each intervention was inspired by components from established interventions, but was to be implemented for the first time in Denmark. The NBH allocated 39.4 million DKK (EUR 5.3 million) from governmental pool funds. 14 municipalities were granted funding to implement either AGB-T or ICDP-H in the local government setting. 500,000 DKK (EUR 68,000) was allocated for evaluation purposes (approximately 1.3 % of the program budget). Further, the NBH stipulated in the grant requirements that municipalities had to evaluate their own local intervention in line with a central evaluation plan. The NBH faced a difficult challenge. First, it wanted a systematic evaluation methodology to collate data and gain knowledge of the interventions' implementation and effectiveness across several implementation sites. Second, the budget for conducting a rigorous evaluation was limited. Third, the novelty of the interventions raised serious questions as to whether an experimental design would be too much too soon. Thus, the evaluation had three main focal areas: (i) outcomes of the intervention, (ii) fidelity of implementation, and (iii) context of implementation. Ultimately, a theory-based approach using a time-series design was chosen; the design will allow for a subsequent quasi-experimental design. In this paper, the NBH and the external evaluator present the evaluation design and the evaluation plan for the interventions, which enable the collation of data from multiple sites and on several topics while operating with a limited budget. The presenters will discuss the deliberations and trade-offs made when determining the evaluation design in real-world evaluations. Keywords: Evaluation model; Innovation; Theory-based evaluation; Multi-site evaluation;


O 162

Credible evidence is equitable evidence in evaluation


J. Greene 1
1

University of Illinois, Educational Psychology, Champaign, Illinois, USA

Evaluation is an empirical, political, and social practice in which evaluators gather and interpret data, and then use these interpretations to render consequential judgments regarding the quality and effectiveness of targeted programs and policies. The empirical face of evaluation is grounded in its central task of gathering information in targeted contexts about a program's design, implementation, and outcomes. The political face of evaluation is engendered by its central role of making judgments of quality, which in turn inform policy decisions and directions, a role inevitably imbued with values and politics. And the social countenance of evaluation is inherent in its enactment as a relational and negotiated practical process. There are considerable pressures on evaluation today to generate credible evidence upon which to base decisions about policies, practices, and concomitant resource allocations (Coalition for Evidence-Based Policy, http://coalition4evidence.org/wordpress/; Donaldson, Christie, & Mark, 2009). The evidence most prized concerns the attainment of targeted outcomes, including the causal connections between particular interventions and these desired outcomes. The pressures for such evidence are part of the larger new public management philosophy's insistence on outcomes and results. That is, the push for credible, also called scientific, evidence in government decision making must be seen as part of contemporary accountability politics. Further, accompanying the demand for credible/scientific evidence has been the privileging of a particular evaluation methodology viewed as best at generating the scientific outcome evidence desired, and this privileged methodology is the randomized experiment. This paper offers an alternative understanding of the nature of credible evidence in program evaluation, particularly democratic evaluation. In this alternative understanding, the credibility of evaluative evidence is not automatically granted via the use of particular empirical methodologies, but rather is earned through inclusive, relational, and dialogic processes of interpretation and action that happen on the ground, in context, in interaction with stakeholders. Conceptualizing credibility as earned is particularly important within a democratic vision for evaluation. The argument in this paper seeks to reclaim the concept of credible evidence from its narrow definition as causal claims regarding intended outcomes and its pristine position as requiring only a methodological warrant. The argument seeks to reassert the key importance of democratic values in assessing the credibility of evidence and thus also to reframe this assessment as an inclusive, dialogic process rather than a matter of methodological purity. Well beyond good method, making meaningful and consequential judgments about the quality and effectiveness of social and educational programs requires engagement, interaction, listening, and caring. Keywords: Credible evidence; Democratic evaluation;


O 163

Competitiveness and Inclusiveness: Dual Criteria for Innovation Policy Evaluation


A. L. P. Cheng 1
1

NCTU and Chung-Hua Institution for Economic Research, Taipei, Taiwan


The purpose of science and technology development is, no doubt, centred on the advancement of industrial production and the betterment of people's livelihoods. In the short run, science and technology development may stimulate the advancement of national competitiveness, while in the long run it may transform the national economy into one with a high capacity for generating sustainable growth. Competitiveness is one important indication of a country's past effort and progress in realizing its overall potential. At the same time, its achievement relies not simply on competition in markets and among organizations; it also relies on cooperation among competitors. The Framework Programmes in the European Union, especially in recent years, have addressed important issues through programmes for social inclusion, meant to harmonize societal development driven by scientific as well as technological opportunities.


Social inclusion is not only a goal of the Framework Programmes, but also a research instrument for targeting a high level of STI policy achievement. In this sense, national competitiveness shares the same spirit as a driving force of cross-Member State cooperation in the EU. Hence, the interactive role of STI policy under the European Commission is to shape the status of competitiveness as well as of inclusiveness. Competitiveness and inclusion are identified in this paper as the twin goals of EU policy pursued through the Science-Technology-Innovation enhancing Framework Programmes. This paper makes use of the analytical framework of Market-Institution-Technology (M-I-T) devised by Cheng (2005) to analyze and evaluate STI policy and its effectiveness. As a result, this paper highlights the dynamism of STI policy in upgrading competitiveness and inclusiveness in the EU in terms of the role of the series of FPs and the network of Member States' activities in the FPs. This is followed by identifying the twin criteria used to examine the achievement of STI policy in terms of the performance of the FPs along their policy and programme paths. In essence, it can be recognized that both criteria are key success factors for the sustainability of the European Union's STI policy. Keywords: Policy Evaluation; Competitiveness; Inclusiveness; Evaluation Criteria; Innovation Policy;


S2-12 Strand 2

Paper session

Evaluation of competencies
O 164

Uses of evaluation: what happened with instrumental use?


F. Alvira 1, F. Blanco 1, A. Lahera 2, D. Betrisey 3, C. Velazquez 4, C. Mitxelena 5, M. J. Aguilar 6
1 2 3

Thursday, 4 October 2012, 11:15–12:45

UCM, Sociología IV, Madrid, Spain; 2 UCM, Sociología III, Madrid, Spain; 3 UCM, Antropología, Madrid, Spain; 4 UCM, Ciencia Política I, Madrid, Spain; 5 UCM, Economía Aplicada V, Madrid, Spain; 6 UCLM, Trabajo Social, Cuenca, Spain

We offer a new analysis of our data from the research project financed by the Spanish National Plan I+D entitled The uses of evaluation: the case of the Docentia Program in Spain, which is being undertaken by the research group EVALMED of the Universidad Complutense de Madrid. The research is being carried out in four Spanish universities, three in the Madrid region (Complutense, Autónoma and Carlos III) and one in Navarra (Universidad Pública de Navarra). As W. R. Shadish, T. D. Cook and L. C. Leviton (1991) wrote, and more recently Fred Carden and Marvin Alkin (2012), evaluation use is one of the key elements both for understanding and for justifying evaluations. In the past fifteen years (R. Cummings, 2002; K. E. Kirkhart, 2000; R. B. Johnson, 1998), the primacy of instrumental use has been questioned and many different types of use have been identified in an attempt to better reflect reality, until Kirkhart (2000) proposed replacing use with influence. In this paper we reflect on whether it makes sense to leave out instrumental use; we also explore what instrumental use really is. We analyze first what instrumental use really means in Docentia, a program aiming to enhance the quality of teaching at Spanish universities, and then we analyze what kind of instrumental use is taking place and what the factors associated with use and non-use are. All the different types of use appearing in the relevant literature can be identified as taking place in Docentia, and the same holds for factors associated with use, related to characteristics of both the evaluation and the organization. The big question is: what about instrumental use? Most professors recognize that evaluation (evaluation reports, filling in the self-reports, or suggestions received for teaching improvement) has pushed them to reflect on their teaching, and sometimes to make changes to improve it. Instrumental utilization has been recognized in a variety of aspects: changes in teaching methods, in subject syllabi, and in the ways students participate in learning and teaching; direct changes promoted by academic authorities prioritizing specific evaluation criteria for assigning income complements following evaluation results; and, lastly, strong changes such as the non-renewal of contracts for non-permanent professors following a negative evaluation. Are there differences between professors evaluated in Docentia and the rest of the professors concerning the quality of teaching? As of today, the answer is that there are no differences. Participating in Docentia has meant for the majority of professors a call to show more interest in their teaching activities. But there is wide heterogeneity in the amount of interest shown: some professors show a strong motivation for improving the quality of their teaching; others show motivation only for the formal aspects of teaching. These variations are apparently independent of Docentia. Keywords: Uses of evaluation; Evaluation of faculty; Quality of teaching;

O 165

What do skills demonstrations reveal? The reliability and confidence in the assessment process of skills demonstrations in VET
M. Räkköläinen 1
1

Finnish National Board of Education, Evaluation Unit / Professional Development of Education Personnel, 00530 Helsinki, Finland

Student assessment for upper secondary vocational qualifications has been reformed such that vocational competence is assessed on the basis of skills demonstrations. Skills demonstrations enable students to demonstrate their competence either at the workplace or in other surroundings that provide an accurate representation of the functional modules relevant to their occupational proficiency. In addition to student assessment, the national evaluation of learning outcomes in vocational education and training was also reformed to become skills-demonstration-based. As a result, assessment information for national evaluations is obtained directly from skills demonstrations and separate national tests are no longer needed. However, the coordination of student assessment and national evaluation involves tensions between the different aspects of assessment. The purpose of my study was to provide information on the use of assessments in order to develop the skills-demonstration-based assessment and evaluation system further, to increase theoretical understanding of assessing vocational competence on the basis of vocational skills demonstrations, and to observe the tensions present in assessment, the reliability of assessments and confidence in the assessment process. The study also strives to provide relevant information to facilitate drawing practical conclusions in the process of developing the assessment system based on skills demonstrations, and the relationship between assessment and learning is observed. My presentation will focus on how the reliability of assessment information is ensured and how confidence in the new system of assessing vocational competence on the basis of skills demonstrations is promoted, especially from the point of view of assessors and of collaborative assessment (teachers, representatives of working life and students' self-assessment). Skills demonstrations have many functions and purposes in political decision-making and national guidance, and the objectives of skills demonstrations tend to include both formative and summative assessment targets in parallel. Regulations concerning skills demonstrations involve balancing between tensions: there is ambivalence between control and trust in all contexts of quality assurance of assessments based
on skills demonstrations. Assessments based on skills demonstrations are considered to be an accurate and usable method of student assessment, because skills demonstrations are authentic situations and assessment is based on set criteria. Vocational skills demonstrations increase the accuracy and validity of assessment results, but reliability is an issue in the national evaluation of learning outcomes. Positive effects of assessment based on skills demonstrations, such as participation and learning experiences, increase confidence in the new assessment system. Quality assurance and confidence are linked in a positive way: the same factors that promote the quality of skills demonstrations and the reliability of learning outcomes also increase confidence in assessments. These factors include collaborative assessment, target- and criteria-based assessment, useful feedback, participation and peer review in the quality assurance of skills demonstrations. In national evaluation, confidence in skills demonstrations is undermined by uncertainty about their purpose and significance. Confidence in assessments could be boosted by increasing the provision of training for assessors, clarifying assessment criteria, and providing more comparison material and background information to support the interpretation of national learning outcomes. Keywords: Evaluation of education; Learning outcomes; Demonstrated competence; Professional and political confidence; Evaluation tensions; Participatory accountability; Peer review;

Thursday, 4 October, 2012

11:15 – 12:45

O 166

Evaluation of competences – Methodology for a transdisciplinary longitudinal study of a gap between graduates' competences and labour market requirements
M. Mikos 1, A. Istenic Starcic 1
1

University of Ljubljana, FGG, Ljubljana, Slovenia

The competence concept, its modelling, needs assessment and evaluation is a challenging area of research in education as well as in transdisciplinary research. Competences in contemporary society can no longer be described as a fixed set of skills; they represent a dynamic combination of abilities, skills and knowledge. Competences are context dependent and reflect a person's potential as realised in different contexts. They are developed in processes of learning, education, training and work-based activities. McClelland, as an alternative to the measurement of general cognitive abilities, identifies the testing of competency with criterion sampling, i.e. the testing of authentic tasks from real-life, work-based environments (McClelland, 1973). The basic research project RAZKORAK (GAP / DIVIDE / DISPARITY), running from 2011 to 2014, is presented by the project leader, the author of this paper, focusing on evaluation research methods and on analysing a measurement construct. The evaluation methodology, with instrument design and testing, is presented, discussing different evaluation approaches and traditions. The role of information and communication technology assisting web-based evaluation, in comparison with traditional paper-based or oral evaluation, is explored. How the diverse perspectives and interests of participants, including students, graduates, teachers and employers, affect the evaluation process is outlined. The objectives are: (1) instruments for competence potential evaluation and assessment and the analysis of the divide between needed and realised competences, and how the instruments support competence evaluation and management in different areas; (2) instruments for longitudinal analyses of competence potential, discussed in technology (in the fields of civil engineering, geodesy and electrical engineering), in education, psychology and health. Participant groups consist of students, graduates and employers. Keywords: Evaluation in education; Competences; Measurement construct; Instrument design;


S2-43 Strand 2

Panel

Risk Assessment, Monitoring and Evaluation in Food Safety – the case of the Codex Alimentarius
O 167

Risk Assessment, Monitoring and Evaluation in Food Safety – the case of the Codex Alimentarius
Thursday, 4 October, 2012
11:15 – 12:45
K. Forss 1, J. Andersson 2, C. Mulholland 3
1 2

Andante, Strängnäs, Sweden Sivik, Lund, Sweden 3 WHO, Codex Trust Fund, Geneva, Switzerland

The Codex Alimentarius is a global reference point for consumers, food producers, national food control agencies and the international food trade. The Codex Alimentarius system is an opportunity for all countries to take part in formulating and harmonizing food standards and ensuring their global implementation. These standards are developed through the work of the Codex Alimentarius Commission, which was established through resolutions of the FAO and the WHO in the 1960s. When the Codex Alimentarius was evaluated at the beginning of the last decade, one of the shortcomings identified was the absence of many developing countries from the negotiating tables. Food exports are an important source of income for many of these countries, and hence there is a need to apply standards in order to access export markets. The question of food safety for imports as well as for locally produced foods is also important. Within this complex system of knowledge production and use, M&E plays an important role. From the grounded practice of evaluative research on food safety and health, up to the management of the international negotiation system, there are distinct methodological as well as cultural, social, political and economic challenges for evaluation. In this panel the connections between M&E at different levels of the food safety and health system are described and debated. The panel discussion will thus start with the practical realities of evaluative risk assessment research in the area of food safety, with particular reference to the capacity for undertaking risk assessments in developing and transition economy countries. The panel will then proceed to consider the importance of this risk assessment work for the development of equitable and relevant international food safety standards. Finally the panel will look at the use of monitoring and evaluation efforts to ensure wide and effective participation of developing and transition economy countries in international standard-setting processes, using the example of the Codex Alimentarius Commission. This approach illustrates the network character of systems evaluation and explores the connections between geographic and conceptual/hierarchical levels. As a result of the panel discussion it is expected that members of the evaluation community will have a greater understanding of the connections between M&E at different levels to ensure positive outcomes for food safety, specifically: the use of M&E at national level for risk assessment and its use in national policies, and at international level for the development of international food safety standards; and the use of M&E at international level to ensure the wide and effective participation of developing and transition economy countries in international standard-setting processes, using the example of the Codex Alimentarius Commission. It is also expected that participants from the food safety risk assessment community will have a greater understanding of how their inputs contribute to overall monitoring and evaluation for better management. Keywords: Design of Evaluation System; Food Safety; Risk Assessment;


S3-11 Strand 3

Paper session

Evaluation use and useability II


O 168

Interaction research for enhancing evaluation impact


F. B. van der Meer 1, M. Kort 1, M. van Twist 1
1

Erasmus University Rotterdam, Public Administration, Rotterdam, Netherlands

Thursday, 4 October, 2012

11:15 – 12:45

Evaluation studies typically are meant to assess the effects or impact of a certain policy, organizational strategy, consultation arrangement or other intervention. Often evaluation fulfills a function in accountability frameworks. However, evaluation studies are increasingly also seen as potential triggers of learning processes that can directly or indirectly contribute to the improvement of practices (De Leeuw & Sonnichsen, 1994; Preskill & Torres, 1999a; Van der Meer & Edelenbos, 2006). Theory and research reviewed in the paper identify conditions and mechanisms that enhance learning and the impact of evaluations. If evaluators want to increase the impact of their work, it is important to consider ways in which they can use these insights. Since a considerable part of the insights on the production of evaluation impact relates to the direct or indirect interaction between evaluator and evaluated, and to the sensemaking taking place in these interaction processes, it is vital to realize that evaluators shape their actions in an interactive and sensemaking context. They are engaged in sensemaking themselves (how do they interpret the situations, the positions of relevant actors and the insights on the production of impact?) and they are engaged in interaction with the evaluated and other actors, thus influencing, intentionally or not, the sensemaking of these other actors. These interactions, as well as the interactions among the other actors involved, determine the eventual impact of an evaluation (process) to a large extent. The paper develops a methodology to enhance evaluation impact through interaction research (Zouridis, 2003). Key questions are: What do evaluators actually do to enhance impact and why do they do that (e.g. what is their theory in use on the production of impact)? How are the impacts of their actions socially constructed? How can evaluators translate insights from impact research and theory into their actual practice? To answer such questions in the framework of interaction research, external researchers observe and participate in the interaction and sensemaking among evaluators in the actual evaluation process, they reflect with them on these processes and alternative strategies, and they interview other relevant actors to assess the construction of impacts and potential alternatives. The paper describes the methodology of interaction research and its rationale in detail, with examples from a number of projects of the Dutch Court of Audit in which we were engaged along these lines. In the discussion we reflect on methodological complications, e.g. that we may have influenced the process and the substance of the evaluations, both intentionally and unintentionally. We will argue that, with an adequate level of distance and reflexivity, this may contribute to the quality and impact of the evaluations, and to valid knowledge on how impact is and can be produced in practice. Keywords: Impact of evaluation; Research method; Social construction; Audit institutions; Learning evaluation;

O 169

Developing Effectiveness Evaluation in Adult Social Work (EEA)


M. Kivipelto 1, P. Karjalainen 1
1

National Institute for Health and Welfare, Social Work Evaluation (FinSoc), Helsinki, Finland

This presentation examines the experiences drawn from a project aimed at developing effectiveness evaluation measures in Finnish adult social work. The development work was carried out by the National Institute for Health and Welfare (THL), in co-operation with three communal social services departments (Helsinki, Seinäjoki and Tuusula). In Finland, there are currently no appropriate measures for evaluating the effectiveness of social work among adults. In the EEA project, a new model for measuring effectiveness in social work with adults has been developed. The theoretical base is in realist single-case evaluation. The measures have been developed in cooperation between THL and the three pilot communes. The measured concepts have been defined according to social work theory (taking into account earlier definitions) and also with the help of social work professionals. It has been noticed that the use of realist single-case evaluation makes it possible to measure the connections between specific social work methods and their results. These measures are empirically tested in the pilot communes. The testing period is documented and the results are analyzed by researchers and municipal representatives, later to be published by THL. At the moment, our research questions are: 1) What kind of measure was developed in each pilot; 2) What kind of knowledge did the resultant measures produce; 3) What was the benefit of the measure to the social work initiative; 4) What were the critical issues that emerged during the process? In this paper, we will describe the initial benefits of effectiveness evaluation measures and also consider the difficulties of demonstrating them. We also consider the developmental needs of effectiveness evaluation from the point of view of social work practice. Although the evaluation of social work is in its early stages and the development work itself has not been easy, the social workers in this project are rather enthusiastic about it. What emerged was a clear need for a common agreement about how to measure, document and analyze effectiveness within adult social work. Close cooperation with social work practice has been very important throughout the project. We have noticed that many social work phenomena are difficult to measure (e.g. deprivation, oppression, inequality), which has required a lot of work. In social work, the effects of initiatives can be far reaching, but within the confines of this project we have concentrated upon short-term effectiveness outcomes. Whilst the single-case evaluation model has been quite arduous to employ, we have also noticed its many

advantages. It identifies targets, methods, contexts and intervention procedures and makes systematic measurement possible by the use of new technology. Even at this early stage, we can say that it is an appropriate measure by which to monitor progress and outcomes in the field. As such, it is proposed that evaluation measures are needed that connect to practical social work situations and conditions; development in the field of the evaluation of social work effectiveness is envisaged as being a long-term effort that will require work on many different levels.


Keywords: Social Work; Effectiveness Evaluation; Measure;

O 170

Learning from researcher-public servant networking activities related to evaluation use in public administration
P. Smits 1
1

ENAP-Université de Montréal, Canada

Thursday, 4 October, 2012

11:15 – 12:45

For public administration to innovate in the ways evaluation processes and reports are used, and for evaluation researchers to ground their investigations, networking activities between the two communities carry a high potential. A researcher-public servant activity was developed over six months on the topic of the use of evaluation in public service decision making. It took place in Quebec, Canada. Four steps were taken: search for research funding, identification of the activity topic, development of the activity format, and a strategy for diffusion of results. At each step, two groups were involved: researchers in evaluation and public servants (heads of evaluation directorates in ministries and evaluation professionals). The same group of researchers documented for each step how the networking activity developed, what facilitators emerged, what challenges were encountered, and the nature and mechanisms by which network-related public actions would favor evaluation use in decision-making. The documentation is extracted from a final report and from phone and written exchanges between the actors involved in developing the activity. Results show that evaluation use is favored when there are: internal networking activities between evaluation professionals from various ministries of a public administration; cross-level meshing for the integration of evaluation use at operational and strategic levels; and internal pools of civil servant evaluators rather than private consultants or university researchers, who are less responsive to changes in decision-makers' needs. Some aspects did not attract much attention as essential to favoring evaluation use in public administration, namely: external networking activities between professionals and researchers, continuous training on evaluation use techniques and processes, and piloting innovations in evaluation use. This experience sheds light on two main aspects of evaluation use in public administration: lessons from intra-public administration networking activities, and lessons from external researcher-public servant activities. Keywords: Public administration; Evaluation use; Networking activities; Researcher-public servant;


S2-06 Strand 2

Paper session

Improving EU evaluation practice


O 171

Evaluating the Commission's integrated approach to conflict prevention and peace building
E. Clerckx 1
1

ADE, Evaluation, Louvain-la-Neuve, Belgium

Thursday, 4 October, 2012

11:15 – 12:45

The presentation will concern the methodological framework established to evaluate the Commission's Support to Conflict Prevention and Peace-Building in all third countries over the period 2001–2010. This comprehensive and complex evaluation was conducted in three separate phases with separate deliverables from 2009 to 2011. The evaluation was confronted with a specific challenge in terms of the scope to be covered. Not only did it concern a very broad geographical (all third countries) and temporal scope (ten years), but the subject matter in itself posed a major challenge. Indeed, as with many donors, the Commission's strategy with respect to conflict prevention and peace building called for a so-called integrated approach. It was in this sense part of a paradigm shift whereby the Commission's support to conflict prevention and peace building was closely intertwined with its support for development cooperation as such. Indeed, it potentially spanned a very wide subject area, with interventions from the short to the long term (early warning, rapid reaction, crisis management, structural stability), different sectors (demining, improving the control of small arms and light weapons, but also, more indirectly, economic development, regional integration, education, health, etc.), as well as issues related to a Whole of Government approach. Hence, conflict prevention and peace building became virtually all-encompassing, so that evaluating it seemed like evaluating the Commission's support to development cooperation as such. The evaluation team, together with the European Commission, therefore developed a specific approach to conduct this evaluation. The presentation will focus on the process and methodology put in place, while concluding with examples of the conclusions and recommendations reached. More specifically it will focus on: a presentation of the assignment; the consequences in terms of methodological challenges; the approach developed to tackle these challenges; and the outcomes reached by the evaluation at the end of the process. Keywords: Methodology; Conflict Prevention; Peace Building; Integrated Approach;

O 172

Overcoming the challenges of evaluating EU expenditure programmes


B. Rohmer 1, M. Kuehnemund 1
1

The Evaluation Partnership, London, United Kingdom

Background: The paper draws on the authors' experience, gathered over the past seven years, of evaluating a large number of EU interventions, and in particular three EU programmes in the areas of customs, health and sport. All three of these programmes share similar characteristics: they focus on facilitating transnational co-operation and networking, pursue multiple broad objectives, and use a variety of instruments to target different audiences. These characteristics (which are also shared by numerous other EU programmes in many policy areas) pose particular challenges to robust evaluation. Objectives: The paper aims to share good practices, stimulate thought, and thereby advance the discussion on evaluating EU expenditure programmes. For this purpose, the paper identifies common methodological challenges, discusses approaches and methods that have been applied successfully to (partly) overcome these challenges, and highlights specific issues and ideas that could be addressed in future evaluations. This is particularly relevant in view of the trend towards fewer budget lines, and thus even larger, potentially more complex and heterogeneous EU programmes in the next programming period (2014–2020). Methods: Due to the specific nature of the evaluated programmes, the measurement of their outcomes and impacts is particularly challenging. It is widely recognised that networks don't lend themselves well to (summative, counterfactual) impact evaluation. Instead, the evaluations emphasised qualitative/formative elements. The paper focuses on two methods/techniques that have proven to be useful: Logic models, while difficult to get right for very complex interventions, can help bridge the gap between outputs and ultimate impacts, and help clarify and measure intermediate outcomes and results. However, the structure of traditional logic models often needs to be adapted to make them useful and meaningful for this type of programme. In particular, external elements often affect the results, and therefore need to be duly considered and acknowledged. Case studies can be a tremendously useful tool to understand and assess the complex and diverse reality of the activities funded under these programmes. The appropriate design of such case studies is paramount – sample selection, targeting and the setting of appropriate research objectives for each case study need to be carefully considered.


Conclusions: The methods discussed above help to overcome some of the challenges inherent in evaluating such programmes. However, in order to make further progress towards better evaluation of EU expenditure programmes, several issues warrant further attention, including: (1) Can (semi) standardised indicators and/or benchmarks for the impacts of EU networks be found? (2) Partly due to budgetary limitations, evaluations often have to limit themselves to gathering data from participants / grantees. Is there scope for engaging ultimate beneficiaries, even though this group tends to be very wide and diverse, and is often unaware of the programme itself? (3) Recent DG REGIO guidance encourages evaluators and commissioners to decompose the programmes down to their operational instruments so as to make them more evaluable. Should this trend be extended to EU programmes in other policy areas? Keywords: Programme evaluation; European Union; Logic models; Case studies; Complexity;

O 173

Is the European Union learning from evaluations? Problems and solutions


Thursday, 4 October, 2012
11:15 – 12:45
L. Niklasson 1
1

Linköping University, Political Science/IEI, Linköping, Sweden

Lars Niklasson is Associate Professor in Political Science and teaches courses on Public Administration, Evaluation Methods and European Politics. He is currently doing research on the industrial policies of the EU. He has a background as an evaluator of Swedish programs and of programs by the EU. The European Union supports the development of firms, regions and individuals through programs in areas like Cohesion Policy, Innovation Policy, Transportation Policy and Education Policy. Evaluation and learning from experience are very important for the development and usefulness of these policies. The Commission and national authorities have high ambitions, but the evaluations are often limited and don't provide as much information as they should. The design of evaluations and the use of these evaluations in a learning context could be improved. The paper will discuss the design (program theories) of Cohesion Policy and Innovation Policy, the goals and the instruments chosen to achieve the goals, especially the feedback mechanisms and learning. It will identify problems in the implementation and discuss solutions in terms of more analytical evaluations with a greater focus on causal analysis. The paper will be of interest to people in the EU and national authorities who commission evaluations and use the results. It will also be of interest to evaluation practitioners and the people who are involved in projects in the two policy areas. The paper is based on professional knowledge in policy analysis and evaluation methodology. It will address issues of impact analysis with qualitative methods. The topic is relevant for the overarching theme of the conference since many of these policies are implemented in networks with the EU and national actors (multi-level governance). The paper contributes to enhanced evaluation knowledge and skills by analyzing program theory and the use of counterfactuals in evaluations, especially in impact analysis with qualitative methods, which is an area where evaluation practitioners sometimes invent their own methods rather than using academic methods such as comparative case studies. The creativity and innovativeness lie primarily in asking fundamental questions about the policies and suggesting new ways to overcome problems. The public interest will be advanced by more structured thinking on evaluation, learning and the development of these policies. The ambition is to improve the impact of EU spending on these policies. Keywords: European Union; Cohesion policy; Innovation policy; Impact analysis; Qualitative methods;


S5-23 Strand 5

Panel

Monitoring and evaluating organisational culture: searching for the right questions
O 174

Monitoring and evaluating organisational culture: searching for the right questions
Thursday, 4 October, 2012
11:15 – 12:45
I. Davies 1, S. Immonen 2, C. Lusthaus 3
1 2

Capacity Development Network, Paris, France CGIAR ISPC Secretariat, Rome, Italy 3 Universalia, Montreal, Canada

Names and concise bios of presenters: Ian Davies (Chair and contact person): Ian Davies provides consulting services in corporate governance, finance, management and accountability to executive and political levels of governments, and to boards and executives of public and private bilateral and multilateral organisations in developing, transition and developed economies. He holds a post-graduate degree in public administration in management and evaluation. Sirkka Immonen: Sirkka Immonen is a senior agricultural research officer at FAO with 12 years of experience in program appraisal, monitoring and evaluation of agricultural research. Charles Lusthaus: For over 30 years Dr. Lusthaus was a Professor in the Department of Administration and Policy Studies in Education at McGill University. Besides his professorial responsibilities, Dr. Lusthaus was the founding Director of the Centre for Educational Leadership. The Centre provided training and organizational development services to the English educational community in Quebec. He managed the Centre for 20 years. Rationale: This panel session will present the case of the Food and Agriculture Organization of the United Nations (FAO) as an example of considering the monitoring and evaluation of organizational culture as part of the performance agenda, in the context of internal and external networks of actors. Objectives: The objective of the panel is to present the approach that was used to develop and implement a monitoring and evaluation practice on organisational culture change in the FAO, in order to highlight the importance of process over the use of off-the-shelf indicators. The panel will also address the question of resistance to process by networks favouring off-the-shelf indicators and explore the possible reasons for this. Narrative and Justification: It is commonly assumed that organizational performance is driven by organizational culture. Organizational culture, in turn, relates to the set of norms, values and attitudes that affect group and individual behavior within an organization. Culture in an organization both affects thinking and behaviors and is affected by them in a dynamic way. The linkages between networks of internal actors and other organisations influence the achievement of FAO's objectives, and they are influenced by FAO's culture and the perceptions others have about that culture. Culture change has been part of FAO's reform since 2008, leading to the development of an Internal Vision for FAO and a Culture Change Strategy. An internal team and external experts were engaged in developing a framework that could be used for monitoring culture change at FAO. The framework needed to be based on principles that reflect accumulated evidence about what is most likely to succeed in developing and implementing useful and sustainable monitoring, including performance measurement and reporting, in public organizations. The panel will discuss principles of monitoring culture change, prioritization of key aspects for monitoring, issues of methodology and data collection, and institutionalization of a monitoring process so that monitoring supports the process of change. Keywords: Organisational culture; Monitoring culture change; Resisting indicators; Evaluative process;


S2-45 Strand 2

Panel

The Roles and Complementarity between Monitoring and Evaluation Functions
O 175

The Roles and Complementarity between Monitoring and Evaluation Functions


Thursday, 4 October, 2012
11:15 – 12:45
P. Andreo, L. Maier, E. Georgieva, D. Mouqué, I. Hartwig
Rationale: Monitoring and evaluation (M&E) are key management tools for the European Commission, other international organisations and national public administrations. Monitoring and evaluation are designed for different purposes, but complementarity between the two instruments is key to ensuring adequate support to the decision-making and reporting system, which goes beyond financial execution and delivery of outputs. Planning, timing and scope (projects, programmes, policies/outputs, results, impacts) are key elements to be taken into account when designing M&E systems. However, experience shows that there is sometimes confusion and overlap between the two functions, and their roles are not always defined in a clear manner. Objective: To discuss the extent to which monitoring systems should or could monitor project/programme results beyond outputs and how this affects subsequent evaluations. The panellists will also debate the extent to which M&E should be separated within an organisation and how they should be coordinated in order for decision makers to make best use of both tools. Chair: Pedro Andreo, Head of Internal Audit Capability and Evaluation in the European Anti-Fraud Office (OLAF) since June 2012. Previously, Pedro was Head of Sector for Evaluation in DG Enlargement from January 2009 to May 2012. Pedro has also worked as an evaluation officer in DG ELARG and has gained substantial professional experience in SFM audit with the European Court of Auditors. His professional experience also includes roles as manager of EC budget support programmes and internal auditor at FAO-UN. Contribution to the panel (5 min): Facilitate and moderate the panel discussions and share professional experience in the topics discussed. Panellist 1: Leo Maier. Short bio: Leo is head of the DG AGRI evaluation and studies unit. He has held several positions in DG AGRI in the areas of environment, forestry, GMOs, genetic resources, CAP reform and agricultural research. He started his career as a university lecturer and spent almost 10 years at the OECD, where he worked primarily in agricultural policy and trade analysis. Contribution to the panel (10 min): Leo will draw on his experience with the evaluation of rural development policy, in particular as concerns the monitoring and evaluation framework for rural development, the key role played by output, result and impact indicators in this context, and the discussions that are currently taking place on how the link between monitoring and evaluation can be improved for the next programming period (2014–2020), including the importance of stakeholder consultations in this process. Panellist 2: Elena Georgieva. Short bio: Elena has been working as an Evaluation Officer at the EU Commission, DG Enlargement, since October 2010. She is task manager for a number of evaluation assignments, including country programme evaluations and multi-country thematic evaluations. Elena is a graduate of the University of Maastricht (MA in European Public Affairs). Contribution to the panel (10 min): Elena will contribute to the discussion by sharing her experience on M&E systems gathered while performing an internal evaluation of the existing monitoring systems in the context of EU pre-accession assistance to the Western Balkans and Turkey.
Elena is also participating in DG Enlargement's internal working group on monitoring and evaluation, which is currently designing the monitoring and evaluation system of the future EU pre-accession instrument. Panellist 3: Daniel Mouqué. Short bio: Daniel has been working in the evaluation unit of DG Regional Policy since 2006. He has taken the lead in introducing counterfactual and control group methods to the evaluation of regional and cohesion policy. He edits the impact of cohesion policy chapter in the Cohesion Report, which draws together monitoring and evaluation data to assess progress and impacts. He is also responsible within cohesion policy for the monitoring and evaluation of measures supporting enterprise, tackling urban deprivation and promoting the inclusion of Roma. Contribution to the panel (10 min): Daniel will draw on his various experiences in cohesion policy to assess the complementarities and differences between monitoring and evaluation. Panellist 4: Ines Hartwig. Short bio: Ines is an evaluation officer in DG Employment, in the unit dealing with impact assessment and evaluation. Contribution to the panel (10 min): Ines will contribute to the panel with her experience of monitoring and evaluation of interventions in the field of employment policy.


S2-07 Strand 2

Paper session

Evaluating innovation
O 176

A Finnish workshop as a collaborative audit approach to meet the demands of complexity: the case of innovation
T. Oksanen 1
1

State Audit Office, Performance Audit, Helsinki, Finland

Thursday, 4 October, 2012

11:15 – 12:45

According to its strategy, the State Audit Office of Finland (hereafter NAO) is going to report to the Parliament of Finland in 2013 on the state of the Finnish innovation system. The NAO's audit of R&D evaluations in Finland, presented at the evaluation conference in London in 2006, found that Finland has a strong but functionally one-sided R&D evaluation culture, which is dominated by the development and knowledge perspective of evaluation, without a real accountability perspective. In the NAO's presentation and in the final report of the audit it was suggested that the gap between evaluation and politics in Finland cannot be closed without a new kind of evaluation culture, a new economic impetus, a new evaluation politics and a better planned evaluation system. Because nothing decisive has happened in this area in Finland since 2006, it seems that this is the most important message the NAO is able to report to the Parliament of Finland in 2013. On the other hand, firstly, the audit organisation itself is surrounded by the problems mentioned above. In a genuinely complex world the audit office cannot be a real outsider to the knowledge system but is part of it. This means that the NAO does not have any "Eye of God" with which to catch the problems of the innovation system mentioned above. Secondly, the NAO (like any other audit office) does not have the resources of its own to make policy-relevant evaluations of the Finnish innovation system. Thirdly, nor does it have enough time and other resources to build up a high-level meta-evaluation culture and expertise within the office, at least not before the reporting in 2013. In this situation the NAO has to take a proactive role. It has to challenge both the official knowledge of the Finnish innovation system and the traditional audit methods, and complement them with new co-operative audit approaches and methods. To organise and pilot this new approach, the audit office arranged two workshops in 2011 ("Real knowledge management of the Finnish innovation system?" and "The quality and effectiveness of the Finnish innovation system"). The themes of the workshops were as follows: What is the situation in Finnish R&D and innovation knowledge management just now? How do the different pieces of the knowledge management chain (production, organising, synthesizing, communicating and approving knowledge, as a life cycle of knowledge) contribute to or restrict each other? What is working in the Finnish innovation system, what is not working and what should be done? Because participation in the workshops was active and the theme of R&D governance was highly prioritized by the participants, the NAO decided to continue with this new dialogical way of working and organized a brainstorming session in March 2012 ("The governance of the Finnish education, research and innovation systems"). The most important challenge for the pilots mentioned above can be defined as follows: how to balance the traditional audit values (reliability, validity) with the partly new values of the workshops (openness and relevance), and how to reconcile the latter with the public authority and status of the public audit office. The audit office, for example, is not able to use judicial means against actors that are unwilling to participate in a workshop. Nor does it have official ways to oblige actors to produce new or better information for the workshop. For the audit office the most important way to control the problems of the new, informal co-operation is publicity.
Publicity means that actors control each other basically through the demands of trust, legitimacy and openness, not through obligations. In my presentation I would like to analyse different points of view on adopting more dialogical approaches to performance audit in (post)modern society. Keywords: Collaborative audit approach;

O 177

Future oriented system assessment to aid strategic decision-making in complex socio-technical environments
K. Hyytinen 1, M. Nieminen 2
1 2

VTT Technical Research Centre of Finland, Innovation studies, Espoo, Finland VTT Technical Research Centre of Finland, Innovation studies, Tampere, Finland

This paper introduces a systemic and future-oriented evaluation approach designed to meet the challenges of the changing innovation environment. The introduced framework combines different R&D evaluation methods to support strategic decision-making in networked and complex socio-technical environments. The approach interfaces with other evaluation approaches that have been developed to strengthen interactive elements in evaluation. Our paper discusses the special characteristics and additional value our approach has compared to these other approaches. The nature of innovation as well as the scope of innovation policy is transforming in a way that has not been taken into account in traditional evaluation practices. Innovation policy is increasingly a horizontal and network-based policy field, which calls for broader approaches in evaluation (e.g. Georghiou 1998, Arnold 2004, Edquist 2006). While the linear model of innovation has been largely replaced by a systemic view of the innovation process, current evaluation methods are still largely based on the idea of a linear innovation process. Typically, innovation policy evaluation has also lacked elements of learning and foresight to support strategic priority setting and steering, even though the growth of complexity requires future-oriented approaches (e.g. Kuhlmann 2001, OECD 2005). Complexity in decision-making means that evaluation activities need to be interlinked with strategic management and with a more systemic approach to governance and target setting. The presented approach is designed to meet these challenges.

Our approach interfaces with other evaluation approaches that have been developed since the 1990s to strengthen participatory and interactive elements in evaluation. Examples of such approaches are empowerment evaluation (Fetterman 2001), participatory evaluation (Cousins & Earl 1995) and developmental evaluation (Patton 1994). A common feature of these approaches is that they aim to support better utilisation of information by strengthening interactivity (Patton 1997). In addition, all the approaches see evaluation as a process that involves multiple actors. Traditional innovation policy evaluations differ from the participatory approaches because they usually lack the interactive element.


In relation to the above-mentioned approaches, future-oriented impact assessment has some differences and added value for theoretical and practical discussions on foresight and impact assessment: Firstly, it seeks to integrate foresight activities and traditional impact assessment methodologies as a systematic part of the strategic and operational R&D management process. Secondly, the approach supports the mixed and complementary use of information and data, and it systematically integrates different methods into the strategic management toolbox of R&D.

Thursday, 4 October, 2012

11:15 – 12:45

Thirdly, strategic management is a continuous and iterative process in continuously and rapidly changing environments. The approach emphasises the continuity of the foresight and evaluation processes as part of the strategic development and management processes. Fourthly, it incorporates the perspective of implementation in the evaluation process. Our approach addresses these challenges by complementing foresight and evaluation with elements of societal embedding and transition management (e.g. Kivisaari et al. 2009; Loorbach & Rotmans 2010). Keywords: System assessment; Impact assessment; R&D evaluation; Foresight; Societal embedding;

O 178

Evaluation of Innovation in Rural Development Programmes


K. Pollermann 1, P. Raue 1, G. Schnaut 1
1

Johann Heinrich von Thünen-Institut, Institute of Rural Studies, Braunschweig, Germany

Facing challenges like economic problems, demographic change or matters of renewable energy, one crucial issue in Rural Development Programmes (RDPs) is innovation. To maximize the benefits of innovation, good performance is required in the different steps of an innovation process, including efforts to: a) create ideas, b) try out projects and c) circulate success, in the sense of distributing good practice to other rural areas. The evaluation of innovation has to address these three steps. Thereby the new conditions of a networked society are relevant for the innovation processes themselves as well as for the evaluation process. In the session, the possibilities and difficulties surrounding a corresponding evaluation design will be discussed. The findings of the evaluation of Rural Development Programmes in six federal states in Germany are used to show a possible approach to evaluating innovation. One part of the Rural Development Programmes funded by the European Union which explicitly addresses innovation is LEADER: a bottom-up oriented, participatory approach based on the cooperation of local actors in rural areas. Stakeholders from different institutions and origins come together in a Local Action Group (LAG), as a kind of public-private partnership, and make decisions about the financial support for projects. A general assumption in this funding programme is that the networking and cooperation of stakeholders from different sectors plays an important role in creating new ideas and advancing innovations. LEADER provides opportunities to realise projects which try out new solutions and meet the specific needs of the region. A variety of LEADER-specific regional, national and international networks exist to foster the exchange of knowledge. In this context it is also interesting to observe how the potential advantages of a networked society, with its fast and decentralised information exchange, are used to encourage innovation for rural development. To explore the functionality of innovation, case studies have been conducted in nine regions and three surveys were carried out with written questionnaires (project initiators; members of the LAGs; LAG managers). One result is that the possibilities of funding experimental or innovative projects via LEADER depend very much on the extent to which the RDPs are able to provide a suitable framework to fund projects outside the standard menu of measures. Although in theory innovation plays an important part in LEADER, in practice it is quite limited. This finding is underpinned by the results of the survey of LAG managers, who also noticed a deterioration in comparison with the possibilities of the previous funding period (LEADER+). Some federal states in Germany have already made improvements within this funding period because of these problems. Finally, the future role of new information technologies and networks in the evaluation process will be considered: What is the potential for obtaining better evaluation results and for faster utilisation of evaluation results to improve the programmes? Keywords: Rural development; Leader; Innovation; Network;

O 179

A Clean break? Evaluating innovation policy instruments through network analysis. The case of the Finnish environment and energy cluster
K. Lähteenmäki-Smith 1, T. Jacobson 2, J. Jussila 2
1 2

Ramboll Management Consulting, Helsinki, Finland Cleen Ltd, HELSINKI, Finland

Innovation policy is one of the policy fields where projects, programmes and partnerships have become the main mode of organization and the means of policy implementation. The paper proposed here is based on a functional network analysis of CLEEN Ltd, The Strategic Centre for Science, Technology and Innovation (SHOK) of the Finnish energy and environment cluster. The main research questions posed relate to the interfaces between project, programme and policy levels, as they seek to consolidate national and international strategic goals and new modes of innovation governance and implementation. The role of new organizational modes as networks of projects and as carriers of change is investigated. The role of networks as change agents is explored as a qualitative shift towards new governance modes, requiring

new analytical tools and evaluation approaches. The main goal of the analysis is to better understand what is required of evaluation methodologies in these organizations, which represent a means of renewal in innovation policy. A key question is how these types of network-based instruments differ from the more permanent innovation policy structures, R&D programmes and traditional project funding instruments, and how these differences should be taken into account in evaluation. A starting point of this study was the need to better understand the functions and linkages of these types of innovation policy instruments. The aim was to map and analyse the state of affairs in the key functional networks of Cleen Ltd and to identify niches in this intertwined network in order to better steer, manage and lead the networks. The aim of the project was to contribute to developing a working model that could lead to fully utilizing the whole CLEEN network and its resources, competence and expertise. This is seen as essential in developing a more competitive R&D&I practice, thereby also balancing the operational and strategic levels in the methodological setup. The theoretical background relies largely on social and actor-network theory, as well as theories of knowledge brokering. The key functions of knowledge brokerage acknowledged in the literature include informing, connecting, matchmaking, pursuing focused collaboration, establishing strategic collaboration and, finally, building sustainable institutions (Michaels 2005). Each of these, and their significance to the eventual shift from a project to a network culture, will be discussed, together with the most appropriate methods and questions to be posed in order to evaluate such network roles.


Thursday, 4 October, 2012

11:15 – 12:45

The analysis will seek to provide us with a possible roadmap to better designing evaluations with a network governance focus. It will also help develop evaluative practice in the innovation policy area, seeking to balance the transformative and instrumental roles of networks, as well as the strategic versus operational interests. References: Lähteenmäki-Smith et al. (2012): A Cleen break? New tools for innovation management in the Finnish environmental and energy cluster. Project report. Michaels, S. (2009): Matching knowledge brokering strategies to environmental policy problems and settings. Environmental Science and Policy 12 (2009), 994–1011.


S3-27 Strand 3

Panel

Evaluating Research Excellence for Evidence-based Policy: The Important Role of Organizational Context
O 180

Evaluating Research Excellence for Evidence-based Policy: The Important Role of Organizational Context
Thursday, 4 October, 2012
11:15 – 12:45
C. Duggan 1, E. Mohammed 2, S. Mull 3, Z. Ofir 4, T. Schwandt 5
1 2

International Development Research Centre, Evaluation Unit, Ottawa Ontario, Canada International Institute for Environment and Development, London, United Kingdom 3 Global Development Network, New Delhi, India 4 International Evaluation Advisor, Johannesburg, Republic of South Africa 5 University of Illinois Champaign, Department of Educational Psychology, Champaign, USA

Increasing calls for evidence-based policy and rigorous research are of great significance for researchers and organizations conducting or funding research. Research evidence that is meant to build policy and practice must be excellent. While excellence is desirable in any type of research, arguably the stakes are higher when findings are meant to influence decisions that affect people's well-being. All concerned with evidence-based or evidence-informed policy therefore have the responsibility to interrogate current definitions and measures of research excellence and, in particular, its intersection with research quality, which is most often defined as research that is methodologically rigorous and scientifically robust. This intersection between excellence and quality is not simply an ivory tower issue, but one with important implications for practice. This panel is the second of two IDRC-chaired panels looking at research excellence. It expands upon the foundational discussions raised in the "What is Excellent?" panel, noting that although we are living in a more globalized and networked world where research is generated and used by many, differing contexts continue to bedevil the search for common standards. The four panelists contend that the evaluation of research requires not only a focus on quality and knowledge-to-action theories. It also requires deep understandings of the sites in which research is incubated. The panel thus addresses a practice-based evaluation dilemma: How does organisational context influence the understanding and evaluation of excellence in research? And what does this mean for evaluating the impacts of research? While located in quite different organizational contexts, the papers address several cross-cutting themes such as mission, norms and culture. The paper by Thomas Schwandt draws on recent developments in cognitive science to argue that understanding research quality, research excellence, research impact and research use is not just a matter of learning a set of abstract principles or free-floating best practices that are then applied to practice. It is a matter of person-environment interaction. Rather than a person being in an environment where knowledge generated elsewhere is applied or put to use, person and environment are viewed as parts of a mutually constructed whole. In her paper, Zenda Ofir illustrates this concept by presenting the results of a recent strategic review of current practices in cultivating research excellence by the International Development Research Centre (IDRC) headquartered in Canada. The paper highlights the drivers for different understandings of research excellence, the variety of frameworks and models used to evaluate it, and the implications for practice in organizations that aim to promote evidence-based policies and strategies. David Dodman's paper describes the outcomes of a process employed by the International Institute for Environment and Development to develop a vision of how excellent policy and action research that contributes to sustainable development can be carried out, measured and enabled. The paper discusses how the specific characteristics of excellent research in international, policy-oriented contexts must balance principles, lessons from practice, meaningful stakeholder engagement and issues of rigour and reliability. Savi Mull of the Global Development Network discusses GDN's different quality review processes and subsequent results from independent evaluations.
The paper addresses the implications of various review criteria employed for judging academic excellence, including publication potential, juxtaposed with the extent to which the research output addresses policy questions; and touches upon the challenges of reviewing research capacity, research quality and policy relevancy. Keywords: Organizations; Evaluating Research; Research Quality; Knowledge translation;


S5-05 Strand 5

Paper session

Evaluation and governance II


O 181

Evaluating the implementation of open government systems: the Brazilian case in comparative perspective
F. Rigout 1, C. Cirillo 1
1

Plan Politicas Publicas, Research, Sao Paulo, Brazil

Thursday, 4 October, 2012

11:15 – 12:45

The Open Government Partnership, a multilateral initiative founded in September 2011 by Brazil, Indonesia, Mexico, Norway, the Philippines, South Africa, the United Kingdom and the United States, has now expanded to include 53 countries that have agreed to make their administrations more transparent by guaranteeing free public access to state-held information. In this paper we explore the ongoing Brazilian experience with the regulation of citizens' constitutional rights to access government files, and the development of technical means for universalizing such access. The Freedom of Information (F.I.) Bill was passed on November 18, 2011, after two and a half years of parliamentary debates and intense media attention regarding its consequences for the disclosure of military intelligence concerning human rights violations during authoritarian periods. The law, however, has a much broader aim. It mandates that public data under the custody of all branches and levels of government be available instantly; declassifies all but select military and Foreign Office documents; bans justification requests; and imposes severe penalties on gatekeeping. Existing transparency tools include the online disclosure of all public spending, the introduction of ombudsman offices, electronic government portals and participatory budgets. Still, there are several agencies, state governments, archives and a wide swath of municipalities with very little in place in terms of systems for accessing public data, posing challenges to the full implementation of open governments as the F.I. law prescribes. This uneven institutional terrain evidenced the need for developing performance evaluation instruments designed to identify best practices and make compliance possible even for the most deprived bureaucracies. UNESCO has supported this effort by offering technical support via evaluation projects the authors have participated in. The paper will describe the results of our research to assess the readiness of government institutions to deliver public information, discuss the main findings and conclude with the placement of this experience in international perspective. Its empirical section will summarize the findings of three studies: 1. An attitude assessment based on interviews with civil servants regarding the merits of disclosing state-held information to the public, carried out in 2011. This qualitative study discusses institutional and cultural rewards for secrecy in government bureaucracies. 2. An evaluation toolkit with indicators for determining the readiness of government record-keeping institutions to live up to the newly imposed accountability demands. This study is in progress at the National Archives and will be finalized at the end of May 2012. 3. Results from a survey of the entire federal executive branch regarding their achievements and challenges in the implementation of public access to information tools, under UNESCO sponsorship. With the presentation and discussion of these results we hope to contribute findings, but more importantly provide evaluation tools and indicators that can be applied in other nations facing similar challenges. Fabrizio Rigout, Ph.D., is a sociologist who has worked in public sector evaluation since 2007. Camila Cirillo is a consultant in policy evaluation at Plan Politicas Publicas. Keywords: Freedom of Information; Open governments; Accountability;

O 182

Institutionalization Of Evaluation In Finland


P. Ahonen 1
1 University of Helsinki, Political and Economic Studies, Helsinki, Finland

The purpose of the paper is to analyze the adaptation of global models and scripts of evaluation in Finland, applying an institutionalist (neo-institutionalist) theoretical perspective. The objectives of the paper comprise elaborating upon four working hypotheses. (1) Evaluation has institutionalized itself in Finland by building upon selective diffusion and substantial modification of global categories, classifications, conceptual boundaries and elements of the language of evaluation, accomplishing a unique and anything but homogeneous national hybrid. (2) In Finland, the agents of evaluation have come to bear numerous, including hybrid, types of agency for their principals, and, respectively, they represent remarkably complex and diverse types of adaptations of global standards, principles and approaches of evaluation. (3) In Finland, evaluation orients itself in some respects towards enhancing performance and in others towards supporting legitimacy, with only loose coupling with the performance aspect. (4) Periods of radical institutional change in evaluation pursued in Finland (especially 1987–1991, 1994–1997 and since 2008) bear witness to the diffusion of further global models and scripts deemed suitable, and to their adaptation to the national institutional circumstances. The paper continues from what earlier empirical studies by the author and his associates have rendered on the state of evaluation in Finland, with special reference to the country's national central government. Keywords: Legitimacy; Loose coupling; Performance; Agency; Language;


O 183

Annual reporting as an instrument for building evaluation capacity


K. Dyrkorn 1
1 Ramboll Management AS, Oslo, Norway


This abstract proposes an article that focuses on annual reporting in the Norwegian school sector, and poses the question of whether this reporting contributes to building evaluation capacity. Since 2009 local school owners, that is municipalities and county municipalities, have been required by law to report on the state, or status, of their respective schools to local authorities. This reporting constitutes one of several elements in a national quality assessment system. School owners are required to report on schools' results when it comes to learning outcomes, learning environment and completion of education. The Directorate of Education in Norway has developed web-based tools that school owners may use in their reporting. These tools include a system for downloading schools' results, and for aggregating and analysing the results. A relevant question in this respect is whether this results-based reporting contributes to quality development in the Norwegian school sector.


And further, does it contribute to building evaluation capacity? This points us toward the concept of evaluation capacity, which in fact has received a good deal of attention in the evaluation field. Some claim that studies have focused more on how to build it than on what it actually is (Nielsen, Lemire & Skov, 2011). This article will explore the contents of the concept, and use the case of annual reporting in Norway as an empirical base. The purpose of annual reporting can be understood as contributing to the improvement of schools in Norway, and this is to be achieved through a process of reporting on the state of the schools. Evaluation capacity, as the capacity to assess and evaluate results, is therefore a means to an end. However, it is a vital means, as it constitutes the link between writing a report about a situation and knowing what to do about it. The requirement of annual reporting is not only relevant to the field of evaluation capacity building and quality development, but also illustrates national authorities' possibilities in a networked society when it comes to providing tools and systems for monitoring and evaluation. It is also an aim for the author to compare the practice in Norway to practices in other comparable countries. Keywords: Evaluation capacity building; Monitoring and evaluation; Education;

O 184

Developing Evaluation Standards in Uganda


F. Mugerwa 1
1 Office of the Prime Minister, Uganda

The development of evaluation standards in Uganda is part of a wider effort to institutionalize evaluations within public policy processes. In Uganda, the majority of evaluations are commissioned and managed by Development Partners. Government does not yet undertake regular evaluations of public policies and programmes. Only an estimated ten percent of public investments are currently covered. This means that lessons about which investments are successful and which are not are often not being learned, and hence policy making is not benefitting from evidence. In accordance with Article 108A of the Constitution of the Republic of Uganda, the Office of the Prime Minister (OPM) has taken leadership in the monitoring and evaluation of Government policies and programmes. Against this background, OPM has set up a Government Evaluation Facility (GEF). GEF's role is firstly to design, conduct, commission, and disseminate evaluations on public policies and major public investments, as directed by Cabinet; and secondly, to oversee improvements in the quality and utility of evaluations conducted across Government at a decentralized level. The facility is guided by an Evaluation Sub-Committee (ESC), a sub-committee of the National Monitoring and Evaluation Technical Working Group comprising senior technical officers from public and private institutions. The ESC is also overseeing the development of evaluation standards, which are currently being prepared in close cooperation with the Ugandan Evaluation Association (UEA), OPM and other stakeholders, including civil society. The standards shall be used by evaluation managers/decision makers to ensure that the growing number of evaluations conducted in Uganda conform to set standards. The assumption is that conforming to these standards will result in high-quality evaluations. The standards are also expected to help compare the effectiveness of different programmes/policies that have been evaluated against the same standards. In addition, they will be used for setting rules of conduct and promoting accountability and good practices among evaluators. Faced with the task of developing standards, OPM wants to use this opportunity to consult with other professionals and evaluation societies on the following issues: 1. In a country where the professional evaluation community is only emerging, the process of standards development could be an opportunity to gain clarity on institutional roles and strengthen the emerging evaluation association, which is still in its infancy. 2. The extent to which standards will help to improve the quality of evaluations might be overrated. What are the experiences from other evaluation associations? 3. The concepts of guidelines, principles and standards are sometimes confused. In Uganda, a choice will have to be made on whether to adopt the more general evaluation principles or more detailed quality guidance. 4. The question has been asked why to develop standards for Uganda rather than adopting existing international standards. What should be specific about Ugandan standards? 5. A major issue for discussion is who should approve and enforce the standards in Uganda. OPM invites the participants of this conference who have experience in developing and using standards in their countries to provide their views with regard to the above issues.
Note: Of a total of 85 evaluations conducted in Uganda between 2005 and 2008, only ten were commissioned and/or co-managed by the Government of Uganda (source: Office of the Prime Minister, 2009). The Public Investment Plan reveals that on average 60 projects close each year. On average over the period 2005–08 (including development partner-financed and managed evaluations), 6 evaluations were being conducted per annum. This equates to 10% coverage of projects by evaluation.

S2-17 Strand 2

Paper session

Innovative methods in evaluation


O 185

Village mapping as a tool for progress assessment and development intervention: Case study of Maniema Province, Democratic Republic of Congo
S. E. Yakeu Djiam 1

1 African Evaluation Association, Board Member Representative of Central Africa, Yaounde, Cameroon

Thursday, 4 October, 2012, 11:15 – 12:45

The present paper derives from an external technical assistance mission conducted from April to July 2010 with the Community Action Program for Sustainable Development (PACDEV) of Care International in the Democratic Republic of Congo (DRC). After more than five years working for community recovery with local communities in Maniema Province (a post-conflict environment), Care decided in the third phase of the program to map progress in order to reflect on constraints and better tackle challenges in the area of intervention. To this end, village mapping was identified as a tool for the participatory planning of community development projects. With a triangulation of techniques, tools, sources and actors, program staff were trained in the MARP method (Méthodes Accélérées de Recherche Participative) using a learning-by-doing approach. The same process was then developed with local communities in 27 villages of Kasongo and Wamaza districts. According to the organization's points of interest, progress and future actions were reported in five (5) main domains of intervention: agriculture, livestock, credit and saving, infrastructure, and local governance. Among others, the following lessons were reported: (i) the village mapping report (local development plan) is a reference document for decision making, mostly in post-conflict areas, for progress assessment and the prioritization of future actions; (ii) program activities deriving from the base (beneficiaries) give actors more incentive to feel comfortable as owners of the intervention (appropriation); (iii) participative methods offer more than one opportunity for all members of a community to get involved in the development process (empowerment); (iv) participative methods constitute a force for conflict resolution and the establishment of peace in areas where there is usually no common dialogue (Muslim communities); and (v) the Village Information Centre (community radio) is a key tool in boosting people's confidence. Keywords: Village mapping; Progress assessment; Development intervention;

O 186

Novel evaluation concept for clusters and networks – Prerequisites of a universal and comprehensive evaluation system
S. Kind 1
1 iit Institute for Innovation + Technology c/o VDI/VDE Innovation + Technik GmbH, Berlin, Germany

During the past 15 years clusters and innovative networks have gained more and more importance as an element of the economic development and innovation strategies of the European Union and its Member States. After years of cluster promotion and support, the effects and impacts of clusters and networks require greater attention. Policymakers and programme owners are increasingly searching for information on how the desired effects (impacts) have been achieved and what kind of changes in programme schemes lead to more efficient outcomes. Thus, the evaluation of clusters and networks is becoming increasingly critical and plays an ever more strategic role. A concept, or evaluation design respectively, finds itself facing several challenges: a common evaluation system should be applicable to clusters and networks throughout Europe. Therefore it is mandatory to take into consideration the prerequisites of clusters within their individual policy and geographical contexts as well as key characteristics such as: research-driven versus industry-driven; a small or large share of public funding; structure; governance; age and stage in the cluster life cycle; and size. The evaluation system that will be presented was developed by the iit Institute for Innovation and Technology in close cooperation with cluster policy makers, programme owners and cluster managers. (It was primarily developed in the context of a project for the Ministry of Economic Affairs, Transport and Innovation of the Free and Hanseatic City of Hamburg in 2011.) It provides a practical approach applicable to different types of cluster programmes, clusters and networks throughout Europe. The presentation will introduce this novel cluster and network evaluation concept and focus on the following topics: a description of the considerations the concept was based on in order to tackle these challenges; a description of the different dimensions of cluster policy intervention and the corresponding evaluation subjects; an introduction of the evaluation model adapted to cluster and network requirements; insight into the indicator categories and indicators applied to evaluate cluster policy, the cluster management organisation and cluster participants; a description of methods and some exemplary evaluation results; and the pros, cons and challenges of this approach. The presentation will be concluded with a brief discussion with the audience. Keywords: Cluster; Network; Evaluation; Methodology; Cluster policy;


O 187

Theory based evaluation in international development – evolution or revolution?


D. Loveridge 1
1 Independent Consultant, Dar es Salaam, Tanzania

Thursday, 4 October, 2012, 11:15 – 12:45

In recent years, there has been a growing interest within some areas of the international development community in evaluation approaches based on theories of change (ToC). Linked with discussions on complexity, systems thinking and the importance of understanding context and its influence on change processes, theory of change appears in blogs, discussion lists and new research papers every few weeks. Some development organisations are now requiring theories of change to be articulated as part of programme design processes. Yet the idea of theory-based evaluation has been around since the early 1970s and has not gathered a substantial following to date. Proponents have highlighted the benefits of ToC approaches for improving design, implementation and evaluation. However, others have suggested that ToC approaches have not gained favour because the interest in variables and statistical associations has led to a disinterest in understanding processes and causality; because differing epistemological and ontological positions among academics have led to a lack of critical engagement across perspectives as they hold on to their established theoretical positions; and because mechanism-based approaches are seen as an avenue to generate better grounded theories as opposed to grand theories. Is the current interest in ToC and international development evaluation an evolution or a revolution? This paper examines possible reasons for the growing interest in ToC within the context of international development, some of the opportunities for it to grow and some of the challenges that the approach may face, or continue to face, in being adopted. The discussion draws on the author's research examining the theories of change underpinning the Government of Tanzania's initiatives to develop public sector monitoring and evaluation capacity, as well as her practical experience on development programmes. The paper is relevant to current debates and discussions within international development evaluation, and to the conference theme, since international development evaluation spans different networks and societies and different beliefs and assumptions about how development, or change, occurs. If more development organisations require ToC to be articulated as part of their organisational procedures, there is a growing need for commissioners and practitioners to understand the potential benefits and difficulties. Keywords: Theory based evaluation; International development evaluation;

O 188

Theory Based Evaluation – a wealth of approaches and an untapped potential


M. Rich 1
1 European Commission Directorate General Regional Policy, Evaluation, Brussels, Belgium

The Theory Based Evaluation approach was developed on the basis of Pawson and Tilley's work on Realist Evaluation in the late 1990s. Since then, many works have refined the approach, dressing it up with new names: Theory of Change, Contribution Analysis, the Elicitation Method or the General Elimination Methodology are some of them. Theory Based Evaluation is particularly relevant to policy makers and programmers as it explains the rationale for an intervention to be effective (or not) in a given context. It can be used before, during and after the implementation of a programme. Like impact evaluations based on control or comparison groups, it can assess the counterfactual of an intervention. To date, however, ERDF programme evaluations do not fully exploit this approach, for two main reasons: programmes rarely articulate a clear intervention logic, and their evaluators misuse Theory Based approaches, either applying them without sufficient rigour or omitting to triangulate their findings with different evaluation techniques. This article will present the different approaches, current practices in ERDF programme evaluations, and how Theory Based evaluations could better contribute to improving policy making. Keywords: Theory based evaluation; Programme evaluation; Structural Funds; Cohesion Policy;


S2-05 Strand 2

Paper session

Probing the logic of evaluation logics


O 189

A fresh look at the Intervention Logic of Structural Funds


V. Gaffey 1
1 European Commission Directorate General Regional Policy, Evaluation, Brussels, Belgium

Thursday, 4 October, 2012, 14:00 – 15:30

In the 1990s, the European Commission initiated the MEANS programme of evaluation guidance for socio-economic programmes, primarily for those co-financed with the Structural Funds. This initiative came up with an intervention logic which has remained in place ever since.


The Directorate General for Regional Policy has recently been taking a fresh look at the logical framework. We have examined it drawing on experiences from three programming periods: from the perspective of an intensive ex post evaluation of the 2000–2006 programming period; from the perspective of reporting on the ongoing performance of current programmes; and from the perspective of designing a policy for 2014–2020 with a stronger result orientation. The conclusion of this work is that our intervention logic was never entirely clear. We cannot in practice distinguish between a short-term direct effect (result) and a longer-term, indirect effect (impact). We have never actually measured impacts defined like this. With the increasing focus on outcomes in the international literature and the developments concerning the evaluation of impact defined as the change that can credibly be attributed to an intervention, we realise that we need to clarify our intervention logic. This paper will outline the experiences of the Directorate General for Regional Policy and its proposals for a re-articulation of the logic of our interventions and the terminology we use in this regard. Keywords: Intervention logic; Structural Funds; Result; Impact;

O 190

Shared learning and participatory evaluation. The systematization approach for development programmes
E. Tapella 1
1 ReLAC (Latin America Evaluation Network) and National University of San Juan, Argentina, Systematizacion co-ordinator and professor of Social Planning and Evaluation, San Juan, Argentina

In the Latin American development field there is a wide range of insufficiently known or properly valued experiences. By applying linear cause-effect logic models, evaluation practice has mainly focused on measuring performance and success in its attempt to demonstrate accountability to external authorities. However, development interventions are multifaceted and complex systems, with different actors, interests and values, developed in turbulent scenarios where many factors apart from the project shape the outcomes. Those aspects have rarely been addressed by traditional evaluation practice. After many years of focusing on getting credible evaluation results based on rigorous methods, there is a need to capture and map complex system dynamics and interdependencies. For that, it is necessary to include in the evaluation agenda an emphasis on in-depth comprehension of processes and shared learning, in order to understand deeply, with all stakeholders (including funders), what gets developed and learned as a consequence of development interventions. Relatively new to the European context, the Systematization approach can certainly contribute to this aim. The adoption of the Systematization approach lies in the idea that experiences must be used to generate understanding, and that lessons learned can improve ongoing implementation and contribute to a wider body of knowledge. Learning from action does not happen by accident; it needs to be planned for in project design, in staff job requirements, in the cycle of meetings and reflections, and in the general project culture. Most development projects are not designed to be action learning processes. The challenge, therefore, is how to promote, design and conduct learning processes within organizations and project activities that have not been designed with this purpose in mind. This paper discusses the strengths, weaknesses and scope of the Systematization approach as it is being used in the Latin American context. The paper will briefly present the Systematization conceptual framework, highlighting the difference from other evaluation approaches and providing the core guiding principles to systematize development experiences. The methodology (six basic steps) will be explained by illustrating its use in a real case in Argentina. Although the word systematization usually refers to classifying and ordering data and information, as it is being understood in Latin America it is much more than classifying, ordering and documenting a case. It has to do with producing knowledge from practice. Considered as a tool that brings participatory research and evaluation into one methodological tool, systematization is one of the learning-oriented and formative evaluation perspectives. Implemented as a multi-stakeholder and participatory process, it allows development organizations not only to improve practice, but also to communicate and disseminate lessons learned. By analyzing one or more cross-cutting themes, such as leadership, participant empowerment, project management or community partnership, it also contributes to explaining the logic of the intervention process, the external and internal factors that influenced it, and why it had the results it did. Systematization is certainly an evaluation approach that contributes to the general body of knowledge in the development field. Keywords: Systematization approach; Shared learning; Multistakeholder evaluation; Formative evaluation; Systemic thinking in evaluation;


O 191

A systems critique of program logic


G. Smith 1
1 Numerical Advantage Pty Ltd, Canberra ACT, Australia


Using a systems theory approach, this paper presents some serious methodological problems in the use of program logic. Program logic has long been a cornerstone of evaluation, but its use has been problematical. The logic can be overly complex, not cover all key eventualities and, most significantly, fail to explain the program results: all elements of the program logic may be present, but the program still fails to deliver. Why is this so? One reason is that conventional program logic usually has a linear consideration of the chain from resources to activities to outputs and outcomes, and does not take into account the complexity of a networked set of stakeholders. Using systems concepts from complexity theory, chaos theory and complex adaptive systems, the paper points out that the linear deterministic network inherent in a typical logic diagram (formulations such as input to activity to output to outcome to impact) is not rich enough to deal with the program considered as a system. This requires consideration of context, feedback, outside influences and the possibility that the system is adaptive, i.e. the outcome is a goal to aim for, not just a mechanistic consequence that follows from applying the program inputs. However, all is not lost. Systems may be complex but they are often stable. This stability is often achieved through feedback, the very element that conventional program logic finds hard to deal with, providing control over system behaviour. But if an intervention is trying to facilitate change, stability is not necessarily desirable. Understanding these controls can be very illuminating in understanding what is going on in a complex system and why (sometimes) desirable change does not occur. The paper then concludes with some practical examples of how program logics can be reformulated using systems concepts. Keywords: Program logic; Systems; Complexity theory;
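To make the contrast concrete, a minimal illustrative sketch (not taken from the paper) is given below: a purely linear logic chain accumulates programme inputs mechanistically, while the same chain with a stabilising negative feedback loop absorbs much of the input, which is one way a program can fail to deliver even though every element of its logic is present.

# Toy illustration (hypothetical, not from the paper): a linear logic chain
# versus the same chain with a stabilising negative feedback loop.

def linear_chain(inputs, effect=1.0):
    """Each unit of input converts mechanistically into outcome change."""
    outcome = 0.0
    for x in inputs:
        outcome += effect * x
    return outcome

def chain_with_feedback(inputs, effect=1.0, feedback=0.5, baseline=0.0):
    """Negative feedback pulls the outcome back toward the system's baseline,
    so identical inputs yield a much smaller net change."""
    outcome = baseline
    for x in inputs:
        outcome += effect * x
        outcome -= feedback * (outcome - baseline)  # stabilising control loop
    return outcome

if __name__ == "__main__":
    inputs = [1.0] * 10  # ten periods of identical programme effort
    print(linear_chain(inputs))         # 10.0 - what the linear logic model predicts
    print(chain_with_feedback(inputs))  # ~1.0 - the feedback-controlled system resists change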



S2-44 Strand 2

Panel

A European Evaluation Theory Tree


O 192

A European Evaluation Theory Tree


N. Stame, H. Simons, P. Dahler-Larsen, B. Perrin

Thursday, 4 October, 2012, 14:00 – 15:30

The second edition (2012) of the book Evaluation Roots by Marvin Alkin and Christina Christie includes a chapter on "A European Evaluation Theory Tree" by Nicoletta Stame.


On the occasion of the first edition of the book, a panel was held on "Evaluation Roots in the USA and in Europe: Tracing traditions?" at the EES London Conference of 2006. It aroused interest in going back to the roots of European evaluation, and debate about the nature of European evaluation theory and its links with the institutional context. Answering a request by the authors to identify European evaluation theorists who had contributed originally to the field, Nicoletta has focused her chapter on illuminative, democratic and personalized evaluation (M. Parlett and D. Hamilton, B. MacDonald, S. Kushner), policy tools and evaluation (E. Vedung), dialogue in evaluation (O. Karlsson), realist evaluation (R. Pawson and N. Tilley), syntheses and the evidence-based movement (A. Oakley, M. Petticrew), and on the work of theory-weavers such as E. Stern and E. Monnier. In the panel that is proposed herewith, Nicoletta Stame will act as coordinator and provide a brief presentation of her chapter. The panelists are invited to comment on that chapter by discussing the state and prospects of European evaluation. Possible topics for debate are criteria for inclusion, the relevance of the theories examined, their legacy, omissions, and directions for future research. This will reveal diversity in European countries' research and policy traditions, and insight into their potentialities; moreover, it will provide an assessment of the European evaluation contribution to the international evaluation community. The panel is composed of evaluators who have dealt with the topic from different perspectives. Nicoletta Stame, past President of EES, is author of the chapter to be discussed. Helen Simons, a past President of UKES, was one of the initial evaluation group at the University of East Anglia, which elaborated and explored democratic evaluation. She has used this approach throughout her evaluation practice and has recently revisited that experience. Peter Dahler-Larsen, past President of the EES, has contributed to several international handbooks on evaluation. His most recent book is The Evaluation Society (Stanford University Press, 2012). Burt Perrin, past Secretary General of the EES and past Vice President of the International Organisation for Cooperation in Evaluation, is an independent consultant, providing guidance and quality assurance about evaluation methodology to international organisations, governments, and NGOs worldwide.


S1-17 Strand 1

Paper session

Evaluation networks and knowledge sharing I


O 193

A networked approach to building knowledge about evaluation by sharing information about evaluation methods and strategies
P. Rogers 1, S. Hearn 2, C. Sette 3, K. Stevens 4
1 Royal Melbourne Institute of Technology, CIRCLE, Melbourne, Australia
2 Overseas Development Institute, RAPID, London, United Kingdom
3 Bioversity International, Institutional Learning and Change (ILAC) Initiative, Rome, Italy
4 RMIT University, Centre for Applied Social Research, Melbourne, Australia

Thursday, 4 October, 2012, 14:00 – 15:30

This paper demonstrates how formal and informal networks of evaluation practitioners and commissioners can build knowledge about effective evaluation practice. It focuses on the challenge of attributing changes in outcomes to particular interventions, one of the biggest challenges in designing impact evaluations. Assessing whether a particular evaluation approach will provide credible evidence of attribution is not always straightforward. Some approaches, such as Randomised Control Trials, have been preferred by many organizations. But, as the EES statement makes clear, it is not always appropriate to conduct an RCT given the nature of the intervention and the evaluation. Evaluation practitioners and commissioners need support to choose appropriate alternative methods and strategies where necessary and to implement them well. This paper demonstrates a platform for sharing information about evaluation methods, drawing on published and unpublished examples and guidance and on ongoing discussion and revision. This paper is an output of the BetterEvaluation initiative, which seeks to improve evaluation practice and theory by sharing information on methods through an online platform. It is being developed by an international network of founding partners (the Royal Melbourne Institute of Technology, the Institutional Learning and Change initiative of the Consultative Group on International Agricultural Research, the Overseas Development Institute and Pact), with assistance from an increasing number of partners. Initial development has been supported by funding from the Rockefeller Foundation, IFAD (International Fund for Agricultural Development) and Pact. The project shows the value of Web 2.0 technologies to rapidly share examples, document and store discussions, and identify priority areas for future research. Keywords: Evaluation methods; Attribution; Evaluation design; Mixed methods; Causal attribution;

O 194

Evaluation of complex Glocalized multi-level projects


R. Sever 1
1 Hebrew University, Jerusalem, Israel

Globalization seems to be reducing diversity in transnational interventions through its homogenizing effect on conceptions and programs. But glocalization, where decision-making is both pulled upwards to transnational networks and downwards to regional and local networks, is yielding complex multi-level projects, due to its simultaneous promotion of what is, in one sense, a standardized product, for particular markets, in particular flavors (Robertson, 1997). At the same time we have been witnessing a move away from project-oriented interventions to an increasing emphasis on sector-wide approaches, on partnership and on ownership (Conlin & Stirrat, 2008). Such interventions are characterized by high levels of internal diversity: many implementers (public, private, third sector), different beneficiaries, different activities of the same beneficiaries, brought together in partnerships, etc. When these holistic interventions are implemented within one level of hierarchy, they contain horizontal diversity. When such integrated programmes are implemented within a system of multi-level governance, vertical diversity exists. What matters in horizontal diversity is synergy, while subsidiarity is what matters in vertical diversity. When attempting to address multi-level interventions, evaluators face challenges such as: How can changes that occurred at the global level be attributed to the effectiveness of any programme, if there were many different programmes, and if programmes were integrated? How can outcomes be attributed to partners at a given level, if many actors were involved in an action? (Stame, 2004). So, many of the interventions are complicated due to vertical and/or horizontal diversity; they are often also complex because of high levels of fluidity and uncertainty caused by recursive causality, disproportionate relationships and emergent outcomes. One of the most challenging aspects of complex interventions for evaluators is the notion of emergence, meaning that the specific outcomes, and the means to achieve them, emerge during implementation of an intervention (Rogers, 2008). Under these circumstances, we find in recent evaluation literature increased interest in: 1) the importance of projects' contexts (Blamey & Mackenzie, 2007) and of requirements for communication and dialogue between evaluators and stakeholders, due to the high degree of uncertainty and ambiguity in complex interventions (Abma & Widdershoven, 2008); 2) the demand to move from linear models and positivistic evaluations to qualitative evaluations and non-linear models (Barnes et al., 2003), to evaluate and discover configurations, and to compare similar projects that are implemented in different contexts (Pederson & Rieper, 2008); 3) the role of piloting in evaluation (Swanwick, 2007); 4) cross-disciplinarization (Jacob, 2008); 5) the proper use of programme theory evaluation to identify the intervention's particular elements of complication or complexity (Rogers, 2008).


The paper will discuss these challenges, and present several creative notions and attempts to cope with them, such as: Dual Level evaluations (Allen & Black, 2006); Realistic Evaluation (Pawson, 2006); Emergent Evaluation (Mercado-Martinez et al., 2008); evolving mapping-sentences (Guttman, 1959; Sever, 2007b); multi-level MSC monitoring (Davies & Dart, 2005); LCE – loosely coupled evaluation (Sever, 2007a); Good Future Dialogue (Arnkil et al., 2002); the notion of grades of evidence (Chatterji, 2007); etc. Keywords: Glocalized projects; Complex interventions; Evaluation challenges; Creative evaluation;

O 195

The use of mobile technology (SMS data + video) in Real Time Evaluation
H. Williams 1
1 World Vision UK, Policy and Programmes, Milton Keynes, United Kingdom

Thursday, 4 October, 2012, 14:00 – 15:30

Real Time Evaluation (RTE) – the use of mobile technology (SMS survey data + video) for greater speed of information and comprehensive inclusion of evaluation participants

One feature of the networked society is the rapid dissemination of mobile phone technology in developing contexts. This opens up new opportunities for development actors to better understand the impact of their interventions, in particular by increasing the range of informants and the speed with which feedback can be collected. World Vision has trialled three key technologies (Ushahidi, FrontlineSMS and FieldTask) as part of a Real Time Evaluation of its Horn of Africa Drought Response operation. The aim was to test the process and technology of "noise" data collection (phone and Skype conversations, Twitter and text messages), and gain an understanding of staff and community reactions to this data. The paper explores the results of trialling the following. Community noise data collection: community members were interviewed on video using smartphones, with the assistance of a translator where necessary (into Amharic). Interviews were uploaded from the phone to the Smap database using the wifi connection at the office and translated into English subtitles by WV staff. Staff noise data collection: 60 staff in Ethiopia, Tanzania and Kenya were sent a survey by text message using FrontlineSMS. The same staff were also sent a Survey Monkey questionnaire by email. 35 staff responded to the text survey, four to the email survey. Staff appear to prefer (or are only able) to send information by text message. Aggregating and analysing staff and community noise: responses from the text survey were aggregated by FrontlineSMS and made downloadable in the form of a spreadsheet for qualitative analysis by the evaluation staff. The messages were also sent from FrontlineSMS to the Ushahidi portal. Presenting data from all three technologies on one user interface: the videos and text messages were turned into reports using the Ushahidi portal. This involved geo-tagging the origin of the messages or videos and validating them. Once the reports were approved, they were visible on the portal either on the interactive map or as a list of reports. The result is available to view at this website: http://hoa.smap.com.au/ushahidi/ This trial demonstrated the desire and ability of staff to collect and send information about the emergency via phones from the affected community, in a reliable and fast manner. It also highlights the potential to link these technologies. The paper concludes by suggesting that further development is required to automate integration between FieldTask and Ushahidi and to scale up the platform. It also explores how this technology could be used for a wider set of evaluation purposes. Keywords: Mobile Technology; Speed; Inclusion; Real Time Evaluation; Video;
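For readers curious about the mechanics, the sketch below shows, in simplified form, the kind of pipeline described above: SMS survey replies exported as a spreadsheet are parsed and turned into geo-tagged report records awaiting validation. It is illustrative only, using invented data and plain Python rather than the actual FrontlineSMS, Smap or Ushahidi interfaces.

# Hypothetical sketch of the pipeline: SMS replies exported as a spreadsheet
# (here, CSV text) are parsed into geo-tagged "report" records that a human
# would validate before they appear on a map-based portal.
import csv
import io

SMS_EXPORT = """sender,location,message
+255700000001,Arusha,Water point repaired; queues shorter this week
+254700000002,Turkana,Food distribution delayed by two days
+251900000003,Addis Ababa,Registration of new arrivals completed
"""

# Assumed gazetteer mapping reported locations to approximate coordinates.
GAZETTEER = {
    "Arusha": (-3.39, 36.68),
    "Turkana": (3.12, 35.60),
    "Addis Ababa": (9.03, 38.74),
}

def sms_to_reports(export_text):
    """Turn each SMS row into a geo-tagged report awaiting validation."""
    reports = []
    for row in csv.DictReader(io.StringIO(export_text)):
        lat, lon = GAZETTEER.get(row["location"], (None, None))
        reports.append({
            "source": row["sender"],
            "text": row["message"],
            "lat": lat,
            "lon": lon,
            "approved": False,  # a human validates before publication
        })
    return reports

for report in sms_to_reports(SMS_EXPORT):
    print(report)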

O 196

Evaluation of EU ICT Innovation Policy with Emphasis on Cloud Computing Programmes


A. L. P. Cheng 1, W. Y. C. Wu 1
1 Chung-Hua Institution for Economic Research, Taipei, Taiwan

Information and communication technology policy has been widely recognized as core to science and technology development in many advanced countries. The importance of ICTs lies in their widespread applications and useful penetration into different industries. ICTs stimulate uses of production capacity and service innovations during the process of technological diffusion and fusion. The recent development of cloud computing is characterized by the fusion of high-speed network technology, server hardware with open interfaces, open software and open Web 2.0 standards. Promoting social cohesion and social inclusion has been an important policy goal of the major EU ICT Framework Programmes. The development of cloud computing industries exhibits an important evolutionary path of platform networks for market, technology and knowledge exchanges, given the more intangible functions of monetary and fiscal policy in the Member States. To maximise the economic and social potential of ICT, the need to develop a cloud computing strategy is highlighted in the Digital Agenda for Europe and in the national innovation strategies of the USA and China. Introducing cloud computing technology helps firms to avoid large costs for ICT infrastructure, management and storage, as well as the time costs of accessing data and applications. Firms pay the usage charge but enjoy greater scalability and flexibility. These benefits reduce the barriers to starting up a business, especially for SMEs. The impact of cloud computing on the creation of business has been shown in the literature. Etro (2009), using Eurostat data, estimated a scenario with the introduction of cloud computing and suggests that the creation of new SMEs will increase over time. Besides, the Centre for Economics and Business Research (2010) predicts that over the period 2010 to 2015 the adoption of cloud computing has the potential to create about 2.3 million new jobs and generate over 763 billion euros of cumulative economic benefit for the largest five European economies (France, Germany, Italy, Spain and the UK). As shown in the cloud computing literature, in order for current cloud computing platform development to be workable and to enable an expansion of the common wealth, the EU needs to build up solid governance collaboration agreements, standards for technological applications, and compatible information regulations. This paper highlights and evaluates the dynamism of cloud computing based on the

foresight knowledge and ICT programmes articulated under the EU FPs, especially FP7. The essence lies in the fact that the resulting effects of FP research and development may contribute to making the cloud platform more accessible to all SMEs and potential users, creating a better environment and good opportunities for venture activities. This paper begins with an analysis of the ICT innovation policy path. This is followed by identifying the conditions and processes for reaching the goals of social cohesion and inclusion through the relevant Framework Programmes, with the resulting implications. Overall, the paper sets out to examine the achievement of ICT innovation policy in terms of the performance of the FPs.


Keywords: Innovation Policy Evaluation; Cloud Computing; Social Inclusiveness; ICT Penetration; Framework Programmes;


S5-28 Strand 5

Panel

The Role of Philanthropic Foundations in Development Evaluation


O 197

The Role of Philanthropic Foundations in Development Evaluation


N. MacPherson 1, R. Singh 2, J. Nelson 3, S. Mistry 4
1 The Rockefeller Foundation, Evaluation, New York, USA
2 Open Society Foundation (OSF), Evaluation, New York, USA
3 Bill and Melinda Gates Foundation, Strategy Measurement and Evaluation, Seattle, Washington, USA
4 Big Lottery Fund UK, Research and Learning, London, United Kingdom

Thursday, 4 October, 2012, 14:00 – 15:30

Private giving is changing the development landscape. In the wake of the 2008 economic crisis, official aid under the umbrella of the Development Assistance Committee is threatened by unprecedented budget austerity. By contrast, international philanthropy for development is growing. In particular, private foundations have taken the lead in forging international coalitions focused on delivering global public goods and tackling "problems without passports". Some of them are benefiting from their domestic activities geared to the increasingly intense social problems faced by advanced market economies. In their international work, the private foundations emphasize networking, innovation and local capacity building. But how effective are they in achieving results? What are the forces shaping their evaluation practices? Are they taking advantage of official aid donors' hard-won lessons of experience? Do they have new, innovative ways of avoiding the pitfalls of the past? What challenges do they face in connecting with the poor and the vulnerable? Do they use metrics that reflect up-to-date conceptions of human well-being? How are they tracking their development impact? Are their evaluations focused on organizational accountability as well as learning? Are their evaluation processes independent? A panel chaired by Nancy MacPherson, Managing Director, Evaluation (Rockefeller Foundation), with Jodi Nelson, Director, Strategy, Measurement and Evaluation (Gates Foundation), Ramesh Singh, Director of Learning, Monitoring and Evaluation, Open Society Foundation (OSF), and Sarah Mistry, Head of Research and Learning, Big Lottery Fund (UK), will exchange views on these questions, grounded in their first-hand experience. They will use a round-table format so as to fully engage the audience. Keywords: Philanthropy; Development Evaluation; Foundations; Innovation;


S5-08 Strand 5

Paper session

Evaluation in a European context I


O 198

Comparing diversity: comparative study on project selection procedures in six EU Member States
A. Btel 1, J. Kser-Erdtracht 1, C. Rbke 1
1 Ramboll Management Consulting, Economic Policies, Hamburg, Germany

Thursday, 4 October, 2012, 14:00 – 15:30

EU Structural Funds programmes are implemented in a multi-level governance system which is characterized by the principle of shared management between the Member States and the EU. Consequently the selection procedures of the ca. 350 programmes implementing assistance from the European Regional Development Fund vary largely from region to region and among the 27 Member States. Furthermore, procedures may vary according to policy fields, types and size of projects, etc. On behalf of the Directorate General for Regional Policy, Ramboll carried out a study in 2011 assessing the effectiveness and efficiency of project selection procedures for three policy themes across 14 programmes in six Member States. The challenge was twofold: a) finding a way to compare the incomparable, i.e. the diversity of design and management of project selection procedures; b) choosing an analytical approach covering this specific part of a programme implementation process. In the end, elements of an implementation process analysis (Fixsen et al., Implementation Research, 2005) formed the core of the analytical approach. It focused on the interaction between policy (programme) makers, the administrative bodies and the applicants (e.g. enterprises, research institutions, municipalities) in the project selection process. The implementation analysis was carried out at the level of calls for proposals implementing the policy themes. We identified a total of 96 relevant calls. This sample was narrowed down to 36 calls, but still varied considerably in terms of volume and type (open or temporary). To obtain a comparative perspective, Ramboll standardised the diverse selection procedures identified and constructed a generic model of the procedure. This generic model structured the analysis from data collection to drawing up the conclusions and recommendations. The full picture of diversity (beyond studying programme documents etc.) was captured by involving the views of applicants (an online survey and telephone interviews, also collecting data on administrative burdens) and of administrative bodies (workshops validating the call-specific process analysis, collecting data on administrative costs and recommendations to improve the procedure). The findings were drawn together in a multi-criteria analysis using criteria relating to basic governance issues, e.g. indicators reflecting the degree and quality of information and guidance provided to applicants, transparency of decision making, complexity of processes, effectiveness and efficiency. The results were reflected against the specific characteristics of the calls and revealed good, transferable practices. The overall recommendations of the study are feeding into the exchange between the EU Commission and the Member States on improving the transparency, effectiveness and efficiency of project selection and implementation procedures. In practical terms it was essential to involve a multi-lingual team which, on the one hand, is familiar with the diverse procedures applied in the different countries and, on the other hand, has a common understanding of the subject. Project management and communication within the team are crucial. The study is published at http://ec.europa.eu/regional_policy/information/studies/index_en.cfm#1
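As an illustration of the kind of multi-criteria aggregation the study describes, the sketch below scores two hypothetical calls against weighted governance criteria; the criteria names, weights and scores are invented for this example and are not the ones used in the study.

# Minimal sketch of a weighted multi-criteria scoring of calls for proposals.
# Criteria, weights and scores are invented for illustration only.
WEIGHTS = {
    "guidance_to_applicants": 0.25,
    "transparency_of_decisions": 0.25,
    "process_complexity": 0.20,   # scored so that higher = simpler process
    "effectiveness": 0.15,
    "efficiency": 0.15,
}

def weighted_score(scores, weights=WEIGHTS):
    """Aggregate per-criterion scores (0-5) into one comparable index."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

calls = {
    "Call A (open)":      {"guidance_to_applicants": 4, "transparency_of_decisions": 3,
                           "process_complexity": 2, "effectiveness": 4, "efficiency": 3},
    "Call B (temporary)": {"guidance_to_applicants": 3, "transparency_of_decisions": 4,
                           "process_complexity": 4, "effectiveness": 3, "efficiency": 4},
}

for name, scores in sorted(calls.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")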

O 199

Communicating about the EU remains a challenge. Can an increased focus on ex-ante and on-going evaluation improve chances of success?
M. Kitchener 1
1 Coffey International Development, The Evaluation Partnership, London, United Kingdom

Introduction: Communicating about the EU remains a significant challenge. There is a history of unfavourable environments, comprising sometimes hostile but mainly uninterested audiences, fuelled by the dissemination of misinformation. Despite the difficulties, the need to communicate remains a European Commission responsibility. Evaluations of the EC's communication process frequently highlight a significant gap between campaign aspirations and campaign outcomes. Evidence from a review of evaluations carried out by the author over the last 5 years suggests that greater involvement of evaluators from the outset may significantly enhance EC campaigns. Objectives: This paper has three objectives: 1. To clarify some of the reasons why EC communication campaigns are continuing to fall short of their objectives; 2. To show how ex-ante evaluation can strengthen communication campaigns by defining: a. Achievable and measurable objectives; b. Better segmentation of audiences; c. Greater understanding of communication channels. 3. To discuss how evaluators can increase understanding of communication outputs and outcomes during and after campaigns. Approach: The author will discuss:


Key reasons why some campaigns fail to meet expectations, including: misunderstandings about the link between policy and communication processes, limited recognition of the power of ex-ante evaluation by both EC clients and their agencies, and the tendency for communication strategy to be based on insufficient and/or untimely evidence. How involving evaluators during the planning stage of communication campaigns can help to improve their effectiveness, for example by providing evidence to support objective setting: environmental, needs and problem analysis can help to articulate the internal and external context and the campaign zero point; intervention logic models can articulate a more realistic picture of how the communication actions may support policy; and target audience research can identify familiarity and favourability, perceptions and expectations, and communication habits and favoured channels, to better prioritise and segment target audiences. Increasing effectiveness and understanding of outputs and outcomes during and after the campaign: social media provide opportunities for two-way exchange with target audiences. Using evaluation data-gathering tools to interact with target groups can enhance campaigns by involving audiences in their on-going development, and providing insights to increase the depth of understanding of outcomes and help in the measurement of results and impact.

Further discussion: Integrating research and evaluation in the planning and on-going feedback aspects of campaigns can significantly enhance what can be achieved. Involvement in the development of communication campaigns increases the potential for greater innovation in the application of standard evaluation tools. To maximise impact, evaluation findings, conclusions and recommendations need to be articulated according to communication norms. Systematic use of ex-ante, on-going and ex post evaluation throughout the communication process holds the key to greater accountability for spending on EU communication. This proposed abstract for a discussion paper was prepared by Melanie Kitchener. Melanie is a professional evaluator with over 10 years' experience in the evaluation of information and communication campaigns. Melanie holds a bachelor's degree in French and German, Postgraduate and UK Chartered Institute Diplomas in Marketing, and an MBA from Henley Business School. Keywords: EU Communication; Effectiveness; Research; Evaluation; Accountability;

O 200

Evaluation as a tool of change in education. The case of Poland


H. Mizerek 1
1 University of Warmia and Mazury in Olsztyn, Department of Social Sciences, Olsztyn, Poland

In 2009 Poland introduced a new system of school inspection. Evaluation, which was designed within it, is a fundamental instrument for the quality assurance of schools and other educational institutions. The introduction of evaluation to the school inspection system has raised many hopes and expectations for a radical improvement in the quality of the national education system and of individual institutions. At the same time, introducing the new system has created a series of challenges. The purpose of this presentation is to indicate some of the issues that have arisen during the course of implementing the new system. They refer to the following: the tension between bureaucratic evaluation and democratic evaluation; the possibilities of transition from accountability evaluation to knowledge evaluation as well as to developmental models of educational evaluation; questions regarding the relationship between external evaluation and self-evaluation; and the conditions under which evaluation provides material for reflection and debate as well as encourages development-oriented action and decision making. Keywords: Evaluation in education; Quality; Evaluation models; School inspection; Policy evaluation;


S1-22 Strand 1

Panel

Sharing information in the networked society – options and requirements for the publication of (evaluation) results
O 201

Thursday, 4 October, 2012, 14:00 – 15:30

Sharing information in the networked society – options and requirements for the publication of (evaluation) results
B. Befani, B. Neuhaus 1, G. Lisack 2, K. Lth 1, H. Milet 3
1 evalux, Berlin, Germany
2 Université Paris-Sorbonne (Paris IV), Paris, France
3 Ville de Grenoble, Mission Evaluation des Politiques Publiques, Grenoble, France

The handling of information (especially of results and recommendations) generated in the course of evaluations is often a subject of discussion. In today's medialised societies the dissemination of evaluation results faces new opportunities and challenges: with the internet and social networks it has never been easier to get in contact with more people, but at the same time more information competes for attention than ever before. Evaluators and stakeholders have to consider and decide wisely how they can benefit from these new possibilities and what they have to do in order to avoid unnecessary conflicts and mistrust. Who is allowed to, and who will in the end, gain insight into this information, either fully or partially? Which (other) groups will be interested in the results? And who will work with the results and implement recommendations? Especially in multi-stakeholder settings, these and further questions have to be clarified with care and thought. Often a large number of stakeholders have to be considered in the evaluation design, and in many cases it is not possible to communicate with all actors. The presenters will first introduce some fundamental considerations on the major practical, methodological, political and ethical requirements to be fulfilled regarding the publication of information and results throughout evaluation processes. In doing so, they will refer to several sources, i.e. national and international standards and codes of different disciplines, different evaluation approaches, research on evaluation practice, and their own evaluation experiences. Secondly, based on the above-mentioned considerations, the presenters will introduce a set of guiding principles for the publication of (evaluation) results in different evaluation settings. The presentation (15–20 minutes) will provide the framework for two rejoinders (one client and one researcher with different cultural backgrounds will be involved early in the preparations) and the succeeding discussion. The objectives of the roundtable session are to: share and contrast particularities of different fields/sectors and (evaluation) approaches; integrate different individual experiences and observations in the discussion; sensitize the participants to the (possible) roles and respective responsibilities of evaluators; identify and contrast cultural characteristics and differences; inspire and enable the participants to make well-founded future decisions; encourage and widen the publication of evaluation results; show the technical possibilities and frontiers; and, in consequence, contribute to the development of (good) evaluation practice. Keywords: Information management; Transparency; Dissemination of evaluation results; Professional ethics; Protection of data;


S3-05 Strand 3

Paper session

Equity, Empowerment and Ethics


S3-05
O 202

Use of equity-based evaluations as a means to reach the worst-off populations in development programmes: the UNICEF Romania case
M. Magheru 1, M. S. Stanculescu 2
1 UNICEF, Programme, Bucharest, Romania
2 CERME, Executive Director, Bucharest, Romania

Thursday, 4 October, 2012

14:00 – 15:30

In 2010 UNICEF embarked on a global initiative to adapt its policies, programmes and projects so that they are more equity-based. This is not a new approach for UNICEF but rather a refocusing of all interventions to ensure they effectively reach the children most in need. It has an impact on i) knowledge of who and where the worst-off groups are, ii) the identification of appropriate monitoring and evaluation (M&E) mechanisms, and iii) ensuring these M&E mechanisms are compliant with, and responsive to, mixed and complex needs. The equity-based evaluation and lessons learnt presented in this paper refer to one of the projects implemented by UNICEF Romania during 2011–2012 and evaluated at mid-term with the purpose of informing two strategic levels of decision: making effective use of the findings on the project's relevance, effectiveness and efficiency during the first year in order to reshape the project during the second year of implementation; and making effective use of the knowledge generated by the on-the-ground experience in order to reshape the equity focus of social policy. This second dimension relies on the main findings concerning the project's potential impact and sustainability. The paper also tackles a third level of decision, forged on the idea that, beyond the project's results, the evaluation brings consistent information on how the M&E processes associated with the project may strengthen the equity focus of social policy. This is presented from the perspectives of both the commissioner of the evaluation (UNICEF) and the evaluator (CERME – see biography), providing valuable insights and lessons learnt by each of them. In terms of methodology, the equity-based evaluation relies on a complex design of methods and tools implemented and used from the inception phase of the project. It also uses a genuinely wide range of quantitative and qualitative data and information, which gave a robust foundation for the methodological approach during the stages of the equity-based evaluation. In addition, it took a 360-degree look at the project's stakeholders from both horizontal and vertical perspectives. As a result, the equity-based evaluation has generated confidence to use findings and knowledge to reinforce the equity focus of the intervention. The project design has evolved to take the equity dimension further into consideration, based on the evidence generated by the evaluation on the worst-off groups. The stake of this paper can be summarised as follows: to show how the findings of an equity-focused evaluation strengthened project design to address more specifically the needs of the worst-off groups. These findings are also relevant for the evaluation community as a whole, promoting the use of equity-based evaluation in any development programme. Manuela Sofia Stanculescu is a scientific senior researcher at the Research Institute for the Quality of Life of the Romanian Academy. She is a Sociology lecturer at the University of Bucharest and Executive Director of the Romanian Centre for Economic Modelling. Mihai Magheru is Programme Officer at UNICEF Romania and has 12 years of professional experience in social protection. He holds a Master of Research in Sociology of Education from V. Segalen University of Bordeaux (valedictorian, 2004). Keywords: Equity; Worst-off groups; Human and children's rights;

O 203

EVALUATOR theory in evaluation ethics


G. Tharanga 1
1

United Nations Population Fund, Monitoring and Evaluation, Colombo 07, Sri Lanka

Worldwide, there is a growing trend towards professionalisation in evaluation. To be professional implies an allegiance to, and a performance of duties in compliance with, stated norms and ethics. Competencies, combined with ethics, norms and standards, provide the basis for professional credentials. Norms and standards for evaluation have been developed by evaluation associations. In an evaluation, evaluators have many tasks, including planning, organising and designing evaluations and collecting, analysing and presenting data. They also have to deal with internal and external pressure. They may be asked to make changes to the plan, organisation or reporting of the evaluation to meet the needs of others. Sometimes proposed modifications are welcome; at other times they may raise ethical or political considerations. Ethics and politics are issues for all evaluators (Linda and Ray 2009). When ethical issues arise, programme staff and stakeholders need to acknowledge them and discuss them with interested parties to reach a resolution. Programme managers and M&E specialists should develop a strong working relationship with project staff to discuss M&E ethical issues openly and honestly. In some instances, it may be appropriate to involve community members in resolving ethical challenges; local residents can often provide valuable insights into devising a culturally appropriate solution. I hope to present a series of ethical values in an abstract manner to guide M&E professionals. Although many guidelines and strategies are available for explaining evaluation values, there is still a need for an abstract way of presenting them to evaluation professionals and to those interested in commissioning or managing evaluations. These EVALUATOR values are intended to stimulate discussion among M&E professionals and can actively guide M&E design and implementation, not just support problem-solving efforts. Furthermore, this

presentation is intended to promote better practice in evaluations and seeks to inform both those who commission evaluation research and those who carry it out. The EVALUATOR values are: Evidence, Validity, Accuracy, Learn, Unbias, Accountability, Omissions and wrongdoing, Rights.


Definitions extracted from the Oxford Dictionary, Oxford, UK

Thursday, 4 October, 2012

14:00 – 15:30

Keywords: Evaluation ethics;

O 204

Conducting multisite community empowerment program evaluation: challenges and lessons from an experience in Quebec
M. Alain 1, S. Hamel 1
1

Université du Québec à Trois-Rivières, Department of Psychoeducation, Québec, Canada

Collaborative/empowerment program evaluation is still mostly carried out in single ventures, where one evaluation is tailored to the specificities of one program in one community. Multisite evaluations, on the other hand, are mostly restricted to the more traditional aspects of program evaluation, such as accountability, accuracy and measurable change. As a result, the stakeholders interested in the evaluation process are essentially limited to the formal agencies responsible for managing and funding the programs implemented in different communities, while these same communities too often remain unaware of the results produced by the evaluation. Conducting multisite evaluation on the basis of community empowerment remains a rarely attempted endeavour. It requires the evaluation team to respond to considerable methodological challenges, such as ensuring tailor-made responses to local stakeholders' questions while maintaining general standards throughout the process. This dual perspective, ensuring micro-level responses while guaranteeing a macro-sociological inquiry, has been the basic objective of a multisite evaluation research project conducted in Quebec (Canada) among 16 communities over two and a half years. These communities, funded by Quebec's Ministry of Public Safety, had to create collective prevention networks against youth sexual exploitation. The evaluation research not only produced results tailored to each participating community, it also produced a creative virtual network through which communities can learn from one another. The funding agency, in this case Quebec's Ministry of Public Safety, was in turn able to witness throughout the project how diverse the responses to preventing youth sexual exploitation proposed by the funded communities were. The Ministry has also been able to establish different levels of, and reasons for, success, counting on the constant monitoring assured by the evaluation team in full respect of the participating communities and guaranteeing their confidentiality. While the first steps of the two-year evaluation were essentially driven by content analysis of the project documents proposed by the participating communities, as well as open interviews with local stakeholders, the evaluation gradually adopted a more traditional quantitative approach in order to monitor progress and first results. One of the key challenges posed by such a methodological approach has been to maintain, and in some instances even strengthen, the cooperative climate created between local stakeholders and the evaluation team while accumulating data that give the funding agency a global perspective on how the initiative was flourishing across the whole territory. By the end of the first two years of the project, more than 15,000 youth potentially at risk had been reached, and more than two thirds of the financed initiatives have been able to maintain their efforts even though the initial funding has ceased. This unique empowerment venture will be described to participants, from a methodological perspective as well as through the lessons learned from it.


S2-14 Strand 2

Paper session

Evidence based policy and programs


S2-14
O 205

Strengthening the use of theories of change approaches in international development


I. Vogel 1
1

Independent Consultant, Eastbourne, United Kingdom

Thursday, 4 October, 2012

14:00 – 15:30

Rationale: The last five years have seen a significant increase in popularity of Theory of change (ToC) as an approach. It is being widely adopted amongst international development bilateral and multilateral donor agencies, governmental, non-governmental and civil society organisations around the world. ToC is viewed as offering significant potential to strengthen development impact through improving design, implementation and evaluation of development programmes. Different aspects of the approach are emphasised in different settings. Many people perceive that the principal value lies in the way the ToC process encourages open, critical thinking about context, complexity and change; the unpicking of assumptions about cause, effect and pathways of change; and the on-going monitoring, learning and evaluation that the ToC framework can support. Given the rapid take-up of ToC by the international development community, questions being asked across the sector include: Amongst the range of practice, what consensus is emerging on what makes a good quality theory of change? What does ToC as an approach bring beyond the established logical framework? How can the deeper analysis of change processes and causal pathways that ToC encourages be embedded more systematically, but without being seen as another management fad and additional layer of donor compliance? Objectives sought: In response to the questions outlined, this paper discusses the findings of a DFID-commissioned a review on how theory of change is being used in international development. The objectives sought from the conference are to further inform the use of theory of change as an approach through a positive contribution to debates in the evaluation community. Narrative: The review was carried out in early 2012. The objectives of the review are to contribute to enhanced clarity and consistency in the use of ToC approaches by learning about areas of debate, consensus and innovation arising from their current use in international development. Encompassing a wide range of international development agencies, organisations and practitioners, as well as the debates in the literature, the review attempts to shed light on the questions of quality, additional benefits and embedding of the ToC approach. Justification: The findings of the review are of interest and benefit to the international evaluation community who are currently using ToC, and those who may do so in the future to strengthen the design, implementation, monitoring and especially the evaluation of their interventions. This includes a wide range of organisations bilateral and multilateral donors, NGOs and consultants. Strengthening understanding about how to work effectively with ToC approaches can contribute to better design of interventions, more robust evaluation efforts and clearer learning about how to support improvements in the lives of people living in developing countries. Keywords: Theory of change; Programme logic; International development;

O 206

Evaluating policy influencing strategies and success using a theory of change approach
Z. Ofir 1
1

International Evaluation Advisor, Geneva, Switzerland

When, why and how research and other forms of evidence wield influence has been considered in terms of various models that span a spectrum. At the one end are those that regard decision-makers as competent and open-minded rational actors seeking systematic evidence, considering various options and trusting those providing the evidence. At the other end is an acknowledgment that they face significant constraints and more often than not muddle through. Here, scientific information is only one element of a broad, open-ended, mistake-making social or iterative process, both cognitive and political, with policy choices made in a largely unpredictable manner when streams of information and possible solutions infuse an issue domain. Special interests, manipulation and processes based on power asymmetries aimed at gaining advantage often prevail. Recent extensive studies by ODI, IDRC and others reinforce the fact that policy or strategy influencing is complex, context-sensitive and largely unpredictable. This poses particular challenges to evaluation, as well as to researchers, evaluators and others who wish to increase the chance that their work is used, thus enhancing the potential for influence. Critical questions persist. How can influencing processes be evaluated in a useful, convincing manner? Can evaluation methodologies be used to better identify a set of characteristics or potential success factors that can (routinely) be used to help improve the design and execution of evidence-based interventions? Or to highlight possible tipping points for influence? The paper will discuss one evaluation approach that was used to try to answer these questions. This IDRC funded evaluation was based on four case studies of the policy influencing efforts by a regional research think-tank working in 11 countries in Asia. The methodology

combined the retrospective development of a generic theory of change for the influencing work of the think-tank, with process tracing, contribution analysis and multi-faceted triangulation in order to test whether the theory held true across the cases. Analyses focused on identifying specific causal pathways where the logic either held up or broke down, the reasons, and the implications for the identification of success factors for influence. In the process, current knowledge and models in this field in the literature were also used to support and illuminate findings.


A South African with a PhD in Chemistry, Zenda Ofir is an international evaluator working across Africa and Asia. A former AfrEA President, IOCE Vice-President, and AEA Board and NONIE Steering Committee member, she conducts evaluations, facilitates the development of useful M&E systems, and provides evaluation advice to international organisations such as GAVI, the CGIAR, several UN agencies and the Rockefeller Foundation. Keywords: Policy influence; Evaluating policy influencing; Process tracing; Contribution analysis; Theory of change testing;

O 207

Thursday, 4 October, 2012

14:00 – 15:30

Meta-evidence – an experimental study


T. Widmer 1, C. Stadter 1, K. Frey 1
1

University of Zurich, Zurich, Switzerland

Background: For some time, concepts of evidence-based decision making have been spreading from medicine to other fields, including areas concerned with the provision of social services such as social work or education. In the meantime there has also been increased use of these concepts in relation to public policy, and evidence-based policy making has become an increasingly common notion. Objectives: This study has the goal of testing the causal assumptions underlying the evidence-based policy-making approach. It examines what the effects of evidence are on knowledge and attitudes and whether these effects vary across different policies. Methods: The experimental study has a repeated-measures crossover design with two separate interventions using randomised assignment. The data from 364 observations are analysed with ordered logit regressions. Results: Clear-cut effects on knowledge can be demonstrated for both interventions. However, no significant effect on attitudes was observed in either case. Conclusion/Application to practice: The study shows that the transfer of concepts of evidence-based decision making to politics is additionally problematic because, in contrast to professional practices, political decisions are based on values. Keywords: Evidence-based policy-making; Experiment; Evaluation utilization;
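As a purely illustrative aside (not part of the study above), the following minimal Python sketch shows how an ordered logit regression of the kind mentioned – an ordinal knowledge score regressed on a randomised intervention indicator – can be estimated with the statsmodels library. All data, variable names and effect sizes are invented for the example and do not reproduce the study's dataset.

    # Minimal sketch only: synthetic data, not the experiment's actual observations.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(42)
    n = 364                                        # same sample size as reported; content invented
    treated = rng.integers(0, 2, size=n)           # randomised intervention indicator (0/1)
    latent = 0.8 * treated + rng.logistic(size=n)  # assumed effect on a latent knowledge scale

    # Discretise the latent scale into an ordered categorical outcome.
    knowledge = pd.Series(pd.cut(latent, bins=[-np.inf, -1.0, 0.0, 1.0, np.inf],
                                 labels=["low", "medium", "high", "very high"]))
    exog = pd.DataFrame({"treated": treated})

    # Ordered logit: ordinal knowledge score regressed on treatment status.
    model = OrderedModel(knowledge, exog, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())

The same set-up could be rerun with an attitude score as the outcome to mirror the knowledge-versus-attitudes comparison described in the abstract.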

O 208

Modeling Evidence Based Policy Making in a Quebec Health Program


P. Smits 1, J. L. Denis 1, M. F. Duranceau 1
1

ENAP-Université de Montréal, Montréal (Québec), Canada

Health Impact Assessment (HIA) is an evaluation procedure to ensure that all levels of government consider the potential impact of their decisions on the health and well-being of the population. We conducted an evaluation of the impact that HIA, and other health-oriented practices of the Quebec government, have on Evidence-Based Policy (EBP) decision-making within ministries. We used multiple case studies, ran semi-structured interviews and collected documentation. The analysis is based on Rieper's model of EBP decision-making processes using evaluations as evidence. The results highlight five mechanisms occurring (or not) in ministries: producing accessible and timely evidence, disseminating it to key stakeholders, bringing it to the attention of politicians and support staff, the significance attached to evaluation evidence, and the use of evidence in discussions and debates. The findings also emphasise the importance of the coordination unit and network in the development of HIA and in reinforcing cooperating ministries' EBP decision-making. Keywords: Health impact evaluation; Evidence based decision making; Network; Public policy;


S4-27 Strand 4

Paper session

Evaluation of research programmes


S4-27
O 209

Theory-based Impact Evaluation and Research and Development activities. The Italian case
R. Lumino 1
1

Università Federico II of Naples, Dipartimento di Sociologia Gino Germani, Naples, Italy

Thursday, 4 October, 2012

14:00 – 15:30

The paper focuses on theoretical and empirical issues in evaluating the effects of public policies in the Italian case, with an emphasis on Research and Development (R&D) activities. Our aim is to show the utility of combining social network analysis and theory-based impact evaluation (TBIE), proceeding from a small case study of the policy to create technological districts in Italy. In recent years, the primary purpose of many social and economic programmes financed by public agencies has been to change the relationships among groups and organisations, both public and private, within a decentralised model of governance. Relationships have therefore become important for evaluating programmes; in this sense social network analysis (SNA) can be used as an important tool to identify what the network is, how it operates, and how it affects programme outcomes. SNA opens the black box of a programme's processes, providing a qualitative and quantitative assessment of network relationships, and can therefore usefully complement a TBIE approach, especially when applied to development interventions. TBIE addresses the evaluation design not only to the question "what works?", but also "why?" and "under what circumstances?". It helps us know not only what the outcomes of a programme are but also how and why those outcomes appear or fail to appear. It requires surfacing the assumptions on which the programme is based in considerable detail: what activities are being conducted, what effect each particular activity is expected to have, what the programme does next, what the expected response is, what happens next, and so on, up to the expected outcomes. In this paper we analyse public funding of R&D in the business sector through direct subsidies or fiscal measures, with application to the Italian case. Traditionally, the evaluation criteria for the success of an R&D programme are based on the additionality concept, focusing on input or output additionality. These are both interesting questions, but in neither case is causality examined, nor is there an explicit or implicit model of how the firm uses public support. We want to understand the impact of R&D interventions by extending additionality research to indicators relying on the concept of behavioural additionality, and especially network additionality, related to the type and strength of the relationships that the interventions have created, the implications of those relationships for the local context, and so on. Keywords: Theory-based impact evaluation; Research and Development activities; Social network analysis; Network additionality;
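As a purely illustrative aside, the sketch below shows one simple way the network-additionality idea described above could be operationalised: comparing structural indicators of a collaboration network before and after a funding intervention, using the networkx library. The edge lists, node names and choice of indicators are assumptions made for the example, not data from the Italian technological-district case.

    # Illustrative only: invented collaboration links among actors A..H.
    import networkx as nx

    before = nx.Graph([("A", "B"), ("C", "D"), ("E", "F")])
    after = nx.Graph([("A", "B"), ("A", "C"), ("C", "D"), ("D", "E"),
                      ("E", "F"), ("F", "G"), ("G", "H")])

    def describe(g: nx.Graph, label: str) -> None:
        # Simple indicators often used when discussing network additionality:
        # how dense and how connected the collaboration structure is.
        print(f"{label}: nodes={g.number_of_nodes()}, edges={g.number_of_edges()}, "
              f"density={nx.density(g):.2f}, "
              f"components={nx.number_connected_components(g)}")

    describe(before, "before intervention")
    describe(after, "after intervention")

In a theory-based design, such before/after indicators would be read against the programme theory (why the intervention was expected to create new links) rather than taken as evidence of impact on their own.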

O 210

Towards a European Research Area for Sustainable Development? Monitoring integration effects of the 7th EU Framework Programme
A. Martinuzzi 1, M. Hametner 1
1

Vienna University of Economics and Business, Research Institute for Managing Sustainability, Vienna, Austria

Monitoring and evaluation of R&D programmes is a challenging task: while the theory of innovation systems has become more and more complex, decision makers are no longer focusing only on the proper spending of research funds and high scientific impact but ask for an evaluation of the impacts on the economy, the environment and society. Especially when it comes to assessing the contributions of R&D programmes to sustainable development, the broad variety of definitions, the fuzziness of interrelated objectives and the complexity of diffusion and dissemination mechanisms have to be considered. Measuring the impacts of R&D programmes by looking at sustainable development indicator sets would not show any effect at all, while scientific output indicators would not allow any monitoring of effects on the economy, the environment and society. Monitoring the 7th EU Framework Programme poses an additional challenge: with a total budget of more than 50 billion Euro, a duration of seven years, a broad variety of themes and several thousand research topics and projects, the programme is simply huge. Monitoring and evaluating each research project or new technology would therefore not be feasible. In order to deal with these challenges we developed and implemented a monitoring system based on qualitative estimations of expected impacts. It combines a scientific, evidence-based screening by a group of experienced researchers with an external expert validation. For estimating the expected impacts we used the EU Sustainable Development Strategy, which we transferred into an easy-to-use referential framework. In order to make the results of the monitoring system available to the public and to stimulate public debate on particular issues, a public platform has been set up at www.FP7-4-SD.eu, including an interactive database and allowing the analysis of the monitoring data from various points of view. The monitoring system makes it possible to identify research topics, projects and partners that contribute to achieving each single objective of the EU Sustainable Development Strategy. It links policy objectives with research activities, and integrates structural information as well. Through social network analyses we assessed in which thematic areas European research networks have already emerged and in which areas interventions are recommended to foster a European research area. In our presentation we will discuss the challenges of setting up a monitoring system for the 7th EU Framework Programme, describe its key features and present selected results. André Martinuzzi (andre.martinuzzi@wu.ac.at) is director of the Research Institute for Managing Sustainability at the Vienna University of Economics and Business. He co-ordinated the EU-funded EASY ECO – Evaluation of Sustainability programme (2002–2010) and several evaluation projects at national and international level. Research areas: corporate sustainability, sustainable development policies, evaluation research.

Markus Hametner (markus.hametner@wu.ac.at) is a research fellow and project manager at the Research Institute for Managing Sustainability. He holds a Master's degree in Ecology with a specialisation in Environmental Economics and Management. Research areas: monitoring and evaluation, sustainability indicators and corporate sustainability. Keywords: Monitoring System; RTD Programme Evaluation; Sustainable Development; Social Network Analysis; European Research Area;


O 211

The use of peer review during the development of research and innovation strategies by European regions
R. Rakhmatullin 1, I. Midtkandal 1, A. G. Mariussen 2, 3
1 European Commission (DG-JRC), Institute for Prospective Technological Studies (IPTS), Seville, Spain
2 Leader, BA Institute, Vaasan Yliopisto
3 Senior Researcher, Nordland Research Institute, Bodø

Thursday, 4 October, 2012

14:00 – 15:30

In June 2011, the European Commission launched the smart specialisation platform to support regions and Member States in developing their research and innovation (R&I) strategies. This new facility is there to help European regions to define their R&I strategies based on the principle of smart specialisation. This platform (S3 Platform) assists regional authorities to design smart specialisation strategies (S3). This is seen as the next logical step to reaching the goals set by the European Union in the field of research and innovation (Europe 2020 strategy). The S3 concept suggests that each region can identify its strongest assets and R&I potential so that it can then focus its efforts and resources on a limited number of priorities where it can really develop excellence. The regions are then expected to build on their competitive advantage and to compete in the global economy. However, not every region is equally successful in developing an original regional innovation strategy for smart specialisation (RIS3). Some regions are struggling to focus on clear priorities, while others tend to reproduce other regions strategies. This is where the S3 Platform is able to provide direct assistance to regions and Member States in developing, implementing and monitoring smart specialisation strategies by providing feedback and information to Member States and regions. Among other instruments, the S3 Platform team has implemented a region-by-region peer review methodology to assess the smart specialisation strategies drafted by regions. During a peer review exercise, an EU region presents its RIS3 strategy for examination by peer regions. The peer regions are involved as equals and act as this regions critical friends. Such a peer review exercise allows regions under review to examine their RIS3 strategy from the perspectives of other regions with an ultimate goal to improve its policymaking, employ best practices and follow verified standards in the R&I policy area. The outcomes of the peer review exercise will then be used to improve regional R&I policy. While the European Commission uses peer reviews as a tool at a Member State level in a number of policy areas (the Open Method of Co-ordination) for some time now, it still appears to be an under-studied phenomenon in the context of smart specialisation strategies within the context of EU regional policy-making. This research paper will attempt to address this gap and will evaluate the role of the peer review exercise in the process of the development of RIS3 strategies by European regions. Keywords: Smart Specialisation; RIS3; Peer review; European region;

O 212

Different approaches to the evaluation of effects of basic research – Results from a pilot project
B. Sandberg 1, S. Bylin 2, E. Mineur 1, S. Söderberg 1, P. Jansson 1
1 Swedish Research Council, Evaluation Unit, Stockholm, Sweden
2 Swedish Research Council, Department of Research Funding, Stockholm, Sweden

The Swedish Research Council is a government agency that provides funding for basic research of the highest scientific quality in all disciplinary domains. Part of the Swedish Research Councils remit is to evaluate research and assess its scientific quality and significance. In general, there is an increased focus on describing the effects of research funding in Sweden and several other European countries. The Swedish government (in its latest Research and Innovation Bill) urges the Swedish Research Council to put focus on analyzing effects of basic research (Prop 2008/09:50, 21). This has led to an increased focus on evaluating not only the scientific quality of research but also to assess its societal impact. This paper, presents the results from a pilot project aiming to develop methods valid to identify and describe effects of basic research activities. The project focusing on testing three different evaluation approaches in order to trace effects of basic research activities within the research area of criminology. More specifically, the focus here is on the effects of research activities within a single research program that was initiated in 1994. Foremost, the study aims to test and evaluate three different methodological approaches to describe effects of basic research from a single research program. It has been performed by the Evaluation Unit at the Swedish Research Council. In the first study, the methodological approach has been to use a program theoretical approach to the research program in order to compare the logical framework of the initiative with implementation and outcomes of the program as perceived by different stakeholders within the criminal justice policy and research field. In the second study, the methodological approach has been to follow a strategic sample of individual research projects funded by the research program and map out what impact they have had on policies, guidelines, decision-making etc. This has been done by interviewing research project leaders and (possible) users of the research results within the criminal justice system. Finally, the third study analyzed to what extent criminological research are referred to by the involved policy makers in the political decision making procedures in two criminal policy debates, as expressed in the policy preparatory work in the official documentation from the Swedish parliament. The paper starts by problematizing the concept of effects of basic research followed by a description of the three different studies and their results. Finally the paper concludes and synthesizes the experiences from using the different methodological approaches and discusses their validity in a basic research context. The main conclusion of the study is that there is a need to apply method triangulation to identify and analyze effects of basic research. Keywords: Effect; Basic research; Method triangulation; Method development;

S4-29 Strand 4

Panel

Using research to rethink programme implementation: The Health Systems Strengthening Experience in Nigeria
O 213

Thursday, 4 October, 2012

14:00 – 15:30

Using research to rethink programme implementation: The Health Systems Strengthening Experience in Nigeria
M. O. Ojukwu 1
1

ActionAid, Right to Health, ABUJA, Nigeria

There is an apparent need for communities to be involved in health systems strengthening interventions as mobilising agents for access to quality health care in Nigeria. However, little is known about the existence and the success rate of Ward Health Development Committee (WHDC) operations that could inform programme interventions to increase the uptake of health services at Primary Health Care (PHC) centres through advocacy, social mobilisation and social audit/accountability. Prior to Global Fund support in Nigeria, there was a general lack of information on the operations of WHDCs; more troubling, most communities did not have such community systems and structures for promoting health services and community participation. This has further weakened the general health system in Nigeria, as there is a huge disconnect between health systems strengthening policies and their operationalisation. Closely linked is a compelling need to assess the governance and leadership of community systems and structures by tracking WHDCs' community mobilisation actions and participation in health, as supported by civil society organisations (CSOs). A year into the intervention, an operations research study was conducted to generate specific and more accurate evidence for planning programme interventions that are responsive to people's health needs, and to provide baseline data that can subsequently be used to monitor and evaluate the value and/or impact of the Health Systems Strengthening (HSS) interventions. Information on the roles and relationships of WHDCs as the link between the community and PHC services is necessary both for operationalising the HSS framework and for establishing baselines within the intervention for tracking the result indicators. The purpose of this research was to ascertain the mobilisation value and effects of the WHDCs in their communities on the uptake of health services at the PHCs. By evaluating the effectiveness of the roles and relationships of WHDCs as mobilising agents for community participation in healthcare delivery and service uptake, this study provided important data as well as methodological information for the HSS interventions, especially on community systems strengthening. Firstly, the study unlocked knowledge and brought to the fore uncharted and as yet unexamined community perceptions of primary health care services and the roles of WHDCs. Secondly, it provided useful information on the functionality of the WHDCs in bridging the gaps between the PHCs and the communities and in creating links for a mutually beneficial relationship between them for sustainable health outcomes. Thirdly, the study revealed the capacity level required to link communities to participation in health service uptake, particularly among rural communities. In addition, the research responded to the pressing need for specific and more accurate evidence-based HSS programme planning and interventions. The operations research therefore responded to the overall indicator targeting governance and leadership, namely to determine the number of WHDCs mobilising community participation in health and supported by CSO networks. This provided evidence for strengthening systems of data collection for evidence-based programming and good accountability to project stakeholders at all levels in Nigeria. Keywords: Operations Research; Health Systems Strengthening (HSS); Ward Health Development Committees (WHDCs); Evidence-based programming; Social accountability;


S3-17 Strand 3

Paper session

Evaluation and gender mainstreaming


S3-17
O 214

Why is it so difficult to see results from gender mainstreaming initiatives? Experiences from assessing gender mainstreaming initiatives in Sweden.
A. C. Callerstig 1, K. Lindholm 2
1 Linköping University, Thematic Studies – Gender Studies, Linköping, Sweden
2 Stockholm University, Centre for Organizational Research, Stockholm, Sweden

Thursday, 4 October, 2012

14:00 – 15:30

The implementation problems of gender mainstreaming initiatives show many similarities across organisations and societies (Moser 2005, Walby 2005). Even in Sweden, which could be argued to be a most likely case, results have been scarce (Sainsbury and Bergqvist 2009). At the same time, traditional models of planning and evaluation are questioned in relation to how well they account for and explain unexpected and sometimes unwanted effects. It has been argued that unexpected events and the irregularity of processes have been taken too little into account in evaluation efforts (Taleb 2007). Previous studies of change towards gender equality in organisations have highlighted both the impact of irregularity and the practices of individual so-called tempered radicals, meaning persons who understand and exploit possibilities and counter threats as they arise (Meyerson and Scully 1995, Meyerson 2001). But how might the impact of such shifting and contextual circumstances, and of individual responses to these unpredictable occurrences, be accounted for in the evaluation of gender equality initiatives? In the paper, the limits and possibilities of standard models for the planning and evaluation of gender mainstreaming initiatives are problematised, with special focus on the impact of the individual, or the human factor. The paper builds on results from an ongoing evaluation of a Swedish government programme on gender mainstreaming in local and regional authorities 2009–2013 (Lindholm ed. 2011). Assessing the impact of gender mainstreaming initiatives is important in order to develop future policies, but are traditional evaluation models suitable for such tasks? The paper discusses the complexity of evaluations of gender mainstreaming initiatives, focusing on the impact of individuals on the results of such initiatives in order to explain unexpected results. It seeks to problematise how the impact of individuals on implementation can be understood. In gender equality projects, participants have often been understood as either grim resisters or passive implementers with a negative impact on the projects in terms of learning and the development of sustainable change processes (Howard 2002). In the paper, the concept of resistance is explored as an interactive process (Swan and Fox 2010). The main argument is that both practices that entail resistance towards gender equality objectives and practices that entail resistance towards gender blindness in organisations are important for explaining outcomes. References: Howard, Patricia L. (2002) Beyond the grim resisters: towards more effective gender mainstreaming through stakeholder participation, Development in Practice, 12(2), May 2002. Lindholm, Kristina (ed.) (2011) Jämställdhet som verksamhetsutveckling, Studentlitteratur. Meyerson, Debra and Scully, Maureen (1995) Tempered Radicalism and the Politics of Ambivalence and Change, Organization Science, 6(5): 585–601. Meyerson, Debra (2001) Tempered Radicals: How People Use Difference to Inspire Change at Work, Harvard Business School Press. Moser, Caroline and Moser, Annalise (2005) Gender Mainstreaming since Beijing: a review of success and limitations in international institutions, Gender and Development, 13(2), July 2005. Sainsbury, Diane and Bergqvist, Christina (2009) The promise and pitfalls of Gender Mainstreaming, International Feminist Journal of Politics, 11(2), June 2009, 216–234. Swan, Elaine and Fox, Steve (2010) Playing the game: Strategies of resistance and co-optation in diversity work, Gender, Work and Organization, 17(5): 567–589. Taleb, Nassim Nicholas (2007) The Black Swan: The Impact of the Highly Improbable, Random House. Walby, Sylvia (2005) Gender mainstreaming: Productive tensions in theory and practice, Social Politics, 12(3): 321–343.

O 215

Is the economic case an alternative to the rights-based approaches to forward the gender equality agenda? Answers from the evaluation field
P. Alvarez 1
1

INFOPOLIS, Evaluation, Bilbao, Spain

Gender equality remains elusive as a concept when it comes to measuring its impact and/or its benefits. Generally speaking, policy-makers do not seem to fully grasp the rationale and need for some of the gender equality policies in place nowadays. On the other hand, feminist groups, gender equality practitioners and evaluators consistently claim that current efforts to move the gender equality agenda forward are inefficient, among other reasons because of the lack of political will for a fully-fledged implementation of standard practices. Furthermore, the impact of gender mainstreaming as a successful strategy for the achievement of gender equality remains unclear, both as regards the benefits it yields and as regards the extent to which it is implemented in a systematic manner.


From a rights-based approach, gender equality is a value in itself, and the women's rights underpinning equality policies provide enough legitimation without requiring further rationale or quantification of benefits. Conversely, from a market-oriented perspective, the value of gender equality – and even its price – needs to be unpacked so that gender equality issues can be included as a key element of the political agenda. In the context of an ongoing debate about societal models, heavily influenced by current discussions on economic growth and austerity measures, the dimensions of social welfare and economic survival appear to be at odds when discussing gender equality.


This paper is based on a meta-analysis of evaluations focusing on gender equality policies and programmes or, alternatively, on the effect of sectoral policies and programmes on the status of gender equality. Evaluations provide an unconventional lens through which to examine the normative and theoretical frameworks within which gender equality results are measured. Additionally, the paper examines recent evaluative research aimed at measuring the impact of the economic crisis on women and on the overall situation of inequality between women and men.

O 216

Thursday, 4 October, 2012

14:00 – 15:30

Evaluating Gender Equality Policies in Portuguese Municipalities: Looking at two experiences


P. Teixeira 1, P. Antunes 1, S. Monteiro 1
1

Logframe Consultoria e Formação Lda, Lisboa, Portugal

Bio of presenter: Owner of Logframe, a consultancy and professional training firm based in Lisbon. He collaborates with several universities, coordinated the European Anti-Poverty Network Lisbon office, is a member of the European Evaluation Society Board and a founding member of the Portuguese Evaluators Association. He is co-author of three books on planning and evaluation. Gender equality and gender equality policies and programmes emerged as a top priority for Portuguese governments in recent years. Successive governments changed labour and employment laws, implemented all sorts of national programmes and even created a state institute solely to study and deal with gender equality issues in Portugal. With all these instruments in place, the subject of gender equality gained a public recognition it never had before, and all social actors were called upon, at one time or another, to implement policies, codes and practices that promoted equality. In this paper we take a look at how Portuguese municipalities addressed gender equality issues and created local policies on the subject, and at how the implementation of these policies promoted, or did not promote, a more equal community and workplace, one where compatibility between personal and professional lives is a reality. The creation of Equality Plans in public institutions was one of the instruments that became popular in recent years and formed the basis for reforms in organisations that previously lacked any concern about gender issues. We make a critical analysis of these planning instruments, especially of their construction processes and methodological rationale. In analysing these instruments we also point out the challenges of tackling an issue with such profound cultural roots and implications, and how hard it is to deconstruct perceptions and values that are well established and were important building blocks of the structure of society. The paper focuses on the experiences and practices of two municipalities in the north of Portugal, probably the most culturally conservative region of the country, Barcelos and Vila Verde. It gives an insight into the methodology chosen to carry out the evaluation, the difficulties and challenges the evaluation team met and how they were dealt with, the evaluation's main findings, the recommendations given and the changes implemented after the evaluation process. Keywords: Gender equality; Gender discrimination; Public Policies; Culture; Labor Policies;


S5-20 Strand 5

Panel

Valuing Monitoring and Evaluation (M&E) Readiness – Evidence for Evaluation
O 217

Valuing Monitoring and Evaluation (M&E) Readiness – Evidence for Evaluation


Thursday, 4 October, 2012
14:00 – 15:30
S. Premakanthan 1
1

Symbiotic International Consulting Services (SICS), Ottawa Ontario, Canada

The Treasury Board Secretariat (TBS) of Canada introduced its new Evaluation Policy on April 1, 2009. The policy requires all government programme expenditures to be evaluated (mandatorily) on a five-year cycle beginning in 2013. The paper examines the state of programme Monitoring and Evaluation (M&E) readiness of government organizations and agencies in meeting the TBS requirements. It explores the response of government organizations and agencies to this policy. More specifically, do government organizations and agencies conduct assessments to determine where they are now and how they propose to bridge the gaps in programme M&E readiness through a systematic approach (planned actions), similar to the way audit findings, both internal and external, are addressed? What are the current practices, including tools and techniques, that facilitate programme M&E readiness assessments? Is the systematic gathering of M&E readiness evidence by programmes and organizations a common practice? Is there a clear understanding of the value of M&E readiness evidence for informed decision making about whether to undertake evaluations? Programme M&E readiness assessments, if done systematically, will in the long run avoid investment in evaluations that yield very poor evidence, or no evidence, about programmes and their impacts on beneficiaries. The paper introduces a systematic approach to planning for programme M&E readiness and a tool for continuously assessing the M&E readiness of programmes. The paper also discusses the findings of the annual report on the health of the evaluation function (M&E readiness of the Government of Canada) (2010). Keywords: Readiness Assessment Evidence; Evaluation;
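The paper's actual readiness tool is not reproduced here; as a hypothetical illustration only, the sketch below shows what a minimal weighted readiness-assessment scorer might look like. The dimensions, weights and ratings are invented and are not those used in the Canadian assessment.

    # Hypothetical M&E readiness checklist scorer; dimensions and weights are invented.
    READINESS_DIMENSIONS = {
        "logic_model_in_place": 0.25,       # programme theory / logic model documented
        "indicators_defined": 0.25,         # performance indicators with baselines and targets
        "data_collection_running": 0.30,    # monitoring data actually being collected
        "evaluation_plan_resourced": 0.20,  # evaluation plan with budget and timeline
    }

    def readiness_score(assessment: dict) -> float:
        """Weighted readiness score in [0, 1], from ratings in [0, 1] per dimension."""
        return sum(weight * assessment.get(dim, 0.0)
                   for dim, weight in READINESS_DIMENSIONS.items())

    example_program = {
        "logic_model_in_place": 1.0,
        "indicators_defined": 0.5,
        "data_collection_running": 0.25,
        "evaluation_plan_resourced": 0.0,
    }
    print(f"readiness score: {readiness_score(example_program):.2f}")  # prints 0.45

Tracking such a score over time is one simple way to make "M&E readiness evidence" visible before committing to a full evaluation.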


S3-14 Strand 3

Paper session

Evaluation data and performance assessment I


S3-14
O 218

Lessons learned from a unique database of governance projects evaluation


L. Mackellar 1, A. Ferreira 2, F. Burban 3, E. Tourres 4
1 Independent consultant, UNDEF Evaluation Team leader, Paris, France
2 Independent consultant, Evaluation Manager, Brussels, Belgium
3 Independent consultant, Evaluation Expert, Brussels, Belgium
4 Transtec, Executive Director, Brussels, Belgium

Thursday, 4 October, 2012
14:00 – 15:30

In this paper, we draw on evaluations of over seventy small (average USD 250,000) two-year projects in the area of democracy issues (human rights, rule of law, media, youth, gender, and ethnic minorities) financed by the UN Democracy Fund (UNDEF). All projects were implemented during 2008–2011 by NGOs, and all had as their overriding goal the strengthening of national civil society to serve as a force for democratic development and good governance. The UNDEF evaluation exercise offers a unique database: a large number of post-project evaluations of projects of the same size, financed by the same donor, implemented using the same modality, evaluated by a team employing the same methodology and reported in a standard evaluation report template. The database has already been used as a major component of cluster evaluations of projects in major thematic areas – media and youth contributions to democracy (to date) and, in coming months, rule of law and elections-related assistance. These cluster evaluations are being used to explore factors that can help to explain project success or failure. The paper will analyse the evaluation database to derive generalisations about factors promoting or weakening impact in projects of this nature. It will also use the clusters' conclusions to look at how different factors operate at the sector level. The proposed paper could significantly contribute to a paper session on governance, networks and information. Among the generalisations that have emerged to date and that could be shared are: the importance of transparency in communication with beneficiaries and of follow-up with the networks formed, as key components for better programming (all of these evaluations are public: http://www.un.org/democracyfund/Docs/PostProjectEvaluations.html); the emerging use, appropriate or inappropriate, of new technologies in governance projects – the penetration of these technologies, while growing, is still variable, calling for careful assessment in project design; the critical importance of doing a proper, research-grounded baseline assessment, particularly for training and capacity building, where new technologies and the rise of social networking have called into question the relevance of the traditional packages delivered; the constrained role of Project Cycle Management (PCM) – dealing adroitly with political factors can trump poor PCM, just as good PCM can fail when political forces are not well incorporated into project design and implementation; and the increasing call by democracy assistance beneficiaries not for more money but for more political support, raising existential issues for the donor community. At least three team members will be represented at the conference to participate in discussions regarding the challenges faced in such evaluations, such as assessing impact when projects are of small size and short duration, measuring attitudinal and institutional change, and the difficulty of drawing a line between contribution to and attribution of observed impacts at this scale. The conclusions presented can be fine-tuned in cooperation with EES, using the various experiences and lessons learned from UNDEF post-project evaluations to contribute significantly to a paper session on governance, networks and information. Keywords: Governance and support to Democratization; Networking and civil society; New technologies in Democracy projects; Impact evaluation;

O 220

Evaluation scheme development of gender policy of science and engineering related to the performance management
M. Mun 1, H. Lee 2
1 Korea Advanced Institute of Women In Science Engineering and Technology, Planning and Policy, Seoul, Republic of South Korea
2 Korea Advanced Institute of Women In Science Engineering and Technology, Director, Seoul, Republic of South Korea

We have developed an online performance management system based on the BSC (balanced scorecard) tool for the Korean governmental policy of fostering and supporting women in science, engineering and technology. Focusing on the goals of the policies, we built a set of performance indices and applied them to monitor the progress of projects and to check the level of achievement. In order to feed the results (the output compared to the input) on the key performance indices back into the evaluation, we have set up a multi-phase scheme that includes a quantitative assessment and a consulting stage to enhance the overall productivity of the policy. We will introduce an example of the evaluation results after applying the scheme to our policy projects. Keywords: Policy of fostering and supporting women in science; Engineering and technology; BSC performance management system; Evaluation scheme of multi-phase;
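As an illustrative aside, the following minimal sketch shows how KPI achievement rates can be rolled up into balanced-scorecard perspective scores of the kind described above. The perspectives, indicators, targets and weights are invented for the example and do not come from the Korean system.

    # Minimal sketch of rolling KPI results up into balanced-scorecard perspectives.
    # Perspective names, KPIs, targets and weights are invented for illustration.
    scorecard = {
        "policy outcomes": [  # perspective -> list of (kpi, actual, target, weight)
            ("women researchers recruited", 420, 500, 0.6),
            ("retention rate after 3 years", 0.78, 0.80, 0.4),
        ],
        "programme delivery": [
            ("training sessions delivered", 95, 90, 1.0),
        ],
    }

    for perspective, kpis in scorecard.items():
        # Achievement = actual/target, capped at 100% so overshoot does not hide gaps elsewhere.
        score = sum(w * min(actual / target, 1.0) for _, actual, target, w in kpis)
        print(f"{perspective}: {score:.0%}")

A consulting stage of the kind mentioned in the abstract would then interpret the weakest perspectives rather than treating the composite score as an end in itself.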


O 221

Evaluating the lifetime impact of government activities to improve resource efficiency


C. Michaelis 1, B. Leach 2
1 Databuild Research and Solutions, Birmingham, United Kingdom
2 WRAP, Banbury, United Kingdom

The UK Department for the Environment, Food and Rural Affairs (Defra) aims to increase resource efficiency in the English economy. They support a variety of activities ranging from public education programmes to investment in renewable energy or recycling infrastructure. The impact of different interventions has different lifetimes; a recycling plant may have a life of more than ten years while a consumer campaign may only make a difference for a few months. The lifetime profile of impacts can also vary; some interventions may see rising impact as take up increases over time, others may see a decline as awareness tails off or technological and social changes make the intervention less relevant.

Thursday, 4 October, 2012

14:00 – 15:30

In order to have a full understanding of the value for money achieved by a programme it is essential to be able to estimate the lifetime benefits. However, as evaluations are generally undertaken before the end of the interventions lifetime, this requires some element of forecasting. Until 2010, Defra had worked on the assumption that the impact of all activities, other than capital investment, declined to zero over five years. However, as there was no evidence for this assumption, research was commissioned to investigate the lifetime impacts of all Defras resource efficiency programmes in order that evaluations could better reflect reality. Longitudinal research was conducted to evaluate the four outcomes and to establish an estimate of lifetime impacts resulting from different types of activity. Four types of outcome were identified: Decline; where the adoption of a measure tails off or ceases. This could be because the underlying activity ceased (e.g. a firm ceasing to trade) or through behaviour change (e.g. a consumer ceasing to recycle) Ramp up; where the adoption of the measure increases e.g. through increased production or a consumer recycling more Roll out; where an organisation or individual applies the practice to another part of their organisation (e.g. implementing the measure in another plant) or another part of their life (e.g. recycling more products) Replication; network effects where others follow the example of individuals or firms that have been influenced by the activities The findings of this research have been incorporated in an impact model which estimates the lifetime impact and cost effectiveness of all Defras resource efficiency activities. This paper will report on the results of this work; addressing the conference themes of innovation in research, methods and practices and in the evaluation of regional, social and development programmes and policies. Keywords: Lifetime; Sustainability; Impact; Government; Value for money;
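As an illustrative aside, the sketch below shows how a lifetime impact can be estimated once a yearly impact profile (such as the decline and ramp-up outcome types described above) is assumed. The profile shapes, rates, units and the five-year horizon are invented; they are not the parameters of the Defra/WRAP impact model.

    # Illustrative lifetime-impact profiles; shapes and numbers are invented assumptions.
    def decline(year: int, rate: float = 0.4) -> float:
        """Impact that tails off over time, e.g. behaviour change fading."""
        return (1 - rate) ** year

    def ramp_up(year: int, growth: float = 0.2, cap: float = 2.0) -> float:
        """Impact that grows as take-up increases, up to a cap."""
        return min(1 + growth * year, cap)

    def lifetime_impact(first_year_impact: float, profile, years: int = 5) -> float:
        """Sum of yearly impacts over an assumed lifetime (no discounting applied)."""
        return sum(first_year_impact * profile(y) for y in range(years))

    # e.g. an intervention saving 10 units in year one, under each profile:
    print(f"declining campaign: {lifetime_impact(10.0, decline):.1f} units over 5 years")
    print(f"ramping-up measure: {lifetime_impact(10.0, ramp_up):.1f} units over 5 years")

Roll-out and replication effects could be represented in the same way, as additional multipliers applied to later years of the profile.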

O 076

Reframing the role of monitoring in evaluation process


F. Mazzeo Rinaldi 1
1

University of Catania, Catania, Italy

The renewed functions of local authorities, together with the gradual spread of participation processes and the outsourcing of public services, have helped spur the recent evaluation debate on the effects produced by public policies: from a rigorous evidence-based approach to the identification of analytical dimensions linked to implementation processes, seen from an incremental perspective. The different ways of interpreting policies and public programs, as well as differences in establishing causal links between the phenomena under study, have prompted a re-discussion of, on the one hand, the role to give to the mechanisms underlying public programs and, on the other, the links between contexts, implementation arrangements and observed effects. The paper offers a critical contribution on the inadequate attention paid to the conditions under which it is possible to acquire the information required to support these different evaluation perspectives, at both a theoretical and a methodological level. So far there has been no significant progress in redefining the role of monitoring, which has not received the attention it deserves: monitoring aims, instruments and practices appear increasingly unrelated to evaluation aims. This paper aims to show under which conditions monitoring systems could recover an informative role, oriented towards evaluating and improving public policies. Keywords: Monitoring; Methods; Evaluation;


S2-02 Strand 2

Paper session

Capturing the contribution of programs in complex environments


O 222

Accounting for alternative explanations through contribution analysis


S. Lemire 1
1

Ramboll Management, Copenhagen, Denmark

Thursday, 4 October, 2012

15:45 – 17:15

John Mayne's introduction of contribution analysis (CA) has attracted widespread attention within the global evaluation community. The numerous sessions on CA at the European Evaluation Society conference in 2010 indicated the sustained and growing interest in CA among evaluators in Europe. In his concept of the embedded theory of change (2011), Mayne addresses the importance of accounting for underlying assumptions and risks, external factors, and principal competing explanations embedded in the program being evaluated. This is a central step in contribution analysis, as one can only infer credible and useful contribution stories if the embedded theory of change accounts for other influencing factors and alternative explanations. However, CA in its current manifestation is not very prescriptive about how to assess the relative importance of these factors in practice. Accordingly, many evaluators default to simply listing the most salient factors in the reporting of their findings. In this paper, the presenter will propose the Relevant Explanation Finder (REF): a concrete tool that supports the systematic evaluation of external factors according to their (a) certainty, (b) robustness, (c) range, (d) prevalence and (e) theoretical grounding. The REF provides an operational framework for dealing with influencing factors and alternative explanations and allows the evaluator to establish a chain of evidence which spans from the early identification of alternative explanations to the final articulation of the contribution story. Accordingly, the component steps underpinning the examination of, and conclusions regarding, the influencing factors and alternative explanations are transparent to the commissioner of the evaluation and the stakeholders involved. To conclude the session, the presenter will engage in a discussion of the practical experiences of using the REF in an evaluation. Keywords: Contribution analysis; Alternative explanation; Theory-based evaluation;
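A minimal sketch of how a REF-style assessment could be tabulated is given below; the 1–5 rating scale, the example explanations and the ratings are assumptions for illustration, not part of the tool as proposed by the author.

```python
# Each alternative explanation is rated (assumed 1-5 scale) on the five REF
# criteria named in the abstract; the mean rating is used here only to rank
# which explanations deserve the closest scrutiny. Ratings are invented.
CRITERIA = ["certainty", "robustness", "range", "prevalence", "theoretical grounding"]

explanations = {
    "parallel national campaign": {"certainty": 4, "robustness": 3, "range": 4,
                                   "prevalence": 3, "theoretical grounding": 4},
    "general economic upturn":    {"certainty": 2, "robustness": 2, "range": 5,
                                   "prevalence": 4, "theoretical grounding": 3},
}

def mean_rating(ratings):
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

for name, ratings in sorted(explanations.items(),
                            key=lambda item: mean_rating(item[1]), reverse=True):
    print(f"{name}: mean rating {mean_rating(ratings):.1f}")
```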

O 223

Evaluating complex and comprehensive strategies: System dynamics modeling, intervention paths and contribution analysis
R. Schwartz 1, B. Zhang 2
1 2

1 University of Toronto, Canadian Journal of Program Evaluation, Dalla Lana School of Public Health & Ontario Tobacco Research Unit, Toronto, Canada; 2 University of Toronto, Dalla Lana School of Public Health & Ontario Tobacco Research Unit, Toronto, Canada

Complexity is now the focus of considerable attention in the evaluation community. Several authors, including Patton, describe the challenges posed by complex interventions and propose ways to address them. A recent volume (Evaluating the Complex: Attribution, Contribution and Beyond), co-edited by one of the authors of this abstract, draws attention to the unique challenges of evaluating comprehensive strategies. Comprehensive strategies in areas like poverty reduction, tobacco control and child labour are designed to create synergies and feedback loops that create population-level impacts through the interweaving of multiple policy and programmatic interventions. This paper addresses central problems in comprehensive strategy evaluation: attributing population-level changes to particular policy and program interventions and accounting for synergies and feedback loops. The approaches to evaluating comprehensive strategies described in Evaluating the Complex provide partial solutions to the challenges identified. Investing in systematic performance measurement and program evaluation is key. Developing quantified path logic models (Toulemonde) is a particularly promising approach. And applying contribution analysis (Mayne) enables evaluators to gain confidence in conclusions even when evaluative information is not robust. Attempts at applying these approaches have proven both challenging and not completely satisfactory. In Intervention Path Contribution Analysis (IPCA) (see Evaluating the Complex), for example, evaluative information was collected on a wide range of interventions. Yet, even in a context of reasonably high investment in evaluation there were large gaps in the availability, and issues with the quality, of evaluative information. Moreover, none of the approaches deal satisfactorily with synergies and feedback loops. Our quest for better strategy evaluation led us to an exploration of the possibility of using system dynamics modeling. This mathematical modeling technique is designed to understand the behaviour of complex systems over time and is generally used for forecasting trends into the future rather than evaluating past performance. A comprehensive literature search identified eleven published system dynamics models. Only a small number, including SimSmoke, can estimate the effects of multiple policies simultaneously, model non-linear changes, and deal with complexity by abstracting the key elements of the system and simulating their dynamic interrelationships over time. The paper will demonstrate the adaptation of SimSmoke, used in concert with IPCA, for teasing out the relative contributions of key components of the Smoke-Free Ontario Strategy to achieving population outcomes and for assessing synergies and feedback loops. To date, we have not seen any reports of direct application of system dynamics modeling to evaluating comprehensive and complex strategies. Robert Schwartz is Executive Director of the Ontario Tobacco Research Unit, Associate Professor in the Dalla Lana School of Public Health at the University of Toronto, Editor-in-Chief of the Canadian Journal of Program Evaluation and Principal Investigator of the CIHR Strategic Training Program in Public Health Policy. Dr. Schwartz directs a comprehensive evaluation and monitoring program and has published widely. Bo Zhang is a Research Officer at the Ontario Tobacco Research Unit with several years of experience in analysing complex surveys.
Keywords: Complexity; Strategy evaluation; Modeling; Intervention paths; Contribution analysis;
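For readers unfamiliar with system dynamics, the toy stock-and-flow sketch below (not SimSmoke itself) shows the kind of simulation logic referred to above: a stock of smokers evolves through initiation and quitting flows that policy levers shift. Every parameter value is invented for illustration.

```python
# Toy stock-and-flow model (not SimSmoke): a stock of smokers changes through
# initiation and quitting flows, and two policy levers shift those rates.
# All parameter values are invented for illustration.

def simulate(years=10, population=1_000_000, smokers=250_000,
             initiation_rate=0.012, quit_rate=0.04,
             tax_effect=0.0, cessation_effect=0.0):
    trajectory = [smokers]
    for _ in range(years):
        inflow = initiation_rate * (1 - tax_effect) * (population - smokers)
        outflow = quit_rate * (1 + cessation_effect) * smokers
        smokers = smokers + inflow - outflow   # yearly Euler step
        trajectory.append(smokers)
    return trajectory

baseline = simulate()
with_strategy = simulate(tax_effect=0.25, cessation_effect=0.30)
# The year-10 difference is one (simplistic) estimate of the strategy's contribution.
print(round(baseline[-1] - with_strategy[-1]), "fewer smokers after 10 years")
```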


O 224

Analyzing the European Regional Development Fund through a Contribution Analysis-approach


M. Ardenfors 1, L. Stahl 1, M. Holmström 1
1

Ramböll Management Consulting, Tillväxtanalys, Stockholm, Sweden


Lisa Stahl, Consultant, Ramböll Management; Marcus Holmström, Analyst, Ramböll Management; Matilda Ardenfors, Manager, Ramböll Management. The European Regional Development Fund (ERDF) aims to strengthen economic and social cohesion in the European Union by correcting imbalances between its regions. The ERDF finances interventions such as direct aid to investments in companies (in particular SMEs) to create sustainable jobs; infrastructures linked notably to research and innovation; and financial instruments (capital risk funds, local development funds, etc.) to support regional and local development. In total, the EU budget for the period 2007–2013 comprises 862 billion euro, 36 percent of which is dedicated to regional development policies. Sweden has received about 1.7 billion euro in structural funds for this period, more than half of which are channeled through the eight Regional Structural Fund Programs set up in Sweden by a variety of actors at regional and local levels. Each Regional Structural Fund Program has a program document, produced by representatives from municipalities, county councils, authorities, businesses and other actors. Each program has a managing authority with overall responsibility for implementing the program. The Swedish Agency for Economic and Regional Growth is the national authority managing the eight different programs, meaning that the authority handles applications and guarantees compliance with European and Swedish regulatory frameworks. Each of the eight regions also has a Structural Funds partnership, consisting of elected representatives from municipalities and county councils and representatives from labor organizations and other associations. These partnerships are tasked with prioritizing the projects considered eligible by the managing authority, based on a formal assessment. The way the regions organize their work within this multi-level governance framework differs. Our research question focuses on whether certain ways of organizing and implementing the allocation of structural funds in the context of regions are more effective than others. Evaluation within the field of regional development demands a methodology that handles complex relationships between structures, policies and actors. This study therefore requires an evaluation approach that takes into account both internal and external factors (Chen 2005) and captures contextual factors influencing outcomes on individual, interpersonal, institutional and structural levels (Pawson & Greenhalgh 2004). By using the method of Contribution Analysis (CA) (Mayne 1999, Mayne 2001, Mayne 2004, Mayne 2008) as a methodological approach to evaluate the organization and implementation of the ERDF in different regions within the Swedish national context, one purpose of this study is to show the possibilities and challenges associated with using CA in evaluating regional development. CA can be seen as an alternative approach to answering causal questions in evaluation. However, currently, CA does not address how normative questions can be dealt with. In this paper we seek to expand CA to also address normative questions by combining CA and traditional evaluation methodology. References: European Commission, Regional Policy Inforegio, The Funds, European Regional Development Fund, http://ec.europa.eu/regional_policy/thefunds/regional/index_en.cfm; exchange rate of 2012-03-12.
Swedish Agency for Economic and Regional Growth (2008) Develop Sweden: The European Structural Funds in Sweden 2007–2013. Keywords: European Regional Development Fund; Contribution Analysis; Evaluation;



S4-28 Strand 4

Panel

Addressing the micro-macro disconnect in the evaluation of climate change resilience
O 225

Addressing the micro-macro disconnect in the evaluation of climate change resilience


Thursday, 4 October, 2012
15:45 – 17:15
R. Gregorowski 1, J. Barr 1, P. Silva 1, G. Yaron 2
1 2

1 ITAD Ltd, Brighton and Hove, United Kingdom; 2 GY Associates Ltd., London, United Kingdom

Human actions are thought to have degraded ecosystems more drastically in the last 50 years than in all of history. For millions of people around the world the consequences are increasingly visible. Climate change has tended to hit poor people the hardest; they have the least capacity to prepare for, and withstand, such crises. However, more prosperous regions, including Europe, will not escape its devastating impact. Climate change is an issue of global concern, causing changes and requiring action without boundaries or borders. This presents the evaluation community worldwide with one of its most vital and urgent issues: understanding how to evaluate climate change preparedness and intervention (thus both resilience and adaptation) in a manner that will generate credible, useful knowledge that can help people in all regions of the world prepare for the future. This panel session will focus on a particular evaluation challenge: how to overcome the local-to-global and global-to-local disconnect when developing appropriate indicators for climate change resilience. By drawing from a range of programmes that have been supported by the Rockefeller Foundation Climate Change Resilience Initiative and the African Agricultural Resilience and Carbon Markets for Poverty Reduction Initiative, as well as the Strategic Climate Institutions Programme of DFID, the presenters will address four key issues:
1. What are appropriate CC resilience indicators at local, national and global levels?
2. What makes them different to more traditional livelihoods and food security indicators?
3. Is there a case to be made for developing a standardised set or basket of indicators that link local level contexts to national and global levels?
4. Does new technology bring new opportunities for new sorts of indicators to link local level monitoring and experience to national and global levels?
Panel Chair: Derek Poate, former Director, ITAD, current President, UK Evaluation Society. The paper by Panellist 1, Robbie Gregorowski, Principal Consultant and Evaluator, ITAD, will summarise ITAD's experience in developing climate change resilience indicators to support M&E systems across a number of programmes and within a range of contexts, from indicators of community resilience in an African agricultural context to indicators of institutional capacity to respond to climate change in an Ethiopian national government setting. Panellist 2, Julian Barr, Director, ITAD, will present his experience in indicator development for DFID, particularly developing standardised sets or baskets of indicators which are compatible across contexts and aggregatable at local, national and global levels. Panellist 3, Paula Silva, Independent Specialist in Climate Change M&E, will present the key findings of her recently published paper: Learning to ADAPT: monitoring and evaluation approaches in climate change adaptation and disaster risk reduction: challenges, gaps and ways forward. Panellist 4, Gil Yaron, founding director of GY Associates Ltd., will present his experiences and lessons designing and leading the independent M&E component of the DFID Strategic Climate Institutions Programme (DFID SCIP) in Ethiopia, with a particular focus on the CC indicators the programme has developed to link programme-level CC activities to higher-order expected outcomes. Keywords: Climate; Resilience; Indicators; Agriculture; Livelihoods;


S3-25 Strand 3

Panel

Evaluation Capacity Development for International Development: Lessons Learned across Multiple Initiatives
O 226

Thursday, 4 October, 2012

15:45 – 17:15

Evaluation Capacity Development for International Development: Lessons Learned across Multiple Initiatives
S. Donaldson 1, M. Segone 2, I. Traoret 3, T. Azzam 1, P. Hawkins 4
1 2 3

1 Claremont Graduate University, Claremont, USA; 2 UNICEF, New York, USA; 3 AfrEA, Ouagadougou, Burkina Faso; 4 Rockefeller Foundation, New York, USA

Evaluation capacity development (ECD) is rapidly becoming a major focus for international development efforts around the world. Multiple ECD initiatives have been introduced in the past several years that attempt to enhance ECD through a variety of activities ranging from monthly training webinars to very intensive evaluation education programs. The purpose of this panel is to introduce some of these ECD efforts and discuss and debate the strengths and challenges of implementing different ECD activities. Each panelist will offer the lessons learned from specific ECD initiatives and the discussant will synthesize this information and involve the audience in these discussions. Panel Chair: Stewart Donaldson: Dean & Professor, Claremont Graduate University. Presentation #1 (15 min) Presenter: Stewart Donaldson: Dean & Professor, Claremont Graduate University. Dr. Stewart Donaldson will provide background and context on what ECD is and the different skills and resources that can be built in an ECD initiative. He will also introduce the audience to the different methods that can be used for ECD and offer insights from the literature on the benefits of ECD and how it evolved to become a strong focus in international development. Presentation #2 (15 min) Presenter: Marco Segone: Senior Evaluation Specialist, Systemic Management, UNICEF HQ Evaluation Office. Marco Segone will focus on current web-based ECD efforts that are being conducted by UNICEF and partner institutions. His presentation will highlight the advantages of utilizing a webinar format for reaching evaluators from around the world and enhancing their training and knowledge base. Segone will also discuss some of the challenges and limitations of using the online format and share lessons learned from this ECD effort. Presentation #3 (15 min) Presenters: Issaka Traoret, International Development and Monitoring-Evaluation Specialist & Former AfrEA board member, and Tarek Azzam, PhD: Assistant Professor, Claremont Graduate University. The presenters will share the lessons learned from an ECD effort conducted during the 2012 AfrEA conference. This activity involved local African evaluators in discussions about what evaluation capacities should be built in Africa. This effort aimed to increase the mutual understanding between funders of ECD and receivers of ECD to help guide the future direction of similar initiatives. Presentation #4 (15 min) Presenter: A representative from the World Bank. This presentation will focus on efforts to develop regional Centers for Learning on Evaluation and Results (CLEAR), which are being established around the world to help foster evaluation learning in developing countries. Each center aims to provide intensive evaluation training to local evaluation practitioners to develop local evaluation capacity in each area. The representative will discuss the strengths and challenges of establishing these centers and the intended impact of such efforts. Discussant (15 min) Penny Hawkins: Senior Evaluation Officer, Rockefeller Foundation. The discussant will offer her views on these ECD efforts from the perspective of the funder and an evaluation practitioner. She, along with the panel chair, will also facilitate the audience discussion around these issues. Audience Discussions (15 min) Keywords: Evaluation Capacity Development; International Development; Evaluation Training; Online training;


S2-47 Strand 2

Panel

Meet the Authors


O 227

Meet the Authors Round Table


P. Dahler-Larsen, K. Forss, R. Stake, N. Stame, I. Davies, E. Stern, E. Vedung
Chair: Robert Picciotto. Authors: Peter Dahler-Larsen, Kim Forss, Robert Stake and Nicoletta Stame. Discussants: Ian Davies, Elliot Stern and Evert Vedung.
A Round Table entitled Meet the Authors will provide eminent evaluation authors with a platform to explain why they opted to generate (or contribute to) a recent publication that articulates a distinctive stance regarding a particular facet of contemporary evaluation theory or practice. The idea behind the Round Table is not to sell or sign books. It is to involve the audience in the contest of ideas that has long characterized the evaluation discipline. Each of the authors will be allocated seven minutes to outline the key messages of a recent publication they have written or contributed to. Next, each of the discussants will have seven minutes to assess and critique one or more of the theses presented by the authors. This will leave about half an hour for questions from the floor and responses from the authors.

Thursday, 4 October, 2012

15:45 – 17:15


S5-09 Strand 5

Paper session

Evaluation in a European context II


O 228

Learning from evaluation in the European Commission


S. Hojlund 1
1

COWI A/S, Lyngby, Denmark

Thursday, 4 October, 2012

15:45 – 17:15

This paper addresses the following main question: Do we know everything there is to know about organisational and policy learning created by evaluations? To answer the question, the paper uses data from an ex post evaluation of the European Commission's LIFE programme (2008–09). Moreover, the paper will address a few central theoretical implications with regard to learning from evaluations in complex supranational polities such as the European Commission. This is done by briefly highlighting some of the latest organisational learning theories and relating them to the general evaluation literature on evaluation utility and use. An important theme of the paper is the Commission's notorious demand for accountability, but also for legitimacy. Evaluations are one of the primary tools to gain valid findings and conclusions on political interventions and their performance. Nevertheless, experience shows that learning from evaluations, understood narrowly as an impact on intervention or policy reform, is intangible and sometimes non-existent. Despite a formal emphasis on learning within the evaluating organisation, this official emphasis is often forgotten in the implementation phase, when e.g. the terms of reference are defined narrowly or the evaluation process is single-handedly run by senior managers. However, a failure to learn from evaluations and to communicate best practices and lessons learned hampers public access to policy and programme assessment results and the possibility of verifying them. Further, it weakens the transparency of the polity, the democratic process and good evidence-based governance. Finally, a failure to learn and to reform political interventions on the basis of that knowledge might actually end up decreasing legitimacy. This paper, however, will try to go a little further than this long-known problem and challenge to political organisations, evaluators and evaluations. A narrow definition and understanding of what learning is might entail a failure to acknowledge important knowledge gains on many different levels of governance. Moreover, learning can take many forms, and the vast data material of an evaluation can be used for many ends. Therefore, this paper argues that it is important to understand the nature of learning and open up the Pandora's box of learning theory used to describe learning in organisations. Evaluators might be surprised to find that a good evaluation can have other impacts on learning than they had foreseen initially. The data used in this paper will be interviews with the desk officers and external consultants responsible for the ex post evaluation of the European Union's LIFE programme. The evaluation was implemented in 2008–09 during the time of the successor programme LIFE+. As a new generation of the LIFE programme has been formulated, this process can be studied and traces of the results of the ex post evaluation can possibly be found in the new programme. This study is underway along with the enquiry into whether other types of learning also took place during and after the implementation of the ex post evaluation. Keywords: Organisational learning; Evaluation use; Learning; Accountability; Summative;

O 229

Designing Regional Model of Knowledge Transfer. Case Study from Poland


S. Krupnik 1
1

Jagiellonian University, Center for Evaluation and Public Policy Analysis, Kraków, Poland

While knowledge transfer between universities and enterprises is acknowledged to be one of the critical factors for the competitiveness of a national economy, its low intensity in Poland is well documented. Public policies in the country have thus far focused on the supply side of the transfer and have therefore failed to engage the demand side (i.e. enterprises). The aim of the project reported in the presentation (SPIN) is to design a more demand-oriented regional model of knowledge transfer. The project is carried out by a partnership of regional government and universities. The model is supposed to provide major stakeholders (regional government, universities and other R&D units) with tools useful for the enhancement of knowledge transfer. The process of model building will consist of three phases: design, implementation and evaluation. Various methods were applied within the process of model design: desk research, workshops and interviews with stakeholders, as well as consultations with experts. The process of the model design takes into account two complementary dimensions: descriptive and normative. The descriptive dimension relates to the system of knowledge transfer. Its conceptual frame includes the concepts of environment, actors, actions and effects of knowledge transfer. The normative dimension is based on a theory-driven evaluation approach, whose conceptual frame is implemented. The model will consist of four submodels: processual (relating to procedures, databases and good practices which could make relevant processes more effective), analytical (e.g. including evaluations and other research), educational (e.g. training, study visits) and promotional (e.g. conferences). Subsequently, the model will be tested within three knowledge areas (i.e. green building, smart energy networks, translational medicine) in the Małopolska Region. The evaluation of the implementation will provide feedback for model improvement. Seweryn Krupnik, Ph.D., is a lecturer and researcher at the Center for Evaluation and Analysis of Public Policies (Jagiellonian University, Kraków, Poland). His current research interests focus on theory-based evaluation, entrepreneurship policy and institutional analysis. Keywords: Knowledge transfer; Technology transfer; Poland; Program theory evaluation;


O 230

Strategy drafters and evaluators are far away from the citizens: the case of the Development Strategy for Lodz 2020+ (Poland)
A. Weremiuk 1
1

Proewal Ewaluacja i Doradztwo Alicja Weremiuk, Warsaw, Poland


The aim of the presentation is to raise a question about the fair balance between public/majority and innovators' opinions in the process of drafting a city or regional strategy. The second part focuses on methods, including non-standard ones, that could be applied to ensure that equilibrium. The first part of the presentation is illustrated by a Polish case study of the process of drafting and consulting on the 2020+ Development Strategy for Lodz (a city in central Poland). The second part is based on a review of the methods applied most often within ex-ante evaluations of regional/city strategies and of non-standard methods (e.g. those applied within branding designs, public consultations, etc.). The public consultation process of the Development Strategy for Lodz shows a big gap between the citizens and the administration where the development vision of the city is concerned. The administration's vision focuses mainly on attracting investment by enhancing creativity and metropolitan functions, while the citizens' vision puts the inhabitants' quality of life first (which is not surprising and does not have to be contradictory). In the case analysed, the measures to implement the vision were contradictory. Was such a situation avoidable? What methods could have been applied in order to minimise such a risk? What should be valued more: methods that collect the views of the majority/public representatives (e.g. questionnaires, MAMCA, press analysis) or leaders'/innovators' opinions? How can leaders'/innovators' opinions be collected in the networked society? Could evaluators apply city games, RPGs, blogs, etc.? How should standard and new methods be mixed? Keywords: New methodology; Strategy development; Public opinion;



S2-28 Strand 2

Paper session

Perfomance monitoring and evaluation tools


O 249

Community Giant Scoreboards: An innovative Community-led Monitoring and Evaluation Tool; Evidence from Community Initiative on Maternal, Newborn and Child Survival Project in Northern Ghana
J. Ajaari 1, M. Ali 1, Z. Iddrisu 1, K. Ozar 2, N. Van Dinter 2, L. Washington-Sow 2

Thursday, 4 October, 2012

15:45 – 17:15

1 Catholic Relief Services (CRS), Tamale, Ghana; 2 Catholic Relief Services (CRS), Accra, Ghana

Introduction: Maternal and neonatal deaths in Ghana's Kasena-Nankana West and Talensi-Nabdam districts are attributed to socio-cultural practices that negatively affect access to and use of maternal and child health (MCH) services. The Community Initiative on Maternal, Newborn and Child Survival (CIMACS), a three-year pilot project privately funded through Catholic Relief Services, is designed to improve the survival of women (15–45 years) and children (0–36 months) by addressing negative socio-cultural practices in the target communities. The project aimed to improve community access and individual health-seeking behaviors; CIMACS designed a tool to promote active community participation in interventions and measure community contributions to improving performance. Intervention: The Community Giant Scoreboard (CGS) is a tool constructed and managed by community members to visibly track improvements in MCH outcomes. The CGS has pictorial illustrations of desirable and undesirable MCH outcomes. Scoring is done using two sets of ten colored sticks, desirable (green) or undesirable (red), to illustrate outcomes. Beneath the CGS picture illustrations is a frame with ten slots, each representing 10%. Data for scoring is generated from community and clinic registers for all indicators; indicators are scored and updated monthly. In each community, the CGS committee updates the scores, shares the findings and engages the larger community in assessing, analyzing and agreeing on key actions to take based on the outcomes. Results: The CGS, coupled with strategies such as Positive Deviance, Community Pregnancy Surveillance Education and the repositioning of traditional birth attendants as link providers/facilitators of institutional childbirths, has led to improvements such as a 25% increase in first-trimester antenatal registrations, a 22% improvement in attendance of 4+ antenatal visits, a 55% increase in institutional deliveries and a 25% increase in exclusive breastfeeding. Community members are inspired to take ownership of CIMACS interventions and of monitoring their performance on MCH indicators. CGS displays create competition between communities within the same district to attain the highest MCH performance and generate a sense of pride when progress is visibly displayed. Conclusion: The CGS is both a feedback and a motivational tool in MCH and other community-based interventions. It is recommended for use as an M&E tool for feeding back project performance to beneficiaries.
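A small sketch of the scoreboard arithmetic described in the intervention section: an indicator value from the registers is converted into green and red sticks on the ten-slot frame, each slot standing for 10%. The indicator names and monthly figures below are hypothetical.

```python
# Indicator names and monthly figures are invented; each ten-slot frame
# receives green sticks for the desirable share and red sticks for the rest,
# one slot per 10%.

def scoreboard_slots(desirable, total, slots=10):
    """Return (green, red) stick counts for a ten-slot frame."""
    share = desirable / total if total else 0
    green = round(share * slots)
    return green, slots - green

monthly_register = {
    "institutional deliveries": (31, 40),   # (desirable outcomes, all recorded outcomes)
    "4+ antenatal visits":      (22, 36),
}

for indicator, (good, total) in monthly_register.items():
    green, red = scoreboard_slots(good, total)
    print(f"{indicator}: {green} green / {red} red sticks")
```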

O 250

Vignette Analysis: How to measure the degree of unified performance in the Norwegian Food Safety Authority
G. Jacobsen 1
1

The Office of the Auditor General of Norway, Performance Audit II, Oslo, Norway

Introduction: This paper will give an introduction to how a vignette study can be used to compare how unified the regional and district offices of the Norwegian Food Safety Authority (NFSA) are when interpreting laws and regulations. Equal interpretation is essential in a modern democracy and public administration. The audit of the NFSA was performed by the Office of the Auditor General of Norway (OAG Norway) and was published in January 2012. The NFSA was established on 1 January 2004 and is responsible for food control, fisheries and seafood control, animal health and agricultural supervision. The NFSA is subordinate to the Ministry of Agriculture and Food. The vignette study: a suitable method to measure equality in performance. A vignette study implies that identical cases (vignettes) are sent to the same type of public agencies. Vignettes are developed based on real cases, and the purpose of the study is to obtain data that can identify variations in procedures and decisions. The method is suitable for identifying deviations and differences in interpretation. The investigation does not determine whether a decision is correct or the most appropriate one. The aim of the vignette study is to examine whether the same or similar decisions are made based on the same information. To some extent, there will be acceptance of individual officers' interpretations in discretionary procedures. Any deviation found in a vignette study must therefore represent significant variation, that is, more variation than is considered acceptable. In this study the case vignettes were designed to investigate whether the NFSA made uniform decisions in areas such as animal welfare, drinking water and food safety.


To construct good cases we first observed inspections, and the Authority's head office sent samples of typical cases to the OAG. We collaborated with experts and researchers, and to ensure that the vignettes were appropriate and contained sufficient information, the NFSA's head office was involved in the preparation and quality control before the vignettes were sent out. The investigation comprised nine vignettes. Each vignette case (at district level) was sent to approximately 20 offices. All eight regional offices received the same vignette.


Results: The investigation showed that there was significant variation between the district offices. One example illustrates the wide gap in the responses. On the basis of the same facts, seven offices decided on an immediate temporary closure of the restaurant, while the other 13 offices made less severe decisions. Conclusion: By using the vignette approach, the OAG's investigation has identified severe differences in the way the NFSA makes its decisions in fields crucial to public health, animal welfare and the food industry. As exactly the same facts were described, local variations could be ruled out.


This method proved useful in a decentralized and complex organization, and has also provided an objective tool that will be useful for the NFSA's own efforts to harmonize its future decisions. Keywords: Vignette analysis; Deviations; Equal interpretation; Collaboration with experts;
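A minimal sketch of how responses to a single vignette can be tallied across offices to quantify variation in interpretation; only the 7/13 division follows the abstract, while the names of the less severe decision categories and their split are invented for illustration.

```python
# The 7/13 split follows the abstract; the names of the less severe decision
# categories and their 9/4 split are invented for illustration.
from collections import Counter

def modal_agreement(decisions):
    """Share of offices taking the most common decision for a vignette."""
    counts = Counter(decisions)
    return counts.most_common(1)[0][1] / len(decisions)

restaurant_vignette = (["immediate temporary closure"] * 7 +
                       ["order to rectify with deadline"] * 9 +
                       ["written warning"] * 4)

print(Counter(restaurant_vignette))
print(f"agreement with modal decision: {modal_agreement(restaurant_vignette):.0%}")
```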

O 251

Measuring Efficiency in Welfare Services: Methodological Approach and Implications for the Added Value of Performance Auditing
G. Teigland 1, E. Servoll 2, M. Grydeland 2
1 2

1 Riksrevisjonen, F1.3, Oslo, Norway; 2 Riksrevisjonen, F1.5, Oslo, Norway

Efficiency in the production of public sector services is of great importance in order to reduce the operational cost of the welfare system and to lend legitimacy to the welfare state. However, assessing public sector efficiency meets some specific obstacles. This paper presents an approach for assessing efficiency by measuring labour productivity. The measured productivity levels are in turn seen in relation to levels of goal achievement on various critical performance indicators. The paper draws upon a performance audit by the Office of the Auditor General of Norway, submitted to the Norwegian Parliament in 2012. The audit assessed the efficiency of the Norwegian Labour and Welfare services, which administer benefits and grants worth 35 billion euros annually. The paper presents our methodological approach for labour productivity assessment, focusing on a workload analysis. The paper particularly addresses the methodology used to reduce the large number of complex work practices to standardized and comparable units. Furthermore, we show how the approach is suitable for benchmarking and assessing labour productivity across offices, thereby showing the potential for increased efficiency in the services. The ultimate goal of the welfare services is to provide high-quality services to their users. Therefore, when assessing labour productivity, it is also important to consider parameters that are important to the users, such as processing time, quality of decisions, etc. The paper also outlines the importance of a complementary qualitative case study approach, trying to pinpoint and explain the causes of variations in labour productivity. Finally, the paper addresses the concept of added value, as one of the aims of performance auditing is to promote good government. The OAG Norway is committed to promoting a vital relationship between auditor and auditee. We discuss how dialogue with the auditee throughout the auditing process, and responsiveness to auditee feedback on methodology and results, may enhance the use and the learning potential of the individual audit. Keywords: Efficiency and productivity; Added value; Welfare services; Goal achievement; Performance audit;
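A simplified sketch of the workload-based productivity comparison outlined above: each case type receives a standard workload weight so that offices with different case mixes become comparable. The case types, weights and volumes are hypothetical, not figures from the audit.

```python
# Case types, standard workload weights and volumes are hypothetical. Each
# office's case mix is converted into standardised workload units so that
# productivity (units per staff hour) is comparable across offices.
STANDARD_HOURS = {"sickness benefit": 1.5, "disability grant": 6.0, "parental benefit": 1.0}

offices = {
    "Office A": {"cases": {"sickness benefit": 4000, "disability grant": 500, "parental benefit": 2000},
                 "staff_hours": 14_000},
    "Office B": {"cases": {"sickness benefit": 2500, "disability grant": 900, "parental benefit": 1500},
                 "staff_hours": 15_500},
}

def labour_productivity(office):
    standardised_output = sum(STANDARD_HOURS[t] * n for t, n in office["cases"].items())
    return standardised_output / office["staff_hours"]   # standard hours produced per hour worked

for name, data in offices.items():
    print(f"{name}: productivity index {labour_productivity(data):.2f}")
```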


S1-25 Strand 1

Panel

Using theories of change and evaluation to strengthen networks: the case of evaluation associations
O 232

Thursday, 4 October, 2012

15:45 – 17:15

Using theories of change and evaluation to strengthen networks: the case of evaluation associations
Z. Ofir 1, M. Tarsilla 2, J. Rugh 3, N. Wally 4, R. Santos 5
1 2 3

1 International Evaluation Advisor, Gland, Switzerland; 2 Western Michigan University; 3 Independent Consultant; 4 AfrEA President, Cairo, Egypt; 5 WorkLand M&E Institute, Malaysia

Associations are important vehicles for cultivating a sense of community, opportunities for information sharing and the development of expertise among evaluation stakeholders. But in a world where power is shifting, competition for resources is increasing, traditional value systems are under threat, and new technologies are pervasive, evaluation associations' roles may need to shift to provide more focus on innovation, concerted thought leadership and muscle at national, regional and global levels. It is therefore increasingly important that evaluation associations interrogate and use their strengths, weaknesses, comparative advantage and niche to operate more effectively for the benefit of the profession. But discussions around the associations are seldom evidence-based, analytical and systematic. This panel will therefore attempt to bring more understanding and rigour to the design and management of evaluation associations by synthesising current knowledge on the following:
useful network typologies;
network theories of change;
lessons from evaluation association and network case studies on factors that determine success and failure;
implications for the M&E of evaluation associations.
Zenda Ofir will highlight the key differences between different types of networks, the main lessons learned to date about their design, and the implications for the development of an archetype theory of change for evaluation associations that can inform an M&E framework. Nermine Wally will discuss the application of these concepts, using the example of the African Evaluation Association, with reference to the context and structural factors that influence the development of evaluation networks in the region. Michele Tarsilla will discuss the disconnect between the supply and demand side of evaluation and how this affects the theories of change of associations, using South Africa, Niger and the DRC as case examples. Jim Rugh will serve as discussant for the session. Zenda Ofir is an international evaluator working in Africa and Asia. A former AfrEA President, IOCE Vice-President, and AEA Board and NONIE Steering Committee member, she conducts evaluations, facilitates the development of useful M&E systems, and provides evaluation advice to international organisations such as GAVI, the CGIAR, several UN agencies and the Rockefeller Foundation. Michele Tarsilla is Co-Chair of the International and Cross-Cultural Evaluation Topical Interest Group of AEA, and Associate Editor of the Journal of Multidisciplinary Evaluation (JMDE). He has conducted evaluations for World Bank, FAO, WFP, UNAIDS and USAID projects in sub-Saharan Africa and Latin America. A former Fulbright Scholar, Michele is currently completing his Doctorate in Interdisciplinary Evaluation with a thesis on Evaluation Capacity Development. Nermine Wally, an Egyptian national, is the AfrEA President, past Secretary of IOCE and a Senior Governance Specialist in the Egyptian Cabinet, working on strategies to respond to Egypt's developmental needs. As a socio-economic researcher she works with NGOs, youth, women and rural households in Egypt and Africa. She is currently pursuing graduate studies in Paris. Jim Rugh was head of Design, Monitoring and Evaluation for CARE International and a former AEA representative to IOCE. He co-authored the popular and practical RealWorld Evaluation book and has led numerous workshops on that topic for many organizations and networks in many countries.
In recognition of his contributions to the evaluation profession he was awarded the Alva and Gunnar Myrdal Practice Award by AEA in 2010. Keywords: Networks; Network evaluation; Evaluation associations; Theory of change;


S4-23 Strand 4

Paper session

Evaluation of Health Systems and Interventions II


O 233

Responsive evaluation of an empowerment program in residential care settings: who benefits?


T. Abma 1, L. Dauwerse 1, W. van der Borg 1, P. Verdonk 1
1

VU University Medical Center, Dept of Medical Humanities, Amsterdam, Netherlands

Thursday, 4 October, 2012

15:45 – 17:15

Compassion and work satisfaction among caregivers are often thought to be prerequisites for good care in residential settings. Training and development of caregivers and facilitating leadership are mentioned as characteristics of best practice in elderly care (www.myhomelife.org.uk). The relation between quality of work and quality of care has, however, not yet been systematically studied. What we do know is that work satisfaction is often under pressure due to the large case load. Care workers are part of a Tayloristic system. Many workers are female, and a growing number are older or have a different ethnic background from the residents. Although the majority work part time, the time to recover is often too short, and many run the risk of being overburdened and going on sickness leave. In combination with low salaries and the negative image of the sector, many feel unrecognized and disrespected. To counter this problem a program was developed and implemented in four residential settings. This program was entitled From armour to summer dress and aimed to increase productivity, work satisfaction and client satisfaction. A quantitative evaluation study could not measure improvements. This was not in line with the experiences within the institutions, where improvement was perceived at face value. Therefore a responsive evaluation has been started in three of the participating institutions. This project is carried out by a team composed of researchers with a background in (care) ethics (Dauwerse, Abma) and in organizational and work psychology (Verdonk, Van der Borg). Initial results from the evaluation show that for workers in residential settings relations matter; however, the quality of these relations is under pressure. The original goals of productivity, satisfaction and quality of care gain a different meaning in the context of practice. Caregivers emphasize the importance of the relational quality of interactions between caregivers and the organization, among caregivers, and between caregivers and residents. Many, however, experience feelings of disconnectedness and powerlessness. Although workers experience the program as an attempt to restore broken relations between caregivers, clients and the social context of clients, many are ambivalent about its results given the working conditions (bureaucracy; lack of staff; lack of qualified staff). Also, participants have the paradoxical feeling that the program was implemented in a way (e.g. top-down, with deadlines) contradictory to its intentions. Although our evaluation study aims to improve the empowerment program with the help of findings from various stakeholder perspectives, it is a question how and whether we can succeed in this given the current political situation (e.g. cost cutting). Our evaluation may even have the reverse effect: downplaying the initiatives and efforts going on in practice.

O 234

Evaluation as Soft and Hard Governance: Implications of two Swedish evaluation systems used in elderly care
A. Hanberger 1, L. Lindgren 2
1 2

1 Umeå University, Applied Educational Science, Umeå, Sweden; 2 University of Gothenburg, School of Public Administration, Gothenburg, Sweden

There is a lack of knowledge as regards how prevailing evaluation systems work and interplay, and what significance they have in different welfare practices. The aim of this paper is to explore the effects and consequences of two evaluation systems commonly used in Swedish elderly care governance. The first evaluation system, Open Comparison and Assessment, can be described as a case of soft governance as it is based on a ranking and benchmarking logic. A municipality is assumed to consider how it scores on a number of performance indicators, and mainly to take action if it scores lower than the average municipality. Those with average scores are expected to improve their position and top-scoring municipalities to maintain their position. The main mechanism underpinning this evaluation system is blame and shame. Municipalities and service producers are expected to deliberate on their own practice and take action to improve elderly care and service. The second evaluation system is the inspection system used by the Swedish National Board of Health and Welfare. This system can be described as a case of hard governance as it is set up to strengthen state control of elderly service quality and to attain compliance with statutes. The mechanism underpinning this system is the threat of sanctions. Municipalities and elderly service producers are held to account for providing elderly services that meet a minimum quality standard and for complying with statutes. The inspectors control public and private service providers through pre-announced and unannounced inspections and desk-top reviews. Service users, the elderly and relatives, can also make complaints to initiate desk-top reviews or inspections. The methodology used in this paper rests on a conceptual framework reflecting the interplay between evaluation and governance, and the possible functions of evaluation in policy and governance. Program theory methodology is used to unpack the assumptions underlying the two evaluation systems. The paper is developed within a research project that explores the effects and consequences of the two evaluation systems for elderly care practice. At this stage of the research process the focus is on unfolding the program theories of the two systems and on exploring the systems' implications for practice. The paper provides preliminary results of our research, mainly on the assumptions of the two evaluation systems and on evaluation governance. It also discusses preliminary findings on the systems' effects and consequences for elderly care policy and practice. Keywords: Evaluation governance; Evaluation system; Methodology; Elderly sector; Functions of evaluation;


S4-19 Strand 4

Paper session

Real time evaluation for decision-making


O 236

The monitoring and evaluation system of Ghana with a focus on the link between planning and budgeting
C. Amoatey 1
1

Ghana Institute of Management and Public Administration (GIMPA), GIMPA Graduate School of Business, Accra, Ghana

Thursday, 4 October, 2012

15:45 – 17:15

Policy makers are increasingly looking for evidence to support decisions and to evaluate the impact of resources utilised. Monitoring the performance of public programs and institutions must help inform the budgetary process and the allocation of public resources. In particular, governments should seek to use the results obtained through their M&E systems to improve resource allocation and national development planning. This paper presents a case study on Ghana's diagnostic processes of budgeting at the national level. It discusses Ghana's attempt to link national policy to development planning, the national M&E framework and the budgeting system with the necessary feedback mechanisms, and how this approach has contributed to its attainment of middle-income status. It identifies current challenges with the planning-budgeting alignment process and proposes strategies for addressing these constraints. The paper concludes that aligning planning and budgeting cannot be effective without the necessary institutional, operational and technical capacity development at all levels of government. Keywords: Monitoring; Evaluation; Planning; Budgeting; Alignment;

O 237

Dealing with complexity through Planning, Monitoring and Evaluation: Results of collaborative action research
J. Van Ongevalle 1, A. Maarse 2, C. Temmink 3, E. Boutylkova 3, H. Huyse 4
1 2

1 HIVA (Research Institute of Labour and Society), Katholieke Universiteit Leuven, 3000 Leuven, Belgium; 2 Independent Consultant, Kampala, Uganda; 3 PSO Capacity Building in Developing Countries, The Hague, Netherlands; 4 Research Institute of Work and Society, Catholic University Leuven, Leuven, Belgium

This paper reports on the results of a collaborative action research process (2010–2012) in which 10 development organisations (nine Dutch and one Belgian), together with their Southern partners, explore different Planning, Monitoring and Evaluation (PME) approaches with the aim of dealing more effectively with complex processes of social change. Some of the PME approaches that were piloted include outcome mapping, most significant change, SenseMaker and client satisfaction instruments. A major challenge that organisations were trying to address during this action research pertained to the demonstration of observable results in complex contexts where such results are not always easy to measure or to quantify (e.g. deep cultural change in a gender mainstreaming programme) and where the links between cause and effect cannot always be predicted in advance (e.g. the results of supporting geographically dispersed informal networks of women's groups working on violence against women). We first outline the methodology of the collaborative action research that was used by the participating organisations to make their PME approach more complexity oriented. We also explain the rationale for the organisations to participate in the action research. Drawing on recent literature, we elaborate on the main challenges for PME when dealing with complex processes of social change. At the same time, we highlight how some of the PME approaches that were piloted during the action research helped organisations to address these challenges. This is done with the help of an analytic framework to assess the effectiveness of a PME approach in dealing with complex social change. This framework is built around the following four dimensions: how does the PME approach contribute to 1) strengthening the relationships, roles and expectations of the actors involved in the intervention; 2) learning about the progress towards the development objectives (of the programme, partner organisations, partner networks, Northern NGOs); 3) satisfying downward, horizontal and upward accountability needs; and 4) strengthening the internal adaptive capacity of the programme, partner organisations, partner networks and/or Northern NGOs? Throughout the paper we provide illustrative extracts from the various action research cases. Finally, we explain how a balanced PME approach is more than the PME tools and also involves an agenda, underlying principles and a specific way of implementation. We also describe the main lessons and practical recommendations that can help organisations to develop a more complexity-oriented PME practice. Bios of the authors: Jan Van Ongevalle and Huib Huyse are research managers at the Research Institute of Work and Society at the Catholic University of Leuven in Belgium. Anneke Maarse is an independent consultant in planning, monitoring and evaluation and organisational development. Cristien Temmink and Eugenia Boutylkova are programme officers at the Dutch organisation PSO. Keywords: Planning; Monitoring; Evaluation; Complexity; Learning;


O 238

From nice-to-know to need-to-know: reducing the indicator set for Budget accounting
A. Gaaff 1, J. Butter 2
1


1 LEI Wageningen UR, Regional Economy and Land Use, The Hague, Netherlands; 2 Ministry of Economic Affairs, Agriculture and Innovation, The Hague, Netherlands

The former Dutch Ministry of Agriculture, Nature and Food Quality reduced the set of indicators used in the Budget by 50%. At the same time, the emphasis shifted from input indicators and general key figures to result and outcome indicators, which contributed to accountability to Parliament on central policy issues. The background to this operation was the observation that the Ministry collects a great deal of information without a clear understanding of its appropriateness or the right quantity. The smaller and more outcome-oriented set provided a framework to assess the necessity and usefulness of the policy information system.


The Policy Information Program initiated and performed the task by developing a common language, a practical guideline for policy advisors on formulating indicators and a set of rules for need-to-know information. This approach helped to sharpen policy objectives. It also generated a process to explore the boundaries between need-to-know and nice-to-know policy information. This paper describes the process and its main results, as well as conclusions and recommendations for similar situations. The process focused, on the one hand, on creating internal support by using the common language of objectives trees and regular workshops and, on the other hand, on creating external support by informing a parliamentary committee and the Chamber of Audit. The main conclusion is that the number of accounting indicators can be reduced, provided that the process is backed by high-level management and trustworthy stepping stones can be presented between need-to-know and nice-to-know. Keywords: Indicators; Accounting; Policy information; Government; Budget;


S2-24 Strand 2

Paper session

New methods for impact evaluation


S2-24
O 239

Counterfactual impact evaluation: where it works and where it doesn't


D. Mouqué 1
1

European Commission Directorate General Regional Policy, Evaluation, Brussels, Belgium

Thursday, 4 October, 2012

15:45 – 17:15

Recent years have seen an increasing use of rigorous evaluation techniques, including comparison and control groups (the so-called counterfactual). As with any innovation, this has sometimes created an ideological debate between proponents and opponents. The current article draws on DG Regional Policy's experience to argue that counterfactuals are an evaluation tool like any other: there are pragmatic considerations which determine the situations in which they can be applied and those in which they cannot. In addition, there are things a counterfactual can reasonably be expected to tell us and things it cannot, without recourse to other tools. The article finishes with suggestions for future steps in the use of this technique. Keywords: Counterfactual; Impact evaluation; Cohesion Policy;

O 240

An Ex-Post Financial Appraisal of a Public-Private Partnership Project Using Monte Carlo Simulation
M. Uzunkaya 1
1

Middle East Technical University, Department of Business Administration, Ankara, Turkey

Public-Private Partnerships (PPPs) have become a popular alternative method in developing countries to finance and operate infrastructure projects in the face of ever-increasing demand for better infrastructure and the decreasing capacity of public sector budgets. PPPs can be used to finance infrastructure projects without much pressure on public budgets, provided they are structured with utmost care and analyzed thoroughly before and after implementation. PPPs involve a multiplicity of risks, the materialization of which can have detrimental effects not only on the realization of project benefits, but also on public budgets due to possible contingent liabilities. Because PPPs involve long-term contractual agreements, such risks should be monitored throughout the life of projects, not only during the construction stage. In this context, ex-post analyses offer as much valuable information as ex-ante appraisals and provide important lessons to be utilized for future projects. Utilizing realized data obtained from the initial years of operation, ex-post analyses present a comparison base of before and after situations, making it possible to analyze more precisely the future prospects and risks of projects. For this purpose, Monte Carlo simulation techniques are suitable tools for assessing the effects of risks on project outcomes in PPP projects. The current study develops a financial appraisal model to make an ex-post evaluation of a PPP project and uses Monte Carlo simulation techniques for risk analysis. With slight modifications, the model and technique can be utilized for future PPP projects in assessing their decision parameters and determining an optimum concession period. Keywords: Ex-Post Project Appraisal; Financial Appraisal; Public-Private Partnerships; Monte Carlo Simulation; Private Finance in Infrastructure;
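To make the simulation step concrete, here is a minimal Python sketch of a Monte Carlo risk analysis of a concession's net present value (NPV), in the spirit of the approach described above. It is not the author's model: all cash-flow figures, distributions, the discount rate and the concession length are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_npv(n_draws=10_000, concession_years=25, capex=100.0, discount_rate=0.08):
    """Draw risky revenue/cost paths and return the distribution of project NPVs.

    All parameter values are hypothetical placeholders, not figures from the study.
    """
    years = np.arange(1, concession_years + 1)
    npvs = np.empty(n_draws)
    for i in range(n_draws):
        demand_growth = rng.normal(0.03, 0.02)     # uncertain annual demand growth
        cost_inflation = rng.normal(0.025, 0.01)   # uncertain operating-cost inflation
        revenue = 12.0 * (1 + demand_growth) ** years
        opex = 4.0 * (1 + cost_inflation) ** years
        cash_flows = revenue - opex
        npvs[i] = -capex + np.sum(cash_flows / (1 + discount_rate) ** years)
    return npvs

npvs = simulate_npv()
print(f"Mean NPV: {npvs.mean():.1f}")
print(f"Probability of a negative NPV: {(npvs < 0).mean():.1%}")
```

Re-running such a simulation with realized data from the initial years of operation, as the abstract suggests, would narrow the input distributions and sharpen the risk estimates; varying the concession length would support the search for an optimum concession period.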

O 241

Preconditions for measuring impacts of development programs – how can ex-ante evaluations be useful in the planning stage?
S. Silvestrini 1, S. Krapp 2
1 CEval Consult GmbH
2 M&E Unit, GIZ

Over the past years the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) has strengthened the impact orientation of its measures. Main procedures and instruments (framework of orders, preparation and implementation of programs, reporting), as well as monitoring and evaluation, have been adapted to impact orientation. Yet development cooperation programs often take place under highly volatile framework conditions. Formerly effective cause-and-effect hypotheses lose their validity during the implementation process due to social, political or economic changes, forcing program planners to continuously adapt their strategies to achieve the initially intended goals. The experience from final and ex-post evaluations shows clearly that in many cases results chains and indicators developed at the start of a program are of limited use or have to be revised in order to assess its outcomes and impact objectively. While linear planning and implementation approaches are based on more or less static assumptions about the organizational and systemic program environment, implementation reality provides a different picture. Accordingly, impact evaluations require methods and instruments not only to identify and clearly attribute the intended and unintended effects of an intervention but also to capture the complex interdependencies between all factors that influence its implementation, outcome and impact. By applying ex-ante evaluations GIZ wants to address these challenges, improve the quality, relevance and comprehensiveness of program design, and assure the methodological prerequisites for assessing impacts later on. The evaluation unit of GIZ, on behalf of the Federal Ministry for Economic Cooperation and Development (BMZ), has applied an ex-ante evaluation of the new program "Skills Development for Climate and Environmental Business – Green Jobs" in South Africa as a new instrument within its evaluation system. The objectives of this evaluation, which has been carried out by the Center for Evaluation (CEval) in close cooperation with the appraisal mission, were: 1) The design of the new program was reviewed based on the analysis of needs and other information acquired during the appraisal mission. The goal was to make the results chain transparent and verifiable for all involved stakeholders and to develop indicators to measure results. This also took into account the validity and reliability of the assumptions about the framework conditions.

2) The impact assessment was carried out to assess the effectiveness of the program and the sustainability of its impacts, based on the results of the analysis of needs and the conceptual analysis, taking into account further background information from secondary and empirical studies on the framework conditions in the areas impacted by the programme. 3) As soon as the programme concept was established, the current status of the areas impacted by the programme (baseline) was documented. A distinction was made between the individual stakeholder groups (program staff, program executing agencies, partners, target groups, etc.) and the different results levels (individual, organisational, systemic). The results of the baseline study served as a reference for program monitoring and future measurement of results in the course of evaluations. The inclusion of comparison groups will enable the unambiguous attribution of impacts in future evaluations, utilising quasi-experimental study designs. 4) A proposal for future monitoring and evaluation of the achieved program results was finally developed on the basis of the acquired information. In the course of the session the following questions will be raised and can be discussed with the participants: What are the prerequisites for measuring impact, and what is the role of ex-ante evaluations? What is the added value of an ex-ante evaluation in comparison to a regular program appraisal? Does the evaluation contribute to better program planning and steering? How can the ex-ante evaluation prepare the ground for measuring impacts? What role does the developed impact-oriented monitoring system play?


Presenters' Bios: Dr. Stefan Silvestrini is the CEO of CEval Consult GmbH, which he founded together with Prof. Stockmann in 2011. He has been working for 12 years in the field of evaluation research, primarily in the context of development cooperation, vocational education, the labor market and health. Mr. Silvestrini has collaborated on a number of publications and research papers on evaluation methodology and has a strong theoretical and methodological background, particularly on quasi-experimental evaluation designs, qualitative and quantitative data collection and analysis, and technology assessment. As a consultant he is widely experienced in developing and implementing evaluations and monitoring systems. He has also conducted a number of evaluation trainings and coaching measures, among others for the World Bank, the Austrian Development Cooperation and GIZ. Dr. Stefanie Krapp is a sociologist. She has worked as an Assistant Researcher at the Department of Sociology at the University of Koblenz-Landau; as a freelance consultant for German development projects, mainly in Egypt and South East Asia, developing and implementing M&E systems and carrying out impact evaluations; and as an Assistant Researcher at the Center for Evaluation at Saarland University, focused on the evaluation of projects in the fields of education, vocational education and international cooperation and on developing and conducting trainings in evaluation; there she also received her PhD in Sociology. For one and a half years she advised the German Development Service on labour market and vocational education research in Laos (2006–07); after that she was an Integrated Expert at the University of Costa Rica in M&E for CIM-GTZ, a German development organization (2008–2010). Since April 2010 she has been a Senior Evaluation Officer at GIZ headquarters in Germany. Keywords: Ex-ante evaluation; M&E; Impact assessment; Green jobs; South Africa;


S3-06 Strand 3

Paper session

Evaluation capacity building and regional development


O 243

Do Structural Funds of the European Union contribute to the development of a durable evaluation culture in Polish regions?
Thursday, 4 October, 2012
15:45 – 17:15
A. Januszkiewicz 1
1

Technical University of Lodz, Department of European Integration and International Marketing, Lodz, Poland

Background: The results of international research suggest that countries which introduced evaluation as a result of external requirements developed a less mature and less durable evaluation culture (e.g. the research of Furubo, Rist and Sandahl). External pressure can nevertheless be a good starting point for evaluation capacity development. In Poland, as in many other new members of the EU, the requirement to evaluate the Structural Funds (SFs) gave the first stimulus to introduce evaluation in regional administration. It had a considerable impact on the organisation, functions, scope and principles of evaluation of the regional operational programmes (ROPs). The interesting question is which factors may contribute to the creation of a sustainable evaluation culture in regional public administration. The issue of evaluation capacity building influenced by the SFs is of relevance for many European regions. Objectives: The main objective of this paper is to analyse and indicate factors influencing the development of evaluation capacity in Polish regions. It seeks to answer the following questions: What is the impact of the system of evaluation of ROPs on the development of evaluation capacity in Polish regions? What is their evaluation capacity after five years of experience? What are the factors facilitating or hindering the development of a durable evaluation culture? Methodology: The research covered all 16 Polish regions, including detailed case studies of 6 regions. It focused on the demand side of evaluation capacity and was based on an analytical model which assumed three channels of evaluation capacity building: the organisation of the evaluation system, evaluation practice, and intentional actions to develop evaluation capacity. Quantitative and qualitative methods were used: a questionnaire and interviews, mainly with regional officials responsible for or involved in the evaluation of ROPs. Results: The results of the research show that the major impact of the SFs on evaluation capacity building is through the creation of a considerable demand for evaluation and by securing adequate resources to conduct it. The EU evaluation model influenced the creation of an evaluation system in which evaluation is situated in the executive branch of regional self-government, based mainly on external evaluation and focused on programme improvement. There are two major weaknesses of this system. First, evaluation is limited mostly to ROPs and is perceived as a part of SF management. This may undermine the development of an evaluation culture in regional administration in the future (after the SFs are finished). The second weakness concerns the application of a uniform model of the evaluation system for all regions. It limits the possibility of better adjusting the organisation of evaluation to regional conditions, characteristics and needs, which is a precondition for the development of evaluation capacity in any organisation. The research was financed by the Polish Ministry of Science and Higher Education. Keywords: Structural Funds; Evaluation capacity; Regions;

O 244

State of evaluation capacity development in Ukraine: Demand, Supply, Institutionalization


I. Kravchuk 1, A. Kalyta 2, I. Ozymok 3, K. Stoliarenko 4, L. Palyvoda 5, O. Schetinina 6, V. Tarnay 7, A. Goroshko 8, N. Khodko 9
1 National Academy of Public Administration, Office of the President, Post-doctoral Scholar, Kyiv, Ukraine
2 Science and Technology Centre in Ukraine, Program Performance Officer, Kyiv, Ukraine
3 Permanent Office in Ukraine, ADETEF (public experts in economics and finance), Deputy Director, Kyiv, Ukraine
4 International Organization of Migration, M&E specialist, Kyiv, Ukraine
5 CCC Creative Center, President, Kyiv, Ukraine
6 Foundation for Development of Ukraine, Director of Analysis, Planning and Evaluation Department, Kyiv, Ukraine
7 National Academy of Public Administration, Office of the President, PhD applicant, Kyiv, Ukraine
8 Kyiv International Institute of Sociology
9 Statistical Analyses, Building Capacity in Evidence-Based Economic Development Planning in Ukrainian Oblasts and Municipalities, EBED

Authors: Anna Kalyta, Ph.D, Program Performance Officer, Science and Technology Center in Ukraine; Iryna Kravchuk, Ph.D in public administration, Post-doctoral Scholar, National Academy of Public Administration, Office of the President; Iryna Ozymok, Deputy Director, Permanent Office in Ukraine, ADETEF, public experts in economics and finance; Kateryna Stoliarenko, M&E specialist, International Organization of Migration; Lyubov Palyvoda, PhD (Rutgers University, USA), President, CCC Creative Center, Independent Consultant;

Olga Schetinina, Director of Analysis, Planning and Evaluation Department, Foundation for Development of Ukraine; Volodymyr Tarnay, Ph.D applicant, National Academy of Public Administration, Office of the President. At present, evaluation capacity in Ukraine is not well developed. Evaluations and assessments are conducted infrequently, and the few carried out are mostly the result of demand by donor organizations. Thus, there is a need to develop an evaluation culture in Ukraine. But first, it is important to study the current state of evaluation capacity development. This paper presents the results of a comprehensive baseline study conducted by the Ukrainian Evaluation Society. The purpose of the study was to identify the state of development of evaluation capacity in Ukraine. The study focused on understanding the existing demand and supply for evaluation, as well as on identifying whether, and how, evaluation is institutionalized in government, international donor organizations and civil society. The study provides information to be used by different sectors in the process of defining strategies and specific initiatives aimed at promoting and institutionalizing an evaluation culture in Ukraine. We expect that this paper will be useful to decision-makers, evaluators and academics who are interested and involved in the development of evaluation capacity in Ukraine as well as in other countries.


O 245

Patterns and determinants of evaluation capacity development: the case of two EU regions
M. Mura 1
1

University of Bristol, School of Sociology Politics and International Studies, Bristol, United Kingdom

This paper examines the demand for evaluation and its institutionalisation in two EU regions: Andalusia and Sardinia. In particular it attempts to explain why these two regions reached different outcomes in terms of evaluation capacity development, despite the various similarities in their culture, society and political structure. Andalusia is currently one of the Spanish regions with the strongest demand for evaluation, while in Sardinia demand is still quite limited. This work attempts to identify and compare the possible determinants that may explain this difference, by examining the context and the key factors and events that have directly and indirectly facilitated evaluation capacity development in Andalusia and hampered it in Sardinia. These factors are briefly outlined below. In Andalusia a strong propensity to innovate emerged at the outset of autonomy in 1982, and resulted in the adoption of multi-year programmes and in the introduction of some monitoring and evaluation arrangements that, although raw, denoted a significant interest in the implementation of policies and in learning about their outcomes. The EU requirements, after Spain joined in 1986, helped to refine this emerging interest in evaluation. This is clearly demonstrated by the converging policy-making styles even in non EU-funded policies, and also by the positive attitude of key actors. The presence of a strong interest group, consisting of academics, policy-makers and public sector executives, made it possible to effectively connect research and policy-making so that each component cooperates constructively with the others. The close connection between the government and society is the last important determinant of Andalusia's evaluation capacity. By contrast, twenty years after the introduction of the EU evaluation requirements, Sardinia, despite an increasing awareness of the importance of evaluation, has not developed any evaluation capacity outside the EU-funded programmes. This is shown by the diverging policy-making styles. A propensity to reform emerged only occasionally and has often been hampered by political and administrative instability even within the same coalition, by a strong individualism of key actors and by the presence of powerful interest groups that still influence the policy-making process. In this context, the EU requirements exerted scarce influence, as they are not always understood and are judged too complicated. The third determinant is the dependency on central government, ranging from the simple establishment of the regional evaluation unit to wider issues such as financial transfers. Moreover, politics, academia and public administration seem to pursue their own priorities and cooperate only occasionally, usually in the case of strong personal relationships. Finally, the connection with society is rather weak, as government and society appear to be scarcely linked. The paper concludes with some reflections upon the future prospects for evaluation capacity development in the two EU regions examined and offers some suggestions for future research in this area. Keywords: Evaluation capacity development; Interest groups; EU requirements; Innovation; Rational policy-making;


S3-07 Strand 3

Paper session

Capacity Development: Learning from experience I


S3-07
O 246

Use of Evaluation for Public Policies and Programs: Supporting Networks for National Evaluation Capacities
R. La Rovere 1, I. Naidoo 1, A. R. Soares 1, J. I. Uitto 1
1

UNDP, Evaluation Office, New York, USA

Thursday, 4 October, 2012

15:45 – 17:15

Use of evaluation for improving public policies and programs is a priority area receiving attention around the world. The ways of overcoming the associated challenges are numerous, providing opportunities for the exchange of experiences and learning from one country to another. To enhance this process, the Evaluation Office of UNDP hosts biennial international conferences on National Evaluation Capacities (NEC). The conference brings together in an open forum a wide range of stakeholders from developing countries, enabling them to reflect upon, articulate and share innovative experiences, promote understanding of international standards in evaluation, and enhance advocacy for evaluation in their countries. The focus of the second NEC conference was Use of Evaluation in Decision Making for Public Policies and Programs. The conference was organized jointly by the Evaluation Office of UNDP and the South African Public Service Commission, with the support of the Governments of Finland and Switzerland, and took place in Johannesburg in September 2011. This paper draws upon the conference, highlighting experiences from countries with different evaluation cultures that have made significant progress on the use of evaluation as a means to foster democratic governance and accountability. The paper also presents evidence provided by some of the country cases presented, including Benin, Malaysia, Mexico, Sri Lanka and Tanzania, on relevance to policy-making. A recommendation identified by the participants pointed to the need to facilitate the networks and networking processes of evaluation practitioners in order to strengthen national evaluation capacities. The paper discusses modalities that the participants recommended be put in place to ensure follow-up and use of the conference materials and learning. A website (http://web.undp.org/evaluation/workshop/nec/2011/index.html) was established by the Evaluation Office of UNDP to serve as a portal for information sharing and as a communication channel for a community of practice. Also, a Twitter account was set up and used during the conference to allow interested parties to follow and contribute to the discussions. Maintaining these tools has proved to be challenging, as has their sustained and full use. One lesson learned is that follow-up actions should be taken in a systematic and timely fashion to ensure the sustainability of the effort. One key issue is translating the key documents into various national languages. Other proposed mechanisms include a post-conference reflection process in each country for various national actors to share their collective reflections, and exchanging experiences regionally. These measures are intended to facilitate South-South cooperation among countries that are strengthening their evaluation-related efforts. Keywords: National Evaluation Capacity; Policy; Programmes; Evaluation Use;

O 247

Metaevaluation – a critical approach towards increasing evaluation capacity


R. Mihalache 1
1

Pluriconsult Ltd., Bucharest, Romania

In a context of quasi-general concern for evaluation use, metaevaluation tends to have a more important role in evaluation practice. A label that covers a quality control and/or a learning process, metaevaluation ultimately contributes to increasing the evaluation capacity of those involved in designing, managing and carrying out the evaluation process. The methodology of metaevaluation was introduced by Michael Scriven and Daniel Stufflebeam in a mature evaluation culture. Metaevaluation has different implications, and practice reveals a variety of ways of doing it. The paper will briefly set the scene by presenting the theoretical aspects and some applications available in the literature, will look at what the rules of the game are in a developing evaluation culture, and at how it is actually done (metaevaluation of what, done by whom, based on what criteria/standards, at what cost and with what return on investment). Keywords: Metaevaluation; Quality control;

O 248

Program evaluation maturity in the public sector


A. Pilkaite 1
1

ISM University of Management and Economics, Vilnius, Lithuania

PhD student paper. Session: Evaluation ethics, capabilities and professionalism. Evaluation maturity reflects organizational capacities to perform evaluation. Evaluation capabilities are discussed by various authors and specified by the European Commission in its evaluation guide. The following capabilities emerge when discussing the quality of evaluation and how an evaluation should be performed: where the evaluation demand sprang from; what the evaluation aims are; what resources for evaluation should be planned; which evaluation criteria should be used to meet the goals; how evaluation is organized at each stage and by whom it is performed; what evaluation skills are required and which are in place; whether the performance of evaluation is put into certain procedures and

evaluation methods set; whether the evaluation results and recommendations are used for further policy development; and whether good evaluation practice and lessons learned are identified and applied for the development of evaluation capacity. The evaluation capacity dimensions mentioned above constitute the categorization of maturity levels of the evaluation maturity model. Maturity is a reflection of an organization's capability to organize its functions. The suggested evaluation maturity model represents levels reflecting certain organizational capabilities for evaluation, which correspond to organizational needs and are mainly cumulative or incremental from 1 to 5. The maturity levels in general might be described as follows: maturity level 1 – initial or ad hoc (unpredictable process that is poorly controlled and reactive); 2 – repeatable (process is characterized but often reactive); 3 – defined or standardized (characterized process for the organization that is proactive); 4 – managed (process measured and controlled); 5 – optimized or continuous development (process improvement focus). In this paper the Evaluation Maturity Model (EMM) is proposed. Evaluation maturity level 1 reflects an organization which doesn't have deep knowledge of evaluation; evaluation itself is carried out on a top-down requirement basis, usually only a budget for evaluation activity is allocated and no dedicated person for that function is appointed. In this case the evaluation is done only for reporting purposes, the application of evaluation criteria is scarce, and regular evaluation methods are not applied. At the first level of the EMM the evaluation is not planned in advance and is usually carried out by the program manager with limited administrative qualifications.


At the other end, EMM maturity level 5 describes an organizational status characterized by attributes such as deep knowledge of evaluation and its methods, application of standardized procedures and forms, trained and qualified human resources undertaking evaluation, and usage of various evaluation criteria at different stages of programs. The demand for evaluation comes from an understanding of its added value; therefore evaluation is a key tool for continuous policy development and improvement of organizational performance, including the evaluation phase itself. As the evaluation is planned in advance, resources, time and budget are foreseen, the risks of evaluation are identified, and all stakeholders of the evaluation are considered. The other maturity levels lie in between within this framework and are revealed in the paper. The added value of the EMM is not only that it specifies which evaluation attributes correspond to which evaluation maturity levels, but also that it suggests an instrument for measuring evaluation maturity in organizations.
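As a purely illustrative aside (not part of the paper), one way such a maturity instrument could be encoded is a mapping from levels to required attributes plus a scoring function that returns the highest level whose attributes are all observed. The attribute names below are hypothetical shorthand for the capabilities discussed above, not the EMM's actual criteria.

```python
# Hypothetical shorthand attributes per EMM level (cumulative from 1 to 5).
EMM_LEVELS = {
    1: {"budget_allocated"},
    2: {"budget_allocated", "dedicated_staff"},
    3: {"budget_allocated", "dedicated_staff", "standard_procedures"},
    4: {"budget_allocated", "dedicated_staff", "standard_procedures",
        "results_measured_and_controlled"},
    5: {"budget_allocated", "dedicated_staff", "standard_procedures",
        "results_measured_and_controlled", "continuous_improvement"},
}

def assess_maturity(observed: set) -> int:
    """Return the highest EMM level whose required attributes are all observed."""
    level = 0
    for lvl in sorted(EMM_LEVELS):
        if EMM_LEVELS[lvl] <= observed:   # cumulative: every required attribute present
            level = lvl
    return level

print(assess_maturity({"budget_allocated", "dedicated_staff"}))  # -> 2
```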


S2-09 Strand 2

Paper session

Evaluation in complex environments I


S2-09
O 252

Evaluation approaches, methods and data in a rapidly changing context – Crisis Response evaluations at the World Bank Group
A. Kumar 1, A. Khadr 1
1

The World Bank Group, Independent Evaluation Group, 20433 Washington D.C., USA

Thursday, 4 October, 2012

17:30 – 19:30

Proposed by: Ali Khadr, Senior Manager, and Anjali Kumar, Lead Economist, Independent Evaluation Group, The World Bank Group. As the 2008–09 global economic crisis has shown, key economic events can be sudden, unexpected and fast-evolving in character. To facilitate a meaningful response, there is a need for evaluative lessons within a time frame that is much more compressed than that underlying conventional ex-post evaluation. Conventional evaluation typically relies on information based on a logical framework that is sequential in character, and awaits data on final outcomes and impacts. With the observed increase in volatility worldwide, the need for evaluation approaches, methods and related data that can respond to the need for quicker assessment of situations as they unfold is likely to increase. This paper illustrates the way in which the World Bank Group's (WBG's) Independent Evaluation Group (IEG) was able to respond to this need for timely feedback in its two-phase evaluation of the WBG response to the global economic crisis, and identifies the associated approaches, methods and data that were used in the evaluation. The paper will also draw upon the literature and other recent examples of real-time evaluation (often undertaken in the context of crises) to position IEG's initiative in this broader context. The paper will trace the course of the evaluative approach and the materials drawn upon. The first-phase study investigated the capacity of the organizational structure to respond to crisis. It also undertook an analysis of the quality-at-entry of select WBG financing operations, assessing in particular the quality of their Results Frameworks. Finally, it included a preliminary review of the correlation of WBG support with the extent of GDP decline in client countries. The second phase of the evaluation extended work in both the latter areas: a finer analysis of patterns of distribution of WBG assistance, and a closer review of the design, and early outcomes, of a large section of the WBG's portfolio of operations. Of particular interest to the theme of this conference, the changing nature of evaluation in a networked society and the implied enhanced information flows, is the extent to which high-frequency data on a range of macroeconomic and financial variables, available much more rapidly today than before, were drawn upon in this study. The paper will also draw attention to those spheres of economic activity that are currently less amenable to such analyses owing to the lack of such high-frequency data (e.g., in areas of labor market analysis such as unemployment and social benefits). Nevertheless, many advanced economies are building their own high-frequency data on a number of variables relevant to labor markets and employment, and these could be used in specific country case studies. More generally, as the theme of the proposed conference is the networked society, new technologies and information flows in the context of evaluation, we believe that the paper is of core relevance. In addition, IEG would benefit from learning from other practitioners about other approaches that can be used to make creative use of data and information for quick-response evaluation. Keywords: Real-Time evaluation; Crisis response; International; Global; High-frequency data;

O 253

Norm, Mistake, or Exemplar? A Complexity Approach to IFPRI-PROGRESA in Mexico


W. Faulkner 1
1

Tulane University, New Orleans, USA

In 1997, during the early stages of implementing their new flagship anti-poverty program Progresa (now Oportunidades), Mexican officials contracted an evaluation team from the International Food Policy Research Institute (IFPRI). The research which came out of the IFPRI-Progresa evaluation project had widespread influence, not only sustaining Progresa through Mexico's momentous political transformation in 2000, but also helping to legitimate Conditional Cash Transfers as anti-poverty tools and to expand the usage of experimental designs in evaluation, both trends still very much alive in the present. This paper examines the generalizability of the IFPRI-Progresa evaluation process using an approach informed by systems thinking. The analysis examines this aspect of the research project from three (hypothetical) perspectives. Each perspective reveals a distinct narrative, casting a different light on the process as documented and revealing that even in seemingly objective, quantitative studies, validity and legitimacy are subjectively constructed and interpreted qualities. Demonstrating the importance of perspective when assessing these qualities reveals the shortcomings of the single-narrative paradigm ("how well did it work?") so strong in both evaluation and policy. The author asserts that systems thinking, through the elaboration of multiple perspectives, can bring multiple stakeholders together and has the potential to move the evaluation community beyond some seemingly intractable methodological debates. Appraising IFPRI-Progresa's sampling procedure with a complexity approach thus underlines the need for the evaluators and policy-makers involved in social policy to negotiate between narratives and consider multiple worldviews in their decision-making. Keywords: Complexity; Conditional Cash Transfers; PROGRESA/Oportunidades; Systems Thinking; Randomized Controlled Trials;


O 254

Development Evaluation in Kenya: Beneficiary Rights and Participation


L. M. Gaithi 1
1

Ministry of State for Planning and National Development and Vision 2030, Monitoring and Evaluation Directorate, Nairobi, Kenya


Evaluations are intended to tell us whether development interventions have been successful. They also tell us which interventions, in the form of policies, programs or projects, have worked in particular environments, and they inform improvements in the design of policies and interventions that make a difference in people's lives. As such, replication or slight variation of such interventions may be used to benefit other areas with similar problems and challenges. Evaluations of development achievements and results provide evidence that enhances decision-making, learning, accountability and impact. This paper seeks to investigate the rights and participation of beneficiaries in development evaluation. The paper will present the case of Kenya, in view of the implementation of a new constitution that emphasizes participation and respect for rights. A review of beneficiaries' general interests in development interventions will be presented as a precursor to their rights and responsibilities in the subsequent evaluation. The paper will present a review of beneficiary rights and participation as they pertain to development evaluation prior to the promulgation of the new constitution. In addition to the translation and actualization of the presented rights and responsibilities, good practices and lessons learnt will be discussed. The envisaged advantages of beneficiary participation in development evaluation, through exercising rights and undertaking responsibilities, will be outlined. Beneficiary participation in development evaluations is highlighted as an important element in partly ensuring the integrity of the evaluation process and the credibility of the evaluation. It is also noted that it is important that, at the end of the evaluation, the main findings and recommendations be communicated to beneficiaries. Finally, the paper will present an assessment of the key challenges that are likely to be encountered in beneficiary participation in development evaluation. Potential ways to address the identified challenges and to strengthen beneficiary participation in development evaluation will also be explored. Keywords: Development evaluation in Kenya;



S3-28 Strand 3

Panel

The Future of Evaluation (and What We Should Do About It)


O 255

The Future of Evaluation (and What We Should Do About It)


J. Gargani 1, S. Donaldson 2, P. Dahler-Larsen 3, C. Segerholm 4, B. Windau 5
1

Thursday, 4 October, 2012

17:30 – 19:30

Gargani + Company, Berkeley, USA
2 Claremont Graduate University, School of Behavioral and Organizational Sciences, Claremont, USA
3 University of Southern Denmark, Department of Political Science and Public Management, Odense, Denmark
4 Mid Sweden University, Department of Education, Härnösand, Sweden
5 Bertelsmann Foundation, Gütersloh, Germany

Evaluation is changing. Powerful new forces, such as social media, impact bonds, mobile devices, virtuous entrepreneurs, mega-philanthropy, big data and social impact investing, are exerting tremendous influence on a field that, by many accounts, is still maturing. Much of this change is being driven, directly and indirectly, by technology that connects people, collects data, and compels us to adapt. This panel session will focus on three questions. First, given the nature of these forces, how do we imagine the field of evaluation will look in 10 years? Second, are these imagined futures desirable? Third, how should the field respond? Panelists will make brief presentations (7–10 minutes) to frame a discussion between panelists and those in attendance. The panelists include: Dr. John Gargani (Gargani + Company, United States, Session Chair), Dr. Stewart Donaldson (Claremont Graduate University, United States), Dr. Peter Dahler-Larsen (University of Southern Denmark, Denmark), Dr. Christina Segerholm (Mid Sweden University, Sweden) and Bettina Windau (Bertelsmann Foundation, Germany). Collectively, they bring with them a rich and varied expertise in evaluation practice, evaluator training, research on evaluation, and evaluation use. At the conclusion of the session, attendees will be asked to vote on the likelihood and desirability of the predictions. What we learn at the EES conference will inform a companion session that will take place later in the month at the American Evaluation Association (AEA) Conference. The culmination of our efforts will be the publication of our predictions and a discussion of what we should do about them. Evaluation has a long history of predicting the future. Several notable efforts include The Next Decade in Evaluation Research (Freeman & Solomon, 1979), which appeared in Evaluation and Program Planning; the 1994 and 2001 special editions (Vol. 15, No. 3; Vol. 22, No. 3) and a 2011 special section (Vol. 32, No. 4) of the American Journal of Evaluation; and the edited volume Evaluating Social Programs and Problems: Visions for the New Millennium (Donaldson & Scriven, 2001). The predictions presented in these and other previous works were made by leaders in the field, reflecting a mix of what they believed was likely and what they hoped would happen. This body of work has contributed to the development of evaluation by stimulating thinking, spurring debate, and motivating action. What we propose is different in four important ways. First, in advance of the session we will ask practicing evaluators to suggest predictions, so the predictions we consider will not be generated solely by leaders in the field. Second, we will include international as well as US voices by holding similar sessions at the EES and AEA conferences. Third, we will ask evaluators to vote on the likelihood that the predictions will come to pass and on how desirable they would be for the field of evaluation. Fourth, we will collect our predictions and place them in a time capsule to be opened in 10 years, at which time we (or perhaps others) will reflect on our imagined and real future. Keywords: Future of Evaluation; Field of Evaluation; Value of Evaluation; Technological Change; Strategic Planning;


S1-16 Strand 1

Paper session

Social networking, network associations and evaluation I


O 256

Does social networking improve local development: to what extent and under what conditions? Empirical evidence from the Canadian context
Thursday, 4 October, 2012
17:30 – 19:30
M. C. Jean 1, M. Lamari 1
1

École nationale d'administration publique, Quebec, Canada

Marie-Claude Jean is a Ph.D candidate and a program evaluator at the Centre de recherche et d'expertise (CREXE), based at the École nationale d'administration publique (ENAP), Université du Québec. Moktar Lamari is a Professor at ENAP (www.enap.ca) and CREXE's director. Social networking is a fundamental factor of local development, economic sustainability, prosperity and innovation. It is also a crucial determinant of social entrepreneurship and local well-being. Assessing and measuring the components of social networking, in order to fully understand the leverage it provides to local development, is a real challenge for evaluators. The program evaluation literature doesn't provide a satisfactory answer to our question: does social networking add value to local development, to what extent and under what conditions? Our paper combines quantitative and qualitative data collected in the Canadian province of Quebec in 2011. We conducted a Web survey (N = 1436) and interviews (N = 25) to obtain information related to networking, its determinants and its impacts. Our contribution has three objectives: i) identifying the determinants of social networking in the context of local development; ii) assessing the impacts of social networking on social entrepreneurship and local development (investments, well-being, collaboration, etc.); iii) elaborating concepts and indicators that can be used to assess the full extent of social networking on local development. We examine the impacts of social networking on the development of diverse types of capital (financial, technological, human, social, natural, etc.). The strongest impacts we found are related to social capital and the improvement of social living conditions. We also measure the impact of social networking on 32 items related to local development. We then proceed to a factor analysis which allows the identification of four main axes of impacts. Axis 1, Living environment, explained 23% of the variance; Axis 2, Collective well-being, explained 23% of the variance; Axis 3, Economic structure and employment, explained 18% of the variance; Axis 4, Formation and education, explained 13% of the variance. Our results suggest that social networking is pluralistic in nature and found in different domains: health, education, social services, economics, environment, culture, etc. This study demonstrates that local communities lacking active involvement in social networking are less inclined to enhance their social entrepreneurship and are unable, without exogenous help, to innovate in their locality. Furthermore, it appears that social networking reduces socio-political transaction costs between local organizations, political institutions and policy decision makers (government, Parliament, lobbyists, etc.). This paper will allow evaluators to become acquainted with diverse concepts and measures related to social networking in a local development context. The knowledge and new evidence developed from this article could be helpful to evaluators involved in local development (in other geographical environments), and to stakeholders and decision makers who are dealing with social networking in a local development context. Keywords: Social networking; Local development; Social entrepreneurship;
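For readers less familiar with this analytical step, the sketch below shows, on simulated data, how variance-explained figures of the kind reported above can be produced in Python. PCA is used here as a simple stand-in for the factor analysis in the study; the respondents, items and loadings are synthetic, not the survey data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in: 1436 "respondents" rating 32 impact items, generated from
# four hypothetical underlying axes plus noise (the real items are in the study).
n_respondents, n_items = 1436, 32
latent = rng.normal(size=(n_respondents, 4))
loadings = rng.normal(scale=0.8, size=(4, n_items))
items = latent @ loadings + rng.normal(scale=1.0, size=(n_respondents, n_items))

pca = PCA(n_components=4)
pca.fit(items)

for axis, share in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"Axis {axis}: {share:.0%} of variance explained")
```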

O 257

Adapting evaluation to tap the power of social media


L. Morra Imas 1
1

International Program for Development Evaluation Training (IPDET), Ottawa ON, Canada

The International Program for Development Evaluation Training (IPDET) is a heavily evaluated training program. Having completed its 12th year, the program has long walked the talk and commissions annual externally conducted evaluations and impact evaluations every 5 years. All are posted on its website for prospective participants and others to view. Annual evaluations have included a pre and post knowledge test and extensive and multiple questionnaires to participants. The program has a Facebook site and a Twitter account, but both are used primarily by IPDET staff to send out messages and are not highly active at this point. Recently, a member of another group on LinkedIn, who was considering attending IPDET, posed the question of whether others would recommend the program. Those of us managing IPDET were newly in the position of observing the spontaneous responses to the query. We were struck by the fact that this is how many potential participants are now choosing to get their information: not from program websites or formal evaluations, even when they exist and are easily accessed. This paper considers the implications of this trend along several dimensions. For example, it looks at the ethical issues: should we respond as the program managers, or have a response made for us? The paper considers technical issues such as who is responding to the query and how this group compares to IPDET's participant profile. It also explores the substantive issue of how our formal training evaluation might adapt to this different way in which potential participants get evaluative information about the program. Keywords: Social media; Evaluation of training; Evaluation through social media; LinkedIn;


O 258

Lessons from applying social media and on-line engagement at the World Bank Group's Independent Evaluation Group
B. Salimova 1, A. McKenzie 1
1

The World Bank, Independent Evaluation Group, Washington D.C., USA


In an interconnected world where information flows faster and from more directions than ever before, it is challenging to assess what impact information and knowledge have, whether they reach the intended audiences, and how to measure the overall impact of shared information. It is even more challenging to stay relevant to stakeholders as the dynamics of online information consumption shift from regularly accessing information in one place to multi-source knowledge-sharing and learning opportunities. The proposed paper will take stock of and analyze the implications of the systematic application of social media and on-line engagement approaches for the process and impact of the evaluative work produced by the World Bank Group's Independent Evaluation Group (IEG). The paper builds on earlier research work and highlights the evolution of, and lessons learned from, applying social media engagement in IEG. The paper also incorporates improvements and expansions in data collection and in understanding online user behavior.


In late 2009 and early 2010, IEG adopted a communication framework that builds on multidimensional relationships with online users, focused on knowledge-sharing practices and participatory approaches. This framework aimed to seek greater use of IEG's findings, recommendations and lessons, as well as to position IEG as a knowledge hub and a source of information on development effectiveness. More recently, the model has expanded to incorporate upstream engagement with the public at large through calls for feedback and knowledge-sharing among online users to contribute to IEG's evaluations. IEG also introduced more systematic sharing of information about its ongoing evaluations, including official missions, evaluation methodologies and key evaluation questions, among stakeholders who may not otherwise have had access to it. This approach also falls in line with the movement toward more openness and transparency to which the international development community is starting to pay much greater attention. To make this framework effective and efficient, IEG also refined its data collection approaches to obtain better metrics and established more specific and detailed online engagement plans with evaluation teams. Keywords: Social media; Online engagement; Participatory evaluation; World bank; Independent Evaluation Group;

O 259

The role of National Evaluation Societies in developing national M&E systems. Evidence from a survey in 20 aid-dependent African countries
N. Holvoet 1, S. Dewachter 1
1

University of Antwerp, Institute of Development Policy and Management, Antwerp, Belgium

The Paris Declaration (PD) and, in its wake, the Accra (and Busan) agendas have given a renewed impetus to monitoring and evaluation (M&E) while simultaneously unfolding an ambitious reform agenda. In short, partner countries are expected to establish M&E arrangements/systems that satisfy accountability and learning needs. Donors are expected to help partner countries in building and strengthening national M&E arrangements and to rely as much as possible on these systems, or at least to harmonise with other donors. While the importance of developing national M&E capacity and use is widely acknowledged, there seems to be little strategic engagement in this area, even among those aid agencies which have it in their mandates. One set of actors that has so far largely been neglected in this context is National Evaluation Societies (NES). This is somewhat surprising, as evaluation societies bring together much of the nationally available M&E expertise and as such can play a crucial role in strengthening nationally owned and localised M&E practice and use. Moreover, evaluation societies are made up of members from different sectors (government, universities, civil society, private sector, etc.) and, precisely because of this mix of different key positions and roles in learning and accountability processes, evaluation societies provide those different actors with a platform to interact, exchange information, views and opinions, and forge networks or alliances. It is this networking among the supply and demand sides of M&E that might trigger increased use and influence of M&E outputs. In the academic literature too, the topic of national evaluation societies and their unique potential for fostering alliances for change across members belonging to different institutional settings has so far remained largely unexplored. The current article is a first step in filling this gap. It draws upon evidence from a recent survey among evaluation societies in the 20 African countries that are involved in PD-related processes. On the basis of the survey findings we provide a mapping of evaluation societies in these aid-dependent countries and we zoom in on their composition. We focus on their own perceived contribution towards strengthening national M&E supply and demand, and more particularly on their involvement in developing M&E capacity and improving evaluation practice, as well as their leverage in triggering the use of M&E for policy and government accountability. Obstacles as well as opportunities for further strengthening and sustainability of national evaluation networks are analysed and discussed. In doing this, we also look at the increased interest in M&E among governments and donors and the degree to which this is effectively translated into support for NES. Keywords: National Evaluation Society; Aid; M&E system; M&E use; M&E capacity development;


S2-29 Strand 2

Paper session

Predicting outcomes
S2-29
O 260

Nonexperimental Comparison Group Methods: Predicting Outcomes from Various Degrees of Program Exposure: Examples from India, Burundi and Canada.
H. Cummings 1

Thursday, 4 October, 2012

17:30 – 19:30

University of Guelph, Environmental Design and Rural Development, Guelph, Canada

Program evaluators are constantly faced with the questions: What would happen without the program? How do you know the observed changes were caused by the program? How will you construct a comparison group or control group? How will you attribute the observed change to the program? The classic experimental design involves the assignment of participants to treatment and non-treatment groups using random assignment. However, most program evaluators do not have the opportunity to randomly assign participants. Ethical concerns may make random assignment impossible, and program and project delivery staff have little incentive to construct control or comparison groups: they focus on recruiting program participants, which may be difficult in itself. A variety of approaches have been taken to developing comparison groups. In some cases random assignment or random selection permits the use of a pure experimental design. In most cases, however, the evaluator constructs the comparison group by selecting a geographical region that does not receive the program, selecting an agency that does not deliver the program, selecting an agency that delivers an alternative program and comparing participants in the two areas, or other methods. One of the interesting alternatives is what is sometimes referred to as a dosage-response model. Using this approach, a separate comparison group (a separate area, a different agency, etc.) does not have to be selected. Instead, evaluators collect extensive data on the participation of beneficiaries in the program. The degree of participation can be referred to as the dosage. Programme theory would suggest that higher levels of participation should result in better outcomes than lower levels of participation, other things being equal. For example, an individual who attends one training session of a health promotion program should gain fewer benefits than one who attends six sessions, and may serve as a comparison group member. There are several advantages to this approach to comparison group design. Tracking the degree of exposure and relating it to outcomes may help us improve the efficiency and effectiveness of our programs; the concept of a marginal rate of return on investment in additional programming comes to mind. Using this approach there is no need to exclude individuals artificially from a program, an identical geographic region does not have to be found, and we do not have to expend additional resources to collect and analyze separate comparison group data. The author has used this approach in a variety of health-related program evaluations internationally and found the results to be informative. The results will be shared and the application of the method will be discussed. See for example: realworldevaluation.org/uploads/Alternative_approaches_to_the_counterfactual_AEA_09.doc; C.G. Victoria et al., Evaluation of large-scale health programs, in Global Health: Diseases, Programs, Systems and Policy, 3rd Edition, Merson, Black and Mills (Editors), 2012. Keywords: Evaluation design; Comparison groups; Surveys; International development;
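As an illustrative companion to the dosage-response idea (not drawn from the author's evaluations), the sketch below regresses a synthetic outcome on the number of sessions attended while controlling for a baseline covariate; the estimated slope is the marginal gain per additional "dose". All variable names and values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic data: sessions attended (the dosage), a baseline score, and an
# outcome that improves by about 2.5 points per extra session in this toy setup.
n = 500
sessions = rng.integers(0, 7, size=n)
baseline = rng.normal(50, 10, size=n)
outcome = 0.8 * baseline + 2.5 * sessions + rng.normal(0, 5, size=n)

X = sm.add_constant(np.column_stack([sessions, baseline]))
model = sm.OLS(outcome, X).fit()

print(model.params)      # estimated marginal effect per additional session
print(model.conf_int())  # uncertainty around the dose-response slope
```

In this set-up the low-exposure participants play the role the abstract describes: they act as the comparison group against which higher doses are judged, without excluding anyone from the program.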

O 261

Performance Story Report: an alternative tool to report on the contribution of a programme to expected outcomes
H. Brozaitis 1
1

Public Policy and Management Institute, Vilnius, Lithuania

Evidence-based policies in a fast-changing socio-economic environment (especially as experienced from 2008 onwards) require timely, informative inputs and indications as to whether the selected interventions are working as expected and delivering the planned outcomes. The pressing need among public managers to understand and report on the performance of a programme internally (to senior management) and externally (to external stakeholders and the public at large) blurs the line between monitoring and evaluation. Despite the limitless quantity, speed and accessibility of information generated by new technologies, a typical programme which aims to support the development of a public policy still struggles to come up with sufficient meaningful evidence, which complicates its performance monitoring and reporting. Our experience is based on performance monitoring of the EU programme for Employment and Social Solidarity PROGRESS 2007–2013, which is a financial instrument supporting the development and coordination of EU policy in five closely related policy areas (Employment, Social Inclusion, Working Conditions, Anti-Discrimination and Gender Equality). Two key challenges in assessing and reporting on the performance of the programme were: The nature of the programme: PROGRESS produces few tangible results as it targets the policy-making process at the EU and Member State levels and aims to induce change through influence on the policy process (both at the EU level and in the Member States). This contrasts with more traditional targets of public programmes, such as achieving change in some specific aspect of the socio-economic situation through the direct provision of services or funding. In practice this required careful operationalisation of the performance measures (that is, what will be measured, and how, when assessing them) as well as extensive use of proxy and lead indicators.


The scope of the programme: PROGRESS has a complex, intertwining structure of objectives and intervention areas as well as a variety of intervention mechanisms. Coupled with the rather limited budget of the programme (compared to the scope of its objectives), this meant that tracking, measuring and reporting on the achievements of the programme required a careful understanding of the overall context (contextualised process tracing). As a result, the performance reporting on the programme was a balanced mix of quantitative information and the approach called performance story reporting (as proposed by John Mayne in "Reporting on outcomes: setting performance expectations and telling performance stories", The Canadian Journal of Program Evaluation, Vol. 19, No. 1, 2004, pp. 31–60). A performance story report is essentially a concise report about how a program contributed to outcomes. It aims to strike a good balance between depth of information and brevity. It is easy for staff and stakeholders to understand and it helps build a credible case about the contribution that a programme has made towards outcomes or targets. It is supported by multiple lines of quantitative and qualitative evidence and describes the causal links that show how the achievements were accomplished. The resulting annual PROGRESS performance monitoring reports received positive reactions from management and external stakeholders.
Keywords: Performance monitoring; External accountability; Stakeholder involvement;

S2-29

Thursday, 4 October, 2012

17:30 – 19:30

O 262

Future-oriented impact assessment as a strategic management approach for public R&D-programmes


A. Pelkonen 1, K. Hyytinen 1, T. Loikkanen 1
1 VTT Technical Research Centre of Finland, Espoo, Finland

Impact assessment related to research and development activities is currently facing important challenges. First, the nature of innovation is changing towards systemic and open models which have not been properly taken into account in current evaluation practices. Second, innovation policy is transforming into an expanding, horizontal and network-based policy field which calls for broader and more diversified approaches in the evaluation of its impacts. Third, current evaluation practices tend to be dominated by backward-looking and legitimising approaches, with weaker connections to forward-looking and learning-oriented perspectives in evaluation. In particular, the increasingly fast pace of societal change and technological development calls for closer linkages between impact assessment and forward-looking activities in the management of public R&D programmes. This paper introduces and develops a systemic and future-oriented evaluation approach that can be applied in the strategic management of public R&D programmes and organisations. The approach is an integrating method designed to meet the challenges of the changing innovation environment and evaluation standards. The future-oriented impact assessment approach integrates different R&D evaluation methods, in particular forward-looking activities (e.g. technology roadmapping) and tools to enhance interaction and common learning, under a single framework in order to provide versatile information to support the steering and decision-making of the R&D instrument in question. The paper is based on broad research and development work in which the presented approach has been theoretically developed and empirically piloted. The approach has been developed in the context of three technology and innovation programmes of Tekes, the Finnish Funding Agency for Technology and Innovation. The paper is expected to benefit R&D evaluation researchers as well as programme managers and R&D evaluation practitioners.
Keywords: Impact assessment; Foresight; Programme management; Innovation policy; Research and development;


S4-31 Strand 4

Panel

Meta-Evaluations – aggregated analysis to enhance learning in development cooperation


O 263

Meta-Evaluations – aggregated analysis to enhance learning in development cooperation


Thursday, 4 October, 2012
17:30 – 19:30
S. Krapp 1, R. Stockmann 2, S. Silvestrini 3, A. Caspari4, M. Rickli 5
1, 2 Head of Department, German Evaluation Institute CEval
3 CEval Consult
4 University of Applied Sciences, Frankfurt
5 Evaluation Department, Swiss Agency for Development and Cooperation

Meta-evaluations are rarely applied in development cooperation. This is surprising, because they are an important tool for cumulating knowledge and generating findings beyond those of individual evaluations, not only regarding the respective sector or region but also concerning evaluation methodologies. These advantages, the value added, but also the problems in designing and implementing meta-evaluations will be demonstrated using the example of a meta-evaluation carried out in the vocational education and training (VET) sector. In 2011 GIZ, on behalf of the Federal Ministry for Economic Cooperation and Development (BMZ), decided to undertake an evaluation synthesis and meta-evaluation of individual evaluations in this sector, implemented by the GIZ predecessor organizations (GTZ, DED and InWEnt). The objectives of this analysis were, on the one hand, a synopsis of the results of the individual evaluations with regard to the DAC criteria and the cross-cutting issues of German development cooperation (i.e. poverty reduction, gender, environmental issues), the capacity development effects of the programs, their compliance with the overall concept of sustainable development, the analysis of the evaluation methods (design, instruments, impact attribution) and the identification of success factors for vocational training programs. On the other hand, the results of this analysis were to be compared with the results from the 1990s and the relevant sector strategies. This comparison is intended to reveal trends and patterns, particularly recurring strengths and weaknesses of the VET projects.
Especially regarding the latter aspect, this meta-evaluation is unique, at least for German technical cooperation: not only were current program evaluations analysed against the above-mentioned criteria and issues, they were also compared with results of evaluations from the 1990s in the light of the current relevant sector strategies and papers. Furthermore, the data basis of the analysis is broad: altogether 12 evaluation reports of former GTZ programs and a further 13 evaluation reports (including two meta-evaluations) from former DED and InWEnt programs have been analysed. And last but not least, the methodology is innovative: based on Grounded Theory (Glaser & Strauss, 1967/1998), a multiple analysis procedure has been applied: development of an analysis framework, text analysis, a three-stage coding process (deductive, axial, selective), and deduction of conclusions and lessons learnt (inductive categorization).
The round table will be structured as follows. In the first part the research design and the implementation of the meta-evaluation will be presented, followed by selected results, including the findings of the methodological analysis of the individual evaluations regarding their design, instruments and impact attribution. Specifically, the Balanced Scorecard developed for key success factors of sustainable development cooperation in the field of VET will be presented. In the second part, two discussants will respond to the presented aspects, taking into consideration their experience with meta-evaluations they carried out recently, so that comparisons will be possible: a meta-evaluation of German Human Capacity Development program evaluations and a meta-evaluation of Swiss Vocational Education and Training programs.
Finally, the audience will have the opportunity to discuss with the presenters and the discussants the methodological difficulties, the necessary preconditions, and the necessity and functions of meta-evaluations with regard to institutional learning and enhancing program evaluations.
Presenters' Bios
Prof. Dr. Reinhard Stockmann has worked for about 30 years in the field of theoretical and methodological evaluation research. He has conducted more than 200 evaluation studies, particularly in the fields of development cooperation, vocational education and environment, and has developed a range of comprehensive evaluation training programs. He has also developed a large number of monitoring and evaluation systems and published about 30 books on evaluation, some of which have been translated into several languages (e.g. English, Spanish, Russian, Chinese). In 2002 he founded the Center for Evaluation at Saarland University, which he has directed since. He is co-founder of the German-speaking evaluation association (DeGEval). Moreover, he is founder and editor of the German-language Journal for Evaluation (ZfEv) and of the series Sozialwissenschaftliche Evaluationsforschung (socio-scientific evaluation research) at the Waxmann publishing house. Besides a number of training programs, he developed the first European master's course in evaluation, the Master of Evaluation, in cooperation with Saarland University and the University of Applied Sciences of the Saarland, introduced in 2004.
Dr. Stefan Silvestrini is the CEO of CEval Consult GmbH, which he founded together with Prof. Stockmann in 2011. He has been working for 12 years in the field of evaluation research, primarily in the context of development cooperation, vocational education, the labour market and health. Mr. Silvestrini has collaborated on a number of publications and research papers on evaluation methodology and has a strong theoretical and methodological background, particularly on quasi-experimental evaluation designs, qualitative and quantitative data collection and analysis, and technology assessment. As a consultant he is widely experienced in developing and implementing evaluations and monitoring systems, and he has conducted a number of evaluation trainings and coaching measures, amongst others for the World Bank, the Austrian Development Cooperation and GIZ.



Chair's Bio
Dr. Stefanie Krapp is a sociologist. She has worked as an assistant researcher at the Department of Sociology at the University of Koblenz-Landau; as a freelance consultant for German development projects, mainly in Egypt and South East Asia, developing and implementing M&E systems and carrying out impact evaluations; and as an assistant researcher at the Center for Evaluation at Saarland University, focused on the evaluation of projects in the fields of education, vocational education and international cooperation and on developing and conducting trainings in evaluation; there she also received her PhD in Sociology. For one and a half years she advised the German Development Service on labour market and vocational education research in Laos (2006–07); after that she was an integrated expert in M&E at the University of Costa Rica for CIM-GTZ, a German development organization (2008–2010). Since April 2010 she has been a Senior Evaluation Officer at GIZ headquarters in Germany.
Keywords: Meta-Evaluation; Capacity development; Vocational Education and Training; M&E; Impact attribution; Policy use; Learning from evaluations;


S4-09 Strand 4

Paper session

Evaluating environmental and social impacts


O 264

Geological disposal of radioactive waste as a megaproject: a survey of potential methodologies for socio-economic evaluation
M. Lehtonen 1
1 University of Sussex, Science and Technology Policy Research (SPRU), Brighton, United Kingdom

Thursday, 4 October, 2012

17:30 – 19:30

Long-term geological disposal of high-level radioactive waste has not yet been implemented anywhere in the world, although a number of countries have advanced plans for such disposal. As part of its efforts to introduce greater reflexivity into its operations, the French national radioactive waste management agency, Andra, is seeking advice on methods and approaches for the socio-economic evaluation of its disposal project. This paper presents the results of the first part of a research project aimed at designing a framework for socio-economic evaluation of radioactive waste disposal in France. Long-term geological disposal of radioactive waste is an extreme example of a megaproject (e.g. Flyvbjerg 2007), characterised by the multiplicity of temporal and spatial scales involved; continuous evolution and dynamism owing to the uniqueness of the project; the complexity of the causal relationships; a high degree of scientific, political and institutional uncertainty; and a great likelihood of normative disagreements among the parties involved (e.g. Altshuler & Luberoff 2003; Flyvbjerg et al. 2003; Priemus & Flyvbjerg 2007). Furthermore, because of the extremely long time scales involved, the governance structures and the institutional framework are certain to undergo fundamental changes during the lifetime of the disposal project. Uncertainties involved in megaprojects are usually perceived as problematic, insofar as they tend to accentuate the risk of chronic overestimation of the benefits and underestimation of the costs and timescales for the realisation of the project (e.g. Flyvbjerg 2007, 12–13). These uncertainties are particularly relevant from the perspective of accountability, whereas the positive uncertainties have received less attention. Such uncertainties might enable iterative reorientation of the project in line with a changing context (e.g. changes in the role of the nuclear industry and in citizen attitudes), technological progress, and the expectations of the parties involved, thereby fostering social learning, reflexivity, reversibility, and revision of dominant modes of thinking and earlier decisions, in the spirit of adaptive governance. A challenge for the evaluation of the disposal project is to combine the objectives of accountability and social learning (see e.g. Lehtonen 2005). This paper presents the first part of the research project, based on a literature survey and stakeholder interviews in France, which sought to: 1) identify the characteristics relevant to socio-economic evaluation and specific to the French disposal project as an example of a megaproject; 2) identify the repertoires (van der Meer 1999) of the various participants concerning socio-economic evaluation of the project; 3) place the repertoires within the broader governance context of the disposal project; and 4) outline the key methodological challenges for socio-economic evaluation of the project. Particular attention is given to the following aspects of the method/approach: its applicability to the evaluation of megaprojects; adherence to the principle of plural and conditional expertise (e.g. Stirling 2010; Söderbaum 2001); multidisciplinarity and the integration of types of knowledge; and social learning. Key challenges concern the meaning of the socio-economic; the temporal dimension (ex ante, ex nunc, and ex post evaluation); the purpose of evaluation; the use and influence of evaluation; and the role of the evaluation process as a source of learning.
Keywords: Megaprojects; Socio-economic evaluation; Accountability; Learning; Radioactive waste disposal;

O 265

Evaluating environmental and social effects in International Finance Corporation projects


J. Eerikainen 1
1 International Finance Corporation Independent Evaluation Group, Washington DC, USA

Bio: Mr. Eerikainen (M.Sc. Chem. Eng.) evaluates environmental and social effects of IFC and MIGA projects. He has contributed to thematic evaluations on the environment, WBG safeguard policies and climate change. Before joining IEG in 2004, he worked as Senior Environmental Evaluation Manager for the EBRD and in the consulting and chemical industries.
Rationale: Environmental and social (E&S) sustainability is a strategic pillar of many multilateral development banks (MDBs) investing in the private sector. The projects may encompass a wide range of E&S risks and opportunities: pollution control, occupational health and safety, protection of biodiversity, global aspects (especially climate change), and social aspects including community engagement, land acquisition, involuntary resettlement, indigenous people and cultural heritage, which clients should manage with their social and environmental management systems. In their accountability and learning functions, MDBs' evaluation organizations evaluate the performance of MDBs' investment projects through ex-post evaluations. The Independent Evaluation Group (IEG) evaluates the World Bank Group's (WBG's) projects and reports to its Board of Directors on evaluation outcomes. IEG Private Sector Evaluation (IEGPE) has developed a robust E&S evaluation methodology for International Finance Corporation (IFC) projects. This presentation analyzes the challenges IEGPE has encountered in evaluating E&S performance and impacts in IFC's various industry sectors, and the methodology used to identify and benchmark performance indicators.
Narrative: Project-level evaluation at IEGPE is based on the Expanded Project Supervision Report (XPSR) that project teams prepare after five years of operational maturity of the project, for IEGPE validation; one of the Development Outcome indicators is Environmental and Social Effects (ESE), which is validated by IEGPE's environmental specialist. The ESE indicator comprises the project's environmental and social
performance in meeting IFC's requirements (IFC's Policy and Performance Standards and WBG safeguard policies and guidelines), and the project's actual environmental and social impacts. IEGPE has sourced its E&S evaluation approach from the International Standard ISO 14031 Environmental Performance Evaluation, which describes a process model for Environmental Performance Evaluation (EPE) and helps identify performance indicators and communicate the evaluation results. The ESE evaluation is based on desk reviews, discussions with implementers and site visits.
IEGPE has chosen a rigorous approach to identify the performance indicators, which comprise a standard set and tailored indicators, depending on the project's objectives and risk profile. Each performance indicator is then benchmarked against IFC's Performance Standards, local E&S laws and industry best practice. After rating each indicator on a four-point rating scale, a matrix of performance indicators is constructed and an overall rating for the project's ESE is given. IEGPE's evaluation of Environmental and Social Effects faces several challenges: lack of baseline and adequate monitoring information, difficulties in assessing a project's wider E&S impacts, as well as IFC's leverage in financial intermediary and corporate investments. These challenges, as well as an approach to assessing impacts as the change in performance indicator ratings between project appraisal and evaluation, are presented in the session.
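To make the rating logic concrete, the minimal sketch below shows how a matrix of four-point indicator ratings might be rolled up into an overall ESE rating. The indicator names and the aggregation rule (a rounded mean plus a flag for the weakest indicator) are illustrative assumptions, not IEGPE's actual method.

```python
# Illustrative only: four-point indicator ratings follow the abstract, but the
# indicator names and the aggregation rule below are assumptions.
SCALE = {1: "Unsatisfactory", 2: "Partly unsatisfactory",
         3: "Satisfactory", 4: "Excellent"}

# Hypothetical performance-indicator matrix for one project:
# indicator -> rating against IFC Performance Standards / local law / best practice
ratings = {
    "Effluent quality": 3,
    "Occupational health & safety": 4,
    "Community engagement": 2,
    "Biodiversity management": 3,
}

overall = round(sum(ratings.values()) / len(ratings))  # assumed aggregation rule
worst = min(ratings, key=ratings.get)                  # weakest-performing indicator

print("Overall ESE rating:", SCALE[overall])
print("Weakest indicator: ", worst, "-", SCALE[ratings[worst]])
```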


O 266

Evaluating Environment in International Development: Contributing to National Results beyond Projects


J. Uitto 1
1 UNDP, Evaluation Office, New York, USA

Questions related to environment and development remain topical 20 years after the Rio Earth Summit. Both national and international actors in governmental and nongovernmental fields, as well as in academia, are searching for insights into how sustainable development can be advanced and environmental concerns incorporated into the development agenda more effectively. There is an environment-poverty nexus at the heart of sustainable development that is still often neglected in development endeavours. Environment as a global public good tends to get short shrift, as in the short term it is seen as an externality and there are perceived trade-offs between economic development and environmental protection. While development programmes often ignore the environment, environmental programmes also tend to operate in isolation. Building upon two global thematic evaluations of UNDP's work on environmental management and on energy and poverty linkages, as well as other evaluative evidence from the country level, the paper argues for integrated approaches that recognize that environment and development are intrinsically interlinked. At the organizational level, this requires cooperation and joint programming between units dealing with issues such as poverty reduction, democratic governance, crisis prevention, and environment. To do so, institutional incentives must be put in place and it is important to develop indicators that track progress at the organizational, programmatic and individual levels. At the country level, there is a need for advocacy to promote such integration, and environmental concerns should be brought to feature in national development strategies. Evaluations have also documented successful cases that can be disseminated and provide lessons for other countries. Evaluation should and can move beyond assessing individual projects in isolation and contribute to the understanding of how environmental concerns can be better incorporated into development efforts in the national context. The paper will draw on the work of the Evaluation Office to illustrate the actual and potential impact of environmental evaluation on improving development efforts.
Keywords: Environment; International development; Sustainability; Evaluation; United Nations;

O 267

Lessons learned from evaluations of Biodiversity Conservation Projects in Latin America and networks' influence on project design and effectiveness
C. Vela 1
1 Self-employed, Quito, Ecuador

The importance of biodiversity conservation is indisputable, particularly in Latin America, where many countries hold mega-biodiversity and important efforts are made by environmental funds and lending agencies willing to ensure its conservation. Networks of agencies and recipient countries play an important role in project design and in project evaluations because they provide room for interaction and for sharing information on successful experiences for replication. While such networks can improve efficiency, they also present some constraints. Some projects whose primary objective is biodiversity conservation present important design and evaluation limitations. At the same time, evaluations of these interventions are particularly important given that such projects may produce unintended negative effects. The paper presents lessons learned from evaluations of projects financed by multilateral lending agencies and donor organizations. It is based on mid-term and final evaluations of projects, and on some country portfolio evaluations, carried out in different countries of Latin America (from Mexico to Chile). It also presents an analysis of the influence of networks on project design and evaluability. All the evaluations involved field visits, direct observation of interventions, interviews with stakeholders (including beneficiaries, authorities and implementing agencies) and desk reviews. In addition, the paper includes actual examples of good and bad practices and proposes recommendations to improve networks for the design and the evaluation of results of conservation projects.
Keywords: Networks; Biodiversity Conservation; Replication;


S3-15 Strand 3

Paper session

Evaluation data and performance assessment II


O 268

Multi-systems governance in European rural development policy: a proposal for self-evaluation by local action groups in LEADER
L. Birolo 1, L. Secco 1, R. Da Re 1, E. Mettepenningen 2, L. Cesaro 3
1 University of Padova, Land Agriculture Environment and Forests, Padova, Italy
2 University of Ghent, Agricultural Economics, Ghent, Belgium
3 National Institute of Agricultural Economics, Rome, Italy

Thursday, 4 October, 2012
17:30 – 19:30

Presenter: Linda Birolo is a PhD student at the PhD School Land, Environment, Resources and Health at Padova University, Italy. Her PhD research, which she will defend in spring 2013, is on methods for the evaluation of Community policy for rural development. Field: evaluation of regional, social and development programs and policies.
The reform of the Common Agricultural Policy for the period 2014–2020 is subject to unprecedented tensions and there are no obvious solutions: a general and substantial reduction of public spending requires greater efficiency and legitimacy of actions. In this regard, evaluation of EU rural development policies is of high importance. However, many authors have raised the question whether the commonly provided tools are able to consider all relations and convergences between resources, priorities and objectives. Within the EU multi-institutional context several limitations have been found in the evaluations already conducted, especially at local level. The present work aims at illustrating the potential of self-evaluation processes, in addition to the formal procedures, for an accurate assessment at all levels. A self-assessment methodology ranks in an intermediate position between external evaluation and the delivery of the policy. On the one hand it can allow localized control during the implementation of the programs in a logic of continuous improvement, and on the other hand it can provide decision-makers with comprehensive evidence of the effectiveness of their actions in order to secure funding. In this paper we specifically focus on self-evaluation processes in the context of LEADER. We consider LEADER as an innovative system of participatory and polycentric governance of rural development policies, assuming that the results and impacts achieved through this cross-sectional approach have greater value towards restructuring and strategic sectoral integration in rural areas. For the purpose of appraising this added value we developed a preliminary set of specific indicators based on principles of good governance, as a dynamic tool to trigger a process of self-diagnosis conducted by LEADER Local Action Groups (LAGs). The tool is based on a combination of the 8 key features of LEADER (LKFs), derived from EU rules, and the following 7 key dimensions (GKDs) of good governance: sustainable g-local development, efficiency, effectiveness, participation, transparency, accountability and capacity, as identified by Secco et al. (2011). Combining the LKFs with the GKDs allows the LKFs to be measured with indicators of the GKDs. In order to validate the indicators, two pilot tests are to be carried out (one in the Flemish Region, Belgium, and one in the Veneto Region, Italy). By means of indicators refined through a self-assessment process we expect to connect a single performance analysis to a general and comparable model of good governance assessment and provide reliable information for the common process of monitoring and evaluation.
Keywords: Rural development; LEADER; LAG; Multi-level governance; Self-assessment;
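The idea of crossing LEADER key features with good-governance dimensions can be pictured as a scored matrix. The brief sketch below uses invented, abbreviated feature and dimension labels and invented scores; the authors' actual 8 LKFs, 7 GKDs and indicator set are not reproduced here.

```python
# Sketch of an LKF x GKD self-assessment matrix with invented labels and scores.
lkfs = ["Area-based strategy", "Bottom-up approach", "Public-private partnership"]
gkds = ["Participation", "Transparency", "Accountability", "Effectiveness"]

# Self-assessment scores (0-5) entered by a Local Action Group, one per LKF x GKD cell.
scores = {
    ("Bottom-up approach", "Participation"): 4,
    ("Bottom-up approach", "Transparency"): 3,
    ("Public-private partnership", "Accountability"): 2,
    ("Area-based strategy", "Effectiveness"): 4,
    # ... remaining cells would be filled in during the self-diagnosis
}

# Aggregate by governance dimension to show where the LAG appears weakest.
for gkd in gkds:
    cell_scores = [s for (feat, dim), s in scores.items() if dim == gkd]
    if cell_scores:
        print(f"{gkd:15s} mean score: {sum(cell_scores) / len(cell_scores):.1f}")
```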

O 269

Young farmers and Rural Development policy: measuring and explaining success
B. Befani 1
1 Roma, Italy

This study analyzes data about 29 farms whose owners have applied to compete for a National Prize, awarded to successful young farmers benefiting from EU Rural Development funds. Success is defined in two ways: the first takes account of global performance (including financial and environmental), interaction with local institutions and civil society, innovation, plus sustainability and potential for dissemination of good practices; each dimension (along with its subcriteria) is assigned a score interval and a weight, and the final measurement is obtained by the weighted sum of the scores. The second relates to the difference made by EU Rural Development funds on farm activity, where a limited number of impact typologies are discovered, only some of which can be quantified: impact is thus measured using mixed, quali-quantitative techniques. Similar success cases (e.g. farms with similar performance or similar impact typology or similar impact magnitude) are then compared on a number of characteristics that are assumed to have contributed to success, by using Qualitative Comparative Analysis (QCA). A number of typical paths emerge for each definition of success, whereby different combinations of conditions are shown to be sufficient for the outcome, but no condition is necessary. These methods for measuring and explaining success are argued to be suitable for cases presenting medium levels of diversity: too complex to be represented by variables only but simple enough to make systematic comparison and synthesis among several cases possible.
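The first success measure lends itself to a simple worked illustration. In the sketch below, the dimension names, score intervals, weights and scores are invented placeholders rather than the study's actual criteria; only the weighted-sum mechanics mirror the description above.

```python
# Weighted-sum success score: each dimension gets a score within its assigned
# interval and a weight; overall success is the weighted sum. Values invented.
dimensions = {
    # name: (score, (interval min, interval max), weight)  -- weights sum to 1 here
    "Global performance": (7, (0, 10), 0.35),
    "Interaction with institutions/civil society": (6, (0, 10), 0.20),
    "Innovation": (8, (0, 10), 0.20),
    "Sustainability and dissemination potential": (5, (0, 10), 0.25),
}

total = 0.0
for name, (score, (lo, hi), weight) in dimensions.items():
    assert lo <= score <= hi, f"{name}: score outside its assigned interval"
    total += weight * score

print(f"Weighted success score: {total:.2f}")
```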


O 270

Reflections on conducting effective evaluations for rural development interventions in China


L. P. Luo 1, L. Liu 1
1 China Agriculture University, Department of Humanities and Development Studies, Beijing, China


An appropriate evaluation methodology is critical for collecting valid data in complex development intervention contexts. This paper explores an appropriate evaluation methodology for development interventions in rural China, drawing on the experience of an evaluation of a Sino-German cooperative project on sustainable agricultural biodiversity management conducted in Hainan Province, China, in 2010. The author proposes that the participatory rural appraisal (PRA) method is appropriate for collecting data that are relevant and meaningful in rural China. Moreover, to make evaluations more effective, it is important both for donors and evaluators to understand and consider the local cultural context when an evaluation is commissioned, designed and conducted. Co-financed by the European Union via the United Nations Development Programme and implemented by GIZ in cooperation with the Chinese Ministry of Agriculture, the project on sustainable agro-biodiversity management (20041010) aimed at increasing the capacity and awareness of different stakeholders to jointly manage and practice agro-biodiversity in a sustainable way. Agro-biodiversity contributes to food security and livelihood security. Farmers in the project areas in Hainan Province implemented different practices, e.g. undercropping, such as growing betel nut under coconut trees. They planted local upland rice and raised local pigs because these local species are more resilient to climate change and also more pleasant to the palate. Conducted in September 2010, the evaluation aimed to assess the impact of the project interventions at both beneficiary and institutional levels. The evaluation methodology included document review, interviews, questionnaires, and the PRA method, which enables local farmers to share and analyze their life and conditions. The paper discusses the strengths and weaknesses of the evaluation methods as they were applied to different evaluands. When interacting with farmers, the PRA method turned out to be most effective in capturing the results and impact of the intervention. It fits the rural Chinese context because the PRA method (1) often requires a group of people to participate, and (2) many of its techniques use visual graphs and are easy to understand. As Chinese farmers prefer staying in a group, the PRA method provides an opportunity for them to socialize and interact among themselves; relevant and meaningful information often emerges in discussions, amid joking and laughter. Additionally, as farmers in the group constantly verify one another's comments, this increases data reliability. The PRA method allows the evaluator to build a collaborative relationship with farmers and collect a lot of information in a relatively short time. Interviews and questionnaires proved to be effective when the evaluation was conducted at the institutional level; interviews and questionnaires with government officials yielded fruitful results. Hence, the selection of evaluation methods should take into consideration the special characteristics of the evaluands. Lastly, the presentation will compare different ways of thinking between Asians and Westerners, tracing their roots back to the influence of Confucius and the Ancient Greeks, respectively; this offers some food for thought on the importance of understanding and respecting one another's culture when an evaluation is commissioned, designed and conducted.
Keywords: Evaluation; Methods; Context; Rural China;



S5-10 Strand 5

Paper session

Evaluation in an educational Context


O 271

The evaluation of the Swiss Development and Cooperation Agency's Vocational Skills Development Activities: A Review
M. Maurer 1
1 University of Zurich, Institute of Education, Zürich, Switzerland

Thursday, 4 October, 2012

17:30 – 19:30

This paper discusses the findings and recommendations, as well as the overall design, of a recent external evaluation of the Swiss Development and Cooperation Agency's (SDC) Vocational Skills Development (VSD) activities. External evaluations are commissioned by SDC's senior management and are coordinated by the Corporate Controlling Division, which reports directly to SDC's Director General. An important characteristic of external evaluations is that, in order to ensure a critical distance from SDC's work, the evaluation team members have not had significant business relations with SDC. The Vocational Skills Development evaluation portfolio covered 10 projects and programmes in 9 countries (Albania, Bangladesh, Burkina Faso, Ecuador, Mali, Moldova, Nepal, Nicaragua, Peru), four of which were analysed on the basis of fieldwork. For all projects and programmes, documentary analysis (review of credit proposals, previous reports etc.) was combined with interviews of key stakeholders who had been associated with the VSD activities. The fieldwork-based analysis included, in addition, tracer studies as well as interviews with some of the beneficiaries who had been surveyed for the tracer studies. On the basis of this information, SDC's VSD activities were rated as satisfactory. The main strength of the programmes under review, in the view of the evaluation team, was their strong orientation towards the needs of their respective national and local contexts, with an awareness of labour market realities. Strong labour-market orientation was also the basis for the contribution of SDC's VSD activities to higher employment, as well as for their achievements in the domain of more fundamental changes to VSD systems. The main weakness of the activities under review was, however, that target populations were not always being reached, particularly when it came to socio-economically disadvantaged people and females. In a similar vein, evidence from the evaluation showed that many of the activities were not contributing to higher incomes in a significant way. Achieving impact thus remained a particular challenge when a long-term perspective is adopted. In order to continue to achieve satisfactory results, the evaluation recommended focusing on the key strengths of SDC's VSD activities, i.e. the strong context orientation and the efforts to involve representatives from the world of work (notably employers and the self-employed) in the planning and delivery of training. In order to improve performance, however, the evaluation argued that it would be important to increase efforts to constantly and holistically monitor the effects of interventions, not only at the level of individual projects, but also across regions. The presentation also includes a review of the overall design of the entire evaluation. It particularly emphasises a) the need for a clear definition of the evaluation object, b) the importance of generating data long before the start of the evaluation, c) the importance of the in-house learning process, and d) the necessity of designing sector evaluations in a way which goes considerably beyond the DAC evaluation criteria that are often cited in the context of evaluations in international development cooperation.
Keywords: Vocational training; Sector evaluation; International development cooperation; Switzerland; Evaluation of sector policy programmes;

O 272

Teachers' Assessment of Monitoring and Evaluation of Sustainable Projects in FCT College of Education Zuba, Abuja, Nigeria
M. U. Okojie 1
1 FCT College of Education Zuba, Social Studies, Abuja, Nigeria

okojiemon@yahoo.com
This study set out to assess Monitoring and Evaluation of Sustainable Projects in FCT College of Education Zuba, Abuja, Nigeria. The researcher employed the survey research method. The population of the study comprised 220 teachers; a simple random sampling technique was used to select 25 teachers. A 15-item questionnaire was constructed and used to collect the needed information from the respondents, and mean scores were used to answer the research questions. The analysis of the data collected led to the following findings (among others): the Monitoring and Evaluation system is not structured; there is a communication gap between the College Management and the Monitoring and Evaluation Desk Officer in the execution of projects; there is a lack of accountability, transparency, commitment and trustworthiness; staff are not aware of some of the projects executed or of the donors' efforts; projects are not sustainable and donors are withdrawing from projects. Based on the findings, the researcher strongly recommended (among others) that the Monitoring and Evaluation system should be structured to utilize resources effectively; the Desk Officer should be involved in the project processes; Monitoring and Evaluation should be structured to increase accountability, transparency, commitment and trustworthiness in the organization; and the College Management should organize in-house workshops in collaboration with the donors to create awareness of the projects among the teachers in the College.
Keywords: Monitoring; Evaluation; Project; Sustainable projects; College management; Accountability; Transparency; Commitment; Trustworthiness and Donors; Teacher; Zuba; Abuja;


O 273

Purposes and criteria for evaluating the way in which the responsiveness principle is implemented within public organizations. Case-study: Romanian universities


I. G. Barbulescu 1, N. Toderas 1, O. A. Ion 1


1 National School of Political and Administrative Studies, International Relations and European Studies, Bucharest, Romania


The responsiveness principle is one of the key elements connecting organizations with the economic, social, cultural and political environment in which they operate (Hanne Foss Hansen, 2005). This principle has to be associated with the principles of accountability and public responsibility, which frame the institutional arrangements pertaining to the way in which organizations accomplish their mission and the objectives of their activity. On the other hand, if the responsiveness principle is associated with those of relevance and utility, it can provide explanations regarding the contingency and the congruence of the interventions and activities undertaken by organizations (Guba and Lincoln, 1989). Hence, evaluating the way in which organizations apply the responsiveness principle can underpin explanations related to an organization's ability to attain high levels of performance, as well as the beneficiaries' trust in the organization and the services it provides (Melvin M. Mark and Gary T. Henry, 2004). Although in Western European states the debate about the responsiveness principle has been both dynamic and fruitful, in the former communist states it is far from exhausted. For example, in Romania the responsiveness principle is applied rather narrowly, whilst the evaluations undertaken do not fully capture the degree of organizational responsiveness. The principle is mainly present in the private and non-governmental sphere (multinational corporations, charities, NGOs) and very little within public governmental organizations. Nevertheless, in Romania there are some higher education institutions that have developed sets of instruments and practices pertaining to the responsiveness principle. These practices have contributed to strengthening the universities within their frame of reference and, at the same time, enabled them to ensure a propitious environment for attaining organizational performance. The legal framework for the functioning of higher education institutions applied since 2011 refers only vaguely to the responsiveness principle, but the process of classifying universities and ranking study programmes undertaken in 2011 also took into account the evaluation of those institutions' responsiveness (Barbulescu, Ion, Iancu, Toderaş, 2011). The end result of this exercise was a change in the policy of financing study programmes from the state budget. Our paper aims at analysing the way in which public organizations in Romania apply the responsiveness principle. In order to do that, we first provide a descriptive account of how the principle is applied. Secondly, we analyse the way in which the Romanian Agency for Quality Assurance in Higher Education and other similar organizations envisage evaluating universities' degree of responsiveness in various organizational contexts. As a practical result of our analysis, we propose an evaluation framework for the way in which the responsiveness principle is implemented in Romanian higher education institutions. This framework could be further used by the Romanian Agency for Quality Assurance in Higher Education to devise criteria for incentivizing those institutions which develop programmes and actions illustrating the responsiveness principle.
Keywords: Responsiveness principle; Higher education; Quality assurance; University evaluation; Accountability principle;


S3-03 Strand 3

Paper session

Gender-sensitive policies, human rights and development evaluation


O 274

The Gender Policies of the African Union and the Regional Economic Communities: a need for better integration.
Thursday, 4 October, 2012
17:30 – 19:30
O. Oyinloye 1
1 Africa Governance Institute, Dakar, Senegal

At the core of the African Union is the belief that African integration, the embodiment of unity, solidarity and shared values among member States, is the Union's greatest tool with which to address Africa's most pressing developmental challenges. To this end, the continent's Regional Economic Communities (RECs) serve as the channels through which this belief, along with the AU's other ideals and values, can be popularized, concretized and accepted by regional member States. One of these ideals is the enabling of more inclusive socio-economic development: development that promotes diversity in the social, economic, cultural and political spheres, among others, and favours the inclusion of all able members in a population's development process. In Africa, women are underrepresented in virtually all these areas, and the relationship between this and the continent's underdevelopment has been alluded to over time. One response to this underrepresentation has been the mainstreaming of gender in all organizational plans and processes, an effort which resulted in the creation not only of the AU Gender Policy, but also of gender policies and initiatives by some of the RECs themselves. However, downstream implementation of these policies faces many roadblocks, including weak leadership; a lack of accountability and incentive systems; insufficient human and financial resources; and the trail-off effect (waning interest after an initial, high-interest period). Given that most of the policies have not yet been implemented or are at most in their very early stages, this paper will use a combination of a desk-based review of available documents and literature, primary interviews with key policy-makers from the AU and the RECs, and primary interviews with gender practitioners and researchers to examine these challenges as they pertain to the gender policies (and gendered policies) of the AU and the RECs. In so doing, it will propose how these policies can be better coordinated, how resources can be better pooled, and how information can be better shared in order to achieve greater effectiveness and stronger long-term outcomes for the development of African women and, by extension, Africa's peoples.
Keywords: Gender; Governance; Public Policy; African Union; Regional Economic Communities (RECs);

O 275

Human Rights and Gender Equality approaches in evaluation. Steps to integrate both approaches in development evaluation
J. Espinosa 1, J. A. Ligero 2, S. Franco 3
1 University of Seville, Seville, Spain
2 Complutense University Madrid, Centro Superior de Estudios de Gestión, Madrid, Spain
3 State Secretariat for International Cooperation and Latin America, Madrid, Spain

Several institutions and authors have pointed out the importance of introducing the human rights-based approach (HRBA) and the gender and development (GAD) approach into development actions, among other reasons because gender equality and respect for human rights are regarded as necessary conditions of the human development process. To the extent that interventions are planned under HRB and gender approaches, evaluations must be as well. Evaluating with GAD and HRB approaches involves being able to discern, understand and assess whether the intervention promotes or protects both human rights and gender equality. There may be serious and rigorous evaluations that are nevertheless blind, for instance, to gender inequalities that a programme may provoke. In this respect, evaluations should deliberately and consciously follow an approach in which gender systems and human rights are taken into account, as an evaluation by itself does not ensure this sensitivity. However, gender- and HR-sensitive evaluations are not easy. Not only is there a diversity of methodological approaches, but there are also different understandings and minimal agreement among different actors, and the trajectories of the HRB and GAD approaches do not always run parallel. The result is that, in following the advice of national and international organizations to evaluate with HRBA and GAD approaches, theoretical and methodological efforts must be made without the certainty that the option chosen is appropriate. Moreover, very few sensitive evaluations are available. To encourage sensitive evaluations and improve their quality, the Spanish Ministry of Foreign Affairs and Cooperation has promoted a process of evaluation research on how to incorporate the HRBA and the gender perspective. On the one hand, the literature on cooperation, evaluation, gender and the HRBA has been systematized; on the other hand, an overview of the key aspects needed to ensure that evaluations are truly sensitive has been produced. Among the key elements identified, three appeared as the most important: 1) the awareness of the commissioning institution; 2) evaluators trained in gender and HR sensitivity; and 3) the selection of a specific methodological approach to gender or the HRBA. We have clustered the various methodological choices into four groups: theory of change approaches to evaluation; stakeholder-oriented and democratic evaluation approaches; critical change and transformation paradigm-oriented evaluations; and criteria-oriented evaluations.
The final outcome is a sequence of evaluation phases, in which those that are essential for the evaluation to be gender-sensitive and human rights-oriented are identified. Describing this sequence is the purpose of our paper. Keywords: Human rights-based approach; Gender and development approach; Development evaluation;


O 276

A rising cacophony or accord in emerging approaches to gender & evaluation?


F. Etta 1
1 African Evaluation Association, Lagos, Nigeria

This paper is in response to an invitation by the newly created Topical Working Group (TWG) on Gender and Evaluation of the EES.


On 25 June 1993, at the World Conference on Human Rights, representatives of 171 States adopted by consensus the Vienna Declaration and Programme of Action. Since then, the UN system and the world in general have been searching for ways to make good the affirmation that human rights and fundamental freedoms are the birthright of all human beings and that their protection and promotion is the first responsibility of Governments. The common plan for the strengthening of human rights work around the world has since inspired and driven communities to action. When the UN Reform Programme was launched in 1997, the Secretary-General's call for the UN system to mainstream human rights into its various activities and programmes, within the framework of their respective mandates, was a major dimension of the effort, as a means to achieve a concerted system-wide approach to human rights. Globally, among the major outcomes of the 1993 conference were the rising rhetoric and actions in support of human rights and the creation of programmes, initiatives and institutions to research, support, monitor and/or protect the various human rights. For three decades before this critical conference, Amnesty International-style research and action was the modus operandi for addressing grave abuses of human rights. In the 1990s, following the conference, the Vienna Declaration became a natural vehicle to highlight the new visions of human rights thinking and practice being developed by women. It became the unifying public focus of a worldwide Global Campaign for Women's Human Rights, a broad and loose international collaborative effort to advance women's human rights (Bunch & Frost, 2000). From these early modest beginnings, much has developed: gender mainstreaming has led to the development of gender-sensitive methodologies, rights-based approaches to development and a multitude of tools and techniques. The Topical Working Group (TWG), conceived as an exchange platform in the service of knowledge production and exchange to support and ensure the integration of gender dimensions in evaluation, is both timely and prudent. This paper will review the prominent contemporary approaches and efforts at the nexus of human rights, women's rights and development evaluation in order to interrogate their coherence and/or harmony. There are currently two broad strands: human rights-based approaches and equity-focused approaches. Since 2006, the African Gender & Development Evaluation Network (AGDEN) has been engaged in researching and developing an approach located at this nexus, which uses a human rights framework embedded within a feminist philosophy. This paper will expound the AGDEN approach while attempting a cartography that makes perceptible the distinctions between it and similar approaches, such as feminist and gender-sensitive evaluations and evaluation from a gender perspective.
References: Bunch, C. & Frost, S. (2000), in International Encyclopedia of Women: Global Women's Issues and Knowledge, Routledge; http://www.ohchr.org/EN/AboutUs/Pages/ViennaWC.aspx
Keywords: Gender; Human rights; Development; Evaluation;


S5-15 Strand 5

Paper session

The interaction of evaluation, research and innovation II


O 277

Theory-based evaluation and the challenges of evaluating the additionality of EU-support programmes for innovation
Thursday, 4 October, 2012
17:30 – 19:30
P. Padilla 1, G. Steurs 1
1 IDEA Consult, Competitiveness and Innovation, Brussels, Belgium

This paper will present the results of a research project aimed at designing a new approach for evaluating the additionality of European programmes. The evaluation and public policy analysis literatures acknowledge the increasing demand for accountability, stakeholder participation, learning, and stronger, better-grounded evaluation results. While not all policy fields have the same evaluation cultures, these trends seem to be general and to translate into demands for renewed evaluation methods and approaches. The American experience showed how the accountability issue has been a motor for the development of evaluation practice in the United States (for instance through the creation of the Government Accountability Office). European policies are also facing growing pressures for more transparency, and more attention is now paid to evaluating whether European policies make a difference. The principle of subsidiarity and the notion of European additionality are key topics in this regard; in such a context, evaluators are confronted with the challenge of analyzing the real added value of European public actions, which are implemented in a context with regional and national systems and policies in place. Mixed-methods approaches and the reconciliation of qualitative and quantitative tools led to the first theory-based approaches elaborated in the 1970s. From the realistic evaluation of Pawson and Tilley to John Mayne's contribution analysis, theory-based approaches proved to be a breakthrough. Starting from the theory of the programme, theory-based evaluations presented a number of advantages, such as the understanding of the causal relationships leading to the observed outcomes and impacts. They also allowed evaluators to take into account the influence of external factors and interactions between the observed policy and other socio-economic phenomena. No theory-based evaluation corresponding to C. Weiss's definition has been performed by DG Enterprise and Industry or DG Research and Technology Development so far, while some academics argue that theory-driven approaches could bring important added value to innovation policy evaluation. Our research seeks to develop a theory-based evaluation approach for assessing the additionality of European innovation support programmes that are embedded in complex systems. The project involves a team of researchers complemented by two academic experts. Relying upon a qualitative approach (literature review, interviews, expert panels), the research proposes to explore new ways of isolating the additionality of EU public policies and programmes in the field of innovation policy. The aim is to develop and strengthen the theoretical basis for analyzing the additionality of European support to innovation, and to public policies more broadly. A number of topics will be addressed, such as protocol issues, bias limitation, and broader methodological and theoretical elements to improve the so-called TBE approach. We argue that evaluators need to distinguish between the theory that supports public intervention in the area of innovation and the theory of what European action adds to national and/or regional actions. We further come up with directions on how the latter theory of change could look and how it could be evaluated in the context of specific cases.
Keywords: Additionality; Innovation; European; Theory-based; Methods;

O 278

The Innovation Barometer Project – a systematic approach to evaluating the pro-innovative public programmes in Poland
J. Pokorski 1
1 Polish Agency for Enterprise Development, Enterprise and Innovation Department, Warsaw, Poland

Enhancing innovativeness, seen as a great chance to build sustainable competitive advantages on the Single European Market and in global markets, has become one of Poland's development priorities for the coming years. A reflection of this strategic goal is the ongoing Innovative Economy Operational Programme (IEOP), 2007–2013. It is the most extensive and complex instrument supporting the innovativeness of the economy ever carried out as part of the Cohesion Policy in the EU (with a total budget of more than 10 billion EUR). The main goals of the Programme are to be achieved through a wide range of complex and often innovative (and therefore experimental and risky) instruments addressed to Polish enterprises and the business environment. PARP is the key institution in the Polish system of innovativeness support. It is above all responsible for stimulating innovativeness among enterprises and the pro-innovation activity of the business environment. In order to meet the development challenges successfully and enhance innovativeness effectively, it is crucial to carefully evaluate the relevance of the tools used. This should make it possible to understand the mechanisms of innovativeness enhancement and to assess the achieved effects on an ongoing basis. PARP, an institution with broad experience in carrying out programmes co-financed by EU funds, is also a key actor in implementing evidence-based policy in Poland. Ever since the pre-accession programmes, the Agency has been systematically evaluating its programmes and using the findings to improve forms of support to enterprises. In autumn 2011 PARP launched an evaluation project called the Innovativeness Barometer, unprecedented in the Cohesion Policy both methodologically and in the range of its objects. The project is a multidimensional and cyclical evaluation study that makes it possible to measure and assess the ongoing results of the Programme (IEOP) in the field of innovativeness. The study is based on research tools and indicators developed by PARP and an expert working group from 2008 to 2010. The tools are web-based and involve an innovative approach to studying the efficiency of support (e.g. tracking research, conjoint and propensity score matching techniques). They also enable the comparison of evaluation indicators with statistical data (from GUS, the Central Statistical Office of Poland, and EUROSTAT) concerning the economic condition and innovativeness of enterprises.
In the future, the Innovativeness Barometer (2011–2015) is intended to measure net effects in certain areas of the IEOP through counterfactual analysis referring to selected control groups. A similar approach has also been used in PARP's ex-post evaluation of the 2004–2006 EU programmes as well as in the evaluation of Phare projects. The findings of the Innovativeness Barometer provide systematic information on the effectiveness of particular pro-innovation measures of the Programme; they support the decision-making process in the implementation system by indicating the productivity of particular aid instruments.


This paper presents a systematic approach to evaluating pro-innovative public programmes in Poland, from programme theory, through defining outcome indicators and designing research tools, to implementing a multi-evaluation scheme with net-effect measurement in selected IEOP measures. This evaluation approach could also be successfully implemented in other countries carrying out pro-innovative support programmes, especially in the EU. Keywords: Innovation; On-going; Evidence-based; Indicators; Tracking;
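To make the counterfactual, propensity-score-matching logic referred to above concrete, the following minimal sketch walks through the basic steps: estimating propensity scores, matching supported to unsupported firms, and computing the net effect on the treated. The data, variable names and effect size are hypothetical and are not drawn from the Barometer itself.

```python
# Minimal, illustrative sketch of a propensity-score-matching estimate of a
# "net effect" (average treatment effect on the treated). Data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical firm-level data: covariates X, support indicator D, outcome Y
n = 1000
X = rng.normal(size=(n, 3))                      # e.g. size, R&D intensity, export share
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # supported firms self-select on X
Y = 2.0 * D + X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

# 1. Estimate propensity scores P(D = 1 | X)
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# 2. Match each supported firm to the nearest unsupported firm on the score
treated, control = ps[D == 1].reshape(-1, 1), ps[D == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(control)
_, idx = nn.kneighbors(treated)

# 3. Net effect (ATT): mean outcome gap between matched pairs
att = (Y[D == 1] - Y[D == 0][idx.ravel()]).mean()
print(f"Estimated net effect (ATT): {att:.2f}")
```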

O 279

Thursday, 4 October, 2012

17:30–19:30

Governance of Regional Innovation Systems and Innovative Clusters – International peer reviews as a tool for learning and strategic development
P. Kempinsky 1
1 Kontigo AB, Stockholm, Sweden

Retaining competitiveness in an increasingly knowledge-based economy requires constant renewal and innovation. Growth is created by innovations based on peak expertise, exchange of knowledge and mutual learning, as well as the impact of triple helix constellations. The ability to innovate is a decisive factor for a country's or region's economic growth and prosperity. Systematic efforts have been made in Sweden, nationally and regionally, to develop innovation systems and innovative clusters in order to strengthen innovative power. By innovation system, we mean actors within research, business and politics/the public sector who together generate, exchange and use new technology and new knowledge in order to create sustainable growth through new products, services and processes. Governance is a key issue for the development of innovation systems and innovative clusters, as this must be done in a multi-layer, multi-actor and multi-function context. Issues concerning strategic learning and development are important to support the governance process for the development of innovation systems and innovative clusters. As part of the governance process, international peer reviews have been used widely in Sweden as an evaluation tool to support learning and strategic development. The paper and presentation will discuss peer reviews as a method to support learning and strategic development in complex governance processes, such as innovation systems and innovative clusters. Based on experiences from Sweden, the paper and presentation will discuss how peer reviews can be developed as part of the learning system supporting governance and the development of innovation systems and innovative clusters. Peter Kempinsky has during the last five years organised and carried out about 20 international peer reviews of innovation systems and innovative clusters in Sweden for regions, clusters and Vinnova (Sweden's innovation agency).


S3-13 Strand 3

Paper session

Evaluation credibility and learning II


O 281

The challenge of evaluating horizontal objectives in structural fund programmes


G. Hallin 1, M. Jonas 2
1 KONTIGO AB, Stockholm, Sweden
2 Linnaeus University, School of Business and Economics, Växjö, Sweden

Thursday, 4 October, 2012

17:30–19:30

Ever since the reform of the European Union structural funds programmes in the mid-1990s, the so-called horizontal integration of objectives such as gender equality and environmentally sustainable development has been an increasingly important aspect of the implementation of programmes. Millions of euros are spent on the integration of horizontal objectives on the argument that they will lead to sustainable regional and social development. Despite this, there is very little empirical evidence of the impact of integrating horizontal aspects into the programmes. The very fact that such objectives are integrated into programmes which primarily have other objectives forms (as is demonstrated in this paper) a major challenge for evaluation. In this paper we first demonstrate why horizontal integration is an evaluation challenge. Three main reasons are identified and categorised: i) the challenge of multiple objectives; ii) the challenge of complex target groups; and iii) the challenge of non-prioritized objectives. In statistical terms this translates into questionable causal relations and expected small effect sizes. All these aspects of integrating horizontal objectives into the programmes make evaluation a serious challenge. The statistical part of the problem thus includes the fact that, to measure small effect sizes, evaluators need to develop sharp tools, e.g. collect their own data rather than using register data, and require a relatively large number of observations. The cost of evaluating the horizontal integration objectives is therefore high compared to measuring the impacts of the main objectives. We argue that there may be three principal ways to resolve the dilemma: firstly, to abandon the idea of horizontal integration as an objective and treat it instead as a precondition; secondly, to prioritize different objectives in an objective hierarchy; and thirdly, to define programmes better in order to arrive at bigger and more measurable expected outcomes. The three ways are related and pose a challenge to the mainstreaming doctrine dominating parts of the debate over horizontal objectives. Furthermore, the whole issue of horizontal objectives will need to be seriously addressed if the European Commission's proposal to stress the importance of measurable output is to be realized in the next programming period of the European Union structural funds. Keywords: Horizontal objectives; Structural funds; Evaluation challenge;
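The sample-size problem the authors describe can be made concrete with a standard power calculation; the figures below are illustrative only and are not taken from the paper.

```python
# Illustrative power calculation: sample size per group needed to detect a
# standardized mean difference (Cohen's d) in a two-sample t-test at
# alpha = 0.05 with 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.5, 0.2, 0.1):  # medium, small, very small effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d = {d}: ~{n:,.0f} observations per group")
```

Even under these textbook assumptions, halving the expected effect size roughly quadruples the required number of observations, which is exactly the cost argument made above for horizontal objectives with small expected effects.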

O 282

When Will We Ever Learn From Each Other? Structural Impediments to Evaluation's Cumulative Contribution in a Developmental State
T. Beney 1
1 Feedback Research & Analytics, South Africa

Based on the results of three evaluation studies conducted in South Africa, this paper posits that the potential value of accumulated scientific knowledge to the achievement of developmental goals will remain unrealized as a result of specific structural factors: technological asymmetries; skills deficits; institutional misalignment; unintegrated knowledge management; research discontinuity; and fragmented communities of practice. Each of these constructs is defined and illustrated based on the South African case. The evaluation recommendations and subsequent efforts to address these structural constraints are also described. However, the contribution the paper attempts to make is in distilling from the evaluation exemplars a framework for analyzing progress in accumulating and applying scientific knowledge, derived from evaluative practice, in public service delivery in the context of a developmental state. Keywords: Knowledge accumulation; Structural impediments; Developmental state;

O 333

Economic analysis of public programs: How can we make them more informative?
E. M. Foster 1, 2, 3
1 Department of Health Care Organization and Policy
2 Department of Biostatistics, School of Public Health
3 The University of Alabama at Birmingham

Program evaluations increasingly include an economic component, such as a cost-effectiveness analysis. The results of these studies can be quite influential with policy makers, and existing studies examine a broad range of policies and programs, such as early childhood education and job training programs. However, the methods used in these studies reflect a set of assumptions grounded in their use in medical settings. These assumptions involve, among others, the role of program scale, of startup costs, of the mutual exclusivity of treatment alternatives and of the independence of the returns to these programs. These assumptions typically do not fit the decisions policy makers need to make in allocating public funds. This presentation offers a set of recommendations designed to make economic analysis better match the needs of policy makers. The work draws on the recent literature in health economics involving portfolios of health investments and budget impact analysis.
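For readers less familiar with the conventional toolkit being critiqued, a minimal sketch of the incremental cost-effectiveness ratio (ICER) that such studies typically report is given below; all figures are invented for illustration and do not come from the presentation.

```python
# Minimal sketch of a conventional cost-effectiveness calculation: an
# incremental cost-effectiveness ratio (ICER) comparing a program to the
# status quo. Figures are invented for illustration only.
program = {"cost": 5_200_000.0, "effect": 1_300.0}     # e.g. outcome units gained
status_quo = {"cost": 3_800_000.0, "effect": 900.0}

icer = (program["cost"] - status_quo["cost"]) / (program["effect"] - status_quo["effect"])
print(f"ICER: {icer:,.0f} per additional outcome unit")
# Note: this ratio says nothing about program scale, startup costs, or how the
# program interacts with other investments in a portfolio -- the kinds of
# assumptions the presentation argues rarely fit public budget-allocation decisions.
```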


S1-23 Strand 1

Panel

Information and communication technology for development


O 283

Information and communication technology for development


C. Heider 1, R. Kirkpatrick 2
1 World Food Programme, Office of Evaluation, Rome, Italy
2 United Nations, Global Pulse, New York, USA

Friday, 5 October, 2012

9:30–11:00

The premise underlying the ICT4D High Level Panel is that the future of evaluation will be propelled by the new information and communications technologies. It will be shaped by the social energies triggered by Web 2.0 and the analytical potential of the big data associated with Web 3.0. Web 2.0 has already begun to wire evaluators, program managers and ultimate beneficiaries closer together through social networks. The next phase will harness sophisticated Web 3.0 analytics to improve the results orientation of development evaluation. It will do so by making sense of the torrent of digital data constantly being created through millions of sensors embedded in mobile phones, ATMs, personal computers, pads and tablets, transport vehicles and industrial machines. The ICT4D High Level Panel will draw out the implications of the networked society for the future of evaluation in the zones of turmoil and transition of the developing world. It will be co-chaired by the Finnish Minister of Development and the Vice President for Strategy at the Rockefeller Foundation. The panel will include Yemi Adamolekun, a technology-for-development activist from Nigeria; Caroline Heider, Director General of the Independent Evaluation Group at the World Bank; Robert Kirkpatrick, Director of the United Nations Global Pulse initiative; and Patrick Meier, an internationally recognized thought leader on the application of new technologies for crisis prevention, human rights and civil resistance, from Kenya. Keywords: Technology For Development;


S1-21 Strand 1

Panel

Networking in development evaluation – Experiences from DAC, ECG, UNEG


O 284

Networking in development evaluation – Experiences from DAC, ECG, UNEG


H. E. Lundgren 1, M. Pennington 2, I. Yong-Protzel 3, B. Sanz 4

Friday, 5 October, 2012

9:30–11:00

1, 2 OECD DAC Evaluation Network
3 Evaluation Co-operation Group
4 United Nations Evaluation Group

This session will discuss experiences in networking in development evaluation with the aim of shedding light on the role of networks in strengthening evaluation capacities, advancing knowledge and supporting collaboration. The panel brings together representatives from three major networks in development evaluation: the Evaluation Co-operation Group (international financial institutions), UNEG (the United Nations Evaluation Group) and the DAC Evaluation Network (bilateral and some multilateral development agencies). The origins and aims of the three networks have some similarities and differences. All three aim to support accountability and learning by strengthening development evaluation, harmonising approaches and encouraging good practice. The DAC Network is the oldest, created 30 years ago in the context of a political debate on whether development co-operation was delivering the expected results, which led to the creation of a network for strengthening evaluation capacities and collaboration. The ECG was similarly created in the 1990s in the context of a discussion on the performance of the multilateral development banks, and a request to increase co-operation and harmonisation between the evaluation departments of the multilateral development banks. UNEG was created in 2004 with the purpose of strengthening collaboration between some 40 UN agencies. It built on a UN interagency working group and, at the time, a discussion was taking place on UN reform and the role of evaluation in the overall UN system. The overall international development architecture is complex and evolving, with over 200 multilateral, global and regional programmes, and many providers of bilateral assistance and/or south-south co-operation. None of the current networks has been set up to have a fully global membership; rather, each meets the specific needs of its respective membership, although their work involves and reaches out to other development partners. While the OECD/DAC is essentially an intergovernmental network, the ECG and UNEG are co-operation mechanisms between international institutions. The panel will discuss: What do these networks do? What do they contribute? To whom are they useful? What have been the effects and impact of these networks on professionalisation and cross-fertilization, and what influence have they had on the public policies of their constituencies? What challenges have been experienced and what may be the opportunities looking ahead? Are there lessons from these networks that could be useful to the growing number of networks and associations in developing countries and to other types of networks? Could the experience with NONIE (the Network of Networks on Impact Evaluation), where the three networks collaborated to set up a joint platform for discussing impact evaluation with partners, be useful to consider for possible future coalitions on broader evaluation issues, methods and practices? Insights and concrete examples will be provided by the panel members, followed by an interactive discussion.


S5-18 Strand 5

Panel

Jordan's Evaluation and Impact Assessment Unit: lessons in evaluation capacity building
O 285

Jordan's Evaluation and Impact Assessment Unit: Lessons on successful organisational capacity building for evaluation
I. Davies 1, L. Al-Zoubi 2, R. Qudisat 3

Friday, 5 October, 2012

9:30–11:00

1 ICDC Inc, Paris, France
2 Ministry of Planning and International Cooperation of Jordan, Evaluation and Impact Assessment Unit, Amman, Jordan
3 Ministry of Social Development of Jordan, Monitoring and Evaluation, Amman, Jordan

Names and concise bios of presenters: Ian C. Davies (Chair and contact person). Ian Davies provides consulting services in corporate governance, finance, management and accountability to executive and political levels of governments, and to boards and executives of public and private bilateral and multilateral organisations in developing, transition and developed economies. He holds a post-graduate degree in public administration in management and evaluation. Eng. Lamia S. Al-Zoubi is currently the Director of the Evaluation and Impact Assessment Unit at the Ministry of Planning and International Cooperation of Jordan. She holds a degree in Architecture and has received professional training in Impact Evaluation, Results-based Monitoring and Evaluation, Strategic Planning and Advocacy on Capacity Assessment and Development Strategies from recognised institutions. She has extensive experience in project management and strategic planning with a focus on evaluation and impact assessment. Rasha Qudisat is currently working as a consultant for the Minister of Social Development and as an expert for Monitoring and Evaluation. She has an MSc degree in Environmental Engineering with extensive experience in performance management, performance indicators, building new M&E systems, planning and design of integrated social services programs, and capacity building and development in these fields. Through her work in MoSD, she leads the process of developing standards for the provision of social services through consultation with various layers within MoSD and the donor community. Rationale: Evaluation capacity building in development cooperation usually consists of training in evaluation. As such, individual professional knowledge and skills may be developed; however, it is organisational capacity in evaluation that is required for use and sustainability. This session informs the development of a theoretical frame of reference and practical guidelines for evaluation capacity building in organisations. Objectives: The objectives of this panel session are to: share with participants the experience of the Government of Jordan in setting up and developing its Evaluation and Impact Assessment Unit; explain the theoretical and cultural considerations behind what was done differently from standard technical cooperation in evaluation capacity building; and show how and why capacity was successfully developed (and continues to be). Narrative and justification: The panel will present three different and complementary perspectives on the Jordanian experience: the evaluator, the client and the external advisor. Each panel member will highlight the aspects of the capacity building process that were most useful from their own vantage point, explain why in comparison with usual approaches, identify the benefits and draw out the lessons. After the short presentations, an interactive discussion will take place with participants. This session should be relevant and useful to evaluators and key stakeholders involved in and interested in successful approaches to evaluation capacity building in organisations, particularly in public sector organisations and government departments in transition and developing economies. Keywords: Organisational evaluation capacity building; Coaching; Management;


S5-12 Strand 5

Paper session

Evaluation in the Health Care Sector


O 286

Can evaluations bring sustainable changes in ministries: Learnings from the Health Impact Assessment (HIA) strategy of Quebec.
P. Smits 1, J. L. Denis 1, M. F. Duranceau 1, L. Jobin 2, C. Druet 2
1 ENAP, Montréal (Québec), Canada
2 Ministère de la Santé et des Services sociaux du Québec, Québec, Canada

Friday, 5 October, 2012

9:30–11:00

Context: Health Impact Assessment (HIA) / évaluation d'impact en santé (EIS) is an evaluation procedure to ensure that all levels of government consider the potential impact of their decisions on the health and well-being of the population. While some countries implemented HIA/EIS without legislative support, the government of Quebec included Article 54 in the law on public health, making it a requirement to consider health aspects. Ten years after the introduction of Article 54, we conducted an evaluation of the impact that HIA/EIS, and other health-oriented practices of the Quebec government, have on the sustainability of these practices inside ministries. Methodology: We used multiple case studies. We collected strategic documentation from ministries: strategic plans, management reports, topic-specific ministerial action plans, and reports on budget and expenses. Ministries whose main orientations are social, health-related or economic were investigated. The analysis used a grid based on the mention of, the breadth of consideration given to, and the importance of health and health-related aspects and of HIA/EIS. Results will highlight how ministries incorporated (or did not incorporate) the logic brought by HIA/EIS into their strategic considerations, whether as orientations, targets, indicators, etc., and how this changed over time. The findings will also emphasize the importance of the coordination unit and network in the development of HIA/EIS and in reinforcing the consideration of evaluation, especially HIA/EIS, in sustainable practices. Keywords: Health impact evaluation; Network; Public policy; Documentary analysis;

O 287

Sector Monitoring and Evaluation Systems: a comparison between the monitoring and evaluation systems in the health sectors of Rwanda and Uganda.
L. Inberg 1, N. Holvoet 1
1 Institute of Development Policy and Management, University of Antwerp, Antwerp, Belgium

Nathalie Holvoet holds a PhD in economics and is a senior lecturer at the Institute of Development Policy and Management of the University of Antwerp. Liesbeth Inberg studied human geography and advanced development studies and is a researcher at the same institute. One of the five principles in the aid reform agenda set for donors and recipients in the 2005 Paris Declaration is the managing-for-results principle. While progress in the implementation of reforms in this area was slow at the start, the recent 2011 Paris Declaration Monitoring Survey shows considerable improvements on this principle: 21% (15 out of 76) of the countries participating in the 2011 survey have results-oriented frameworks that are deemed adequate, compared to 6% (3 out of 54) in the 2008 survey. Despite this progress, the target of 36% of countries having a results-oriented framework by 2010 has not been met. While most countries do have a number of M&E activities and arrangements in place (especially at sector level), there is often a lack of coordination between the different components of a system. Having a properly functioning, nationally owned M&E system is crucial for the use of information for decision-making and for delivering results towards development goals. Notwithstanding the importance of M&E for accountability and evidence-based policy-making, the strengthening of country M&E systems has long remained a largely neglected issue in partner countries and among development partners. Prior to developing or upgrading an M&E system, it is important to assess the quality of existing systems or arrangements, taking into account both the M&E supply and demand sides as well as possible networking among actors on the demand and supply sides of M&E. As a harmonised M&E diagnostic instrument does not yet exist, we elaborated a checklist to diagnose, monitor and evaluate the quality of sector M&E systems. In order to counter the criticism that M&E is often narrowed down to a focus on technicalities, our checklist broadens the spectrum and gives a broad overview of the quality of M&E systems along six dimensions: i) policy; ii) indicators, data collection and methodology; iii) organisation (split into iiia: structure, and iiib: linkages); iv) capacity; v) participation of actors outside government; and vi) use of M&E outputs for accountability and learning purposes. On the basis of the checklist, we compare in this paper the M&E systems in the health sectors of Rwanda and Uganda. The stocktaking in Rwanda and Uganda draws upon a combination of secondary data and primary data collection on the ground, and combines quantitative with qualitative assessment. Our findings hint at a number of similarities but also considerable differences in the way monitoring and evaluation is organised, the degree of participation of non-government actors, and the use of M&E for the objectives of accountability and learning. Keywords: M&E systems; Paris Declaration; Health sector; Uganda; Rwanda;
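Purely as an illustration of how a multi-dimensional checklist of this kind might be operationalised for a cross-country comparison, the sketch below represents the dimensions named in the abstract (with organisation split into structure and linkages) and aggregates invented ratings; the scale and the scores are not the authors' data.

```python
# Hypothetical representation of the checklist dimensions described in the
# abstract, with invented 0-5 ratings for two sector M&E systems.
from statistics import mean

DIMENSIONS = [
    "policy",
    "indicators, data collection and methodology",
    "organisation: structure",
    "organisation: linkages",
    "capacity",
    "participation of actors outside government",
    "use of M&E outputs",
]

scores = {
    "Country A health sector": [4, 3, 3, 2, 3, 2, 3],
    "Country B health sector": [3, 3, 2, 2, 2, 3, 2],
}

for system, ratings in scores.items():
    profile = "; ".join(f"{d}: {r}" for d, r in zip(DIMENSIONS, ratings))
    print(f"{system} (mean {mean(ratings):.1f}) -> {profile}")
```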


O 288

Evaluation of Social Sector Schemes in Government: Lessons to be learnt from National Rural Health Mission Scheme in India
S. Kandamuthan 1
1 Administrative Staff College of India, Centre for Human Development, Hyderabad, Andhra Pradesh, India


Introduction: India, although committed to providing health care for its people, has one of the lowest per capita public expenditures on health. During 2005, the government spent just 0.9% of GDP on health care. Patients were hugely dependent on the private sector and out-of-pocket expenditure was quite substantial. This placed a large burden on households, especially the poor, who were forced to borrow or sell their assets to meet medical expenses. The National Rural Health Mission (NRHM), initiated in 2005 and rolled out in 2007 for a period of five years until 2012, was an initiative by the government to provide effective healthcare to the rural population throughout the country, with a special focus on 18 states which have weak public health indicators and/or weak infrastructure. Currently all the states in India have started new programmes and schemes under the aegis of the National Rural Health Mission, and the total outlay of the scheme is around four billion a year. Objective: This paper looks in detail at the various evaluations of NRHM undertaken in the last five years by the Government and the lessons to be learnt from such evaluations. The paper highlights the major issues in undertaking evaluations such as the Concurrent Evaluation of NRHM, the Common Review Mission by the Ministry of Health, the evaluation of NRHM undertaken by the Planning Commission, evaluation studies undertaken by the National Health System Resource Centre, and individual studies undertaken on behalf of the government. Methods: The study reviews the various evaluation studies undertaken by the Government. For the Concurrent Evaluation of the National Rural Health Mission, the process of the study undertaken in the state of Andhra Pradesh, covering about 7,200 households and more than 100 health facilities, was analyzed in detail. Similarly, the impact of the other evaluations of NRHM was examined on the basis of secondary data analysis and key informant interviews with officials in government. Results and conclusion: It was found that most of the evaluations have not been undertaken in a systematic manner, especially considering that billions of rupees are spent annually. Most of the evaluations of the scheme are discrete events. Along with a review of the various evaluations conducted, the paper highlights the issues in the process of conducting evaluations, their efficacy in influencing policy and whether they really helped future program implementation. It was found from the study that there was a lack of expertise in planning and designing evaluations, leading to delays in implementing suggestions from evaluation results. There is also no proper dissemination of evaluation results for most of the evaluations undertaken. The paper also provides suggestions on how such evaluations should ideally have been undertaken, so that they can inform future evaluations of other social sector schemes in other states in India and even in other developing countries with similar social sector programmes. There is also definitely a need for more capacity building in evaluation in government so that schemes are evaluated properly, in time and effectively. Keywords: Evaluation in Government; Social Sector; Evaluation Results; Developing Countries; Capacity Building;

Friday, 5 October, 2012

9:30–11:00

O 289

Lack of Evaluation Criteria within Health Management in Kenya


G. Nyabade 1
1 Go Fishnet Youth Project, Kisumu, Kenya

Background: The urgent need for defined criteria for monitoring and evaluation programs in Kenya's government sectors, especially within the management system, is a situation worth urgent consideration and initiatives. The District Health Management Information Systems (DHMISs) were established by the Ministry of Health (MoH) in Kenya more than two decades ago. Since then, no comprehensive evaluation has been undertaken. Objective: To propose evaluation criteria for assessing the design, implementation and impact of DHMIS in the management of the District Health System (DHS) in Kenya. Methods: A descriptive cross-sectional study conducted in three DHSs in Kenya: Kisumu, Homa Bay and Uasin Gishu districts. Data were collected through focus group discussions, key informant interviews and document review. The respondents, purposively selected from the Ministry of Health headquarters and the three DHS districts, included designers and CEOs of NGOs working with M&E. Results: A set of evaluation criteria for DHMISs was identified for each of the three phases of implementation: pre-implementation evaluation criteria (categorised as policy and objectives, technical feasibility, financial viability, political viability and administrative operability) to be applied at the design stage; concurrent implementation evaluation criteria to be applied during implementation of the new system; and post-implementation evaluation criteria (classified as internal: quality of information; external: resources and managerial support; ultimate: system impact) to be applied after the system has been implemented for at least three years. Conclusions: In designing a DHMIS model there is a need to build in these three sets of evaluation criteria, which should be used in a phased manner. Pre-implementation evaluation criteria should be used to evaluate the system's viability before more resources are committed to it; concurrent (operational) implementation evaluation criteria should be used to monitor the process; and post-implementation evaluation criteria should be applied to assess the system's effectiveness.


S4-30 Strand 4

Panel

Evaluating the Paris Declaration on aid effectiveness


O 290

Evaluating the Paris Declaration on aid effectiveness


T. Kliest 1, N. Dabelstein 2
1 Ministry of Foreign Affairs, Policy and Operations Evaluation Department, The Hague, Netherlands
2 Secretariat of the Evaluation of the Paris Declaration, Copenhagen, Denmark

Friday, 5 October, 2012

9:30–11:00

The Paris Declaration on Aid Effectiveness, endorsed in March 2005, is an international agreement signed by over one hundred ministers, heads of agencies and other senior officials. The Declaration lays down an action-orientated roadmap intended to improve the quality of aid and its impact on development. An independent cross-country evaluation of the Paris Declaration, commissioned and overseen by an international Reference Group, was initiated in 2007 and completed in 2011. The evaluation consists of 40 separate but coordinated evaluations in Afghanistan, Bangladesh, Benin, Bolivia, Cambodia, Cameroon, Colombia, Cook Islands, Ghana, Indonesia, Malawi, Mali, Mozambique, Nepal, Philippines, Samoa, Senegal, South Africa, Sri Lanka, Uganda, Vietnam, Zambia, Austria, Australia, Denmark, Finland, France, Germany, Ireland, Japan, Luxemburg, Netherlands, New Zealand, Spain, Sweden, the UK, the US, the AfDB, the AsDB and the UNDG. A synthesis report of the first phase was published in June 2008 and used in the preparations for the High Level Forum on Aid Effectiveness which resulted in the Accra Agenda for Action. The second phase of the evaluation was published in May 2011. The report was used in the preparations for the 4th High Level Forum on Development Effectiveness held in November 2011 in Busan, Korea. The evaluation findings featured prominently in the discussions at Busan. This is one of the largest joint evaluations undertaken to date, applying a unique decentralised approach. The panel will present and discuss the organisational and methodological lessons learned by different stakeholders in the evaluation. This evaluation is unique in several respects: 1. It is an evaluation of the implementation of a political statement rather than a specific project or programme. 2. It is an evaluation that attempts to capture changes of behaviour across a wide range of national and international actors. 3. It is designed to ensure developing country ownership of the evaluation by relinquishing the usual donor leadership and control, enabling partners to design and execute the country-level evaluations within a common framework. 4. It is designed to be decentralized but with a sufficiently strong level of coordination to ensure that the evaluation delivers an effective cross-country evaluation process. The panel will discuss the organizational and methodological lessons learned from the design and the conduct of this complex evaluation. There will be brief presentations on three subjects followed by discussion. The first presentation will describe how the evaluation was organized and designed to ensure stakeholder ownership, and discuss the strengths and weaknesses of the elaborate governance set-up. The following presentation will cover the conduct of the two types of component evaluations, those conducted at country level and those conducted at donor level, and point out the challenges of synthesizing the findings of the 40 component evaluations, which were characterized by varying depth, coverage and quality. It will also discuss the challenges of evaluating the effects of a political statement such as the Paris Declaration. The final presentation covers the main outcomes of an independent meta-evaluation of the Paris Declaration Evaluation process, highlighting aspects of credibility and utility. Keywords: Joint evaluation; Multi-site evaluation; Utilization focused evaluation; Cross cultural evaluation; Evaluation of the implementation of a political statement;


S3-31 Strand 3

Panel

Evaluating empowerment: integrating theories of change, theoretical frameworks and M&E
O 291

Evaluating empowerment: integrating theories of change, existing frameworks and useful, context-sensitive and credible M&E
Z. Ofir 1, M. Mentz 2, D. Mukhebi 3, V. Mukuna 3, M. Noordeloos 3

Friday, 5 October, 2012

9:30–11:00

1 International Evaluation Advisor, Gland, Switzerland
2 University of the Free State, Bloemfontein, Republic of South Africa
3 AWARD, Nairobi, Kenya

It is increasingly important for the evaluation community to use the power of monitoring and evaluation (M&E) to deepen understanding of change and to design and manage complicated interventions with greater success. Another ongoing challenge is to better understand and measure empowerment a concept often (inaccurately) equated with capacity strengthening. How can empowerment be assessed, and benefits from such efforts attributed? African Women in Agricultural Research and Development (AWARD) is a professional development program that focuses among others on empowering AWARD Fellows through a series of integrated strategies. These are aimed at cultivating better (women) leaders in the sector who are able to contribute more effectively to poverty alleviation. Over the past three years its management team has used systems and theory of change thinking as a basis for real-time monitoring, adaptive management and accountability. They are continuing to test a detailed theory of change against an empowerment framework adapted from the human capabilities and empowerment as expansion of agency work of i.a. Amartya Sen, Linda Mayoux, Solava Ibrahim and Sabine Alkire, with a link to the recent OPHI developed Womens Empowerment in Agriculture Index. The AWARD stakeholders have purposefully cultivated a use-focused yet rigorous approach to understanding and measuring change. Principles of realist evaluation, contribution analysis and triangulation are being used to strengthen the utility and credibility of the work. The panel will unpack the implications of their experiences for M&E practice. Zenda Ofir focuses on the empowerment framework and critical aspects of the consensus-developed theory of change. Marco Noordeloos highlights the key linkages between the M&E system and the theory of change, and how these were used to test and adjust the theory of change. Dorothy Mukhebi will describe how the development of a theory of change and demand-driven monitoring system can strengthen a culture of learning for improvement, accountability and knowledge generation. Valerie Mukuna and Melody Mentz will address the rigor and credibility of the evidence obtained through M&E. Zenda Ofir is a South African born international evaluation specialist, former AfrEA President, ex AEA Board and NONIE steering committee member, and evaluation advisor to international organizations. Melody Mentz is Chief Officer: Centre for Teaching and Learning at the University of the Free State, South Africa. A Fulbright scholar during her PhD studies, she focuses on higher education research with special interests in statistical analysis and research methodologies. Dorothy Mukhebi is the Mentoring Coordinator of AWARD. Before joining the program she was Coordinator of the Regional Agricultural Information Network of the Association for Strengthening Agricultural Research in Eastern and Central Africa (ASARECA). Valerie Mukuna is the AWARD M&E Program Officer. She has strong information management expertise and holds a BSc in Information Sciences. She has nearly completed an MA in Population Studies. Marco Noordeloos is the AWARD Fellowship Manager and M&E Coordinator. With Masters degrees in both Marine Biology and Environmental Management, his professional interests include applying analytical skills, pragmatism and creativity to strengthen and innovate AWARDs M&E efforts. Keywords: Theory of change; Empowerment; Realist evaluation; Contribution analysis; Capacity strengthening;


S1-09 Strand 1

Paper session

Evaluation for improved governance and management II


O 292

Evaluation as a tool for commercial development


M. Teisen 1
1 Technical and Environmental Administration, Documentation and Evaluation, Copenhagen S, Denmark

Friday, 5 October, 2012

9:30–11:00

For years, evaluation has been used as a tool for developing citizen-related services in Copenhagen. However, it is a new trend that the same tool is used to support systematic business development. The challenge: Copenhagen is a city in growth. The city is currently experiencing massive urbanization, and since 1990 the number of families with children has increased by 30%. The economy is growing and tax revenues are good, but seen in a business context Copenhagen cannot keep up, nationally or internationally. In a national study from 2011 which illustrates the local business climate, Copenhagen ended up number 79 out of a possible 97! Also internationally, it is clear that the city is not on a par with the cities with which it compares itself. Together with the adoption of Copenhagen's budget for 2012, a two-stringed strategy on business development in the city was adopted. Firstly, the strategy endorses a series of individual initiatives which will ensure business responsiveness and make Copenhagen an attractive city for companies. Secondly, the strategy contains an evaluation scheme with an embedded development requirement to ensure continued development and improvement in future. The most crucial elements in the strategy are 1) accurate measurement and time-setting of all municipal tasks performed for private businesses (approvals, handling of applications, etc.), in accordance with the political target that the average response time towards private businesses should be reduced by a minimum of 10%; and 2) the requirement that the average satisfaction of private businesses with the municipality should, conversely, increase by 10% before the end of 2012. Evaluation system: An evaluation concept has been established which involves all relevant stakeholders, both internally and externally. The concept thus consists of performance measurement, performance management and performance leadership components. As the system is being developed with a view to being implemented throughout 2012, the management and leadership parts are being launched at the same time as the first results are received. For the purpose of measuring processing time, 50 business-oriented regulatory functions distributed throughout the municipality have been defined. For each task, a measurement procedure has been established, and the processing time is measured both at the time of the baseline situation in April and at the first measurement in November 2012. For the measurement of business satisfaction, a wide range of different areas have been identified. Here, the primary emphasis has been put on the areas which the municipality has an opportunity to influence, but also on areas in which the municipality can create less bureaucracy or otherwise ease life for businesses by means of lobbying national legislation. Measurements are carried out in a representative proportion of the trades represented in Copenhagen. The evaluation is conducted by the Joint Evaluation Unit in the Technical and Environmental Administration. Subsequent to each of the two separate measurements, a follow-up scheme for 2012 will be launched in order to meet the politically set targets. Keywords: Performance Management; New use of Evaluation; Commercial development;
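As a toy illustration of how the two political targets described above can be checked against baseline and follow-up averages (all numbers are invented and are not the municipality's data):

```python
# Hypothetical check of the two 10% targets against baseline and follow-up
# measurements. Figures are invented for illustration only.
baseline = {"avg_response_days": 18.0, "avg_satisfaction": 3.4}   # April baseline
follow_up = {"avg_response_days": 15.5, "avg_satisfaction": 3.8}  # November measurement

time_change = (baseline["avg_response_days"] - follow_up["avg_response_days"]) / baseline["avg_response_days"]
satisfaction_change = (follow_up["avg_satisfaction"] - baseline["avg_satisfaction"]) / baseline["avg_satisfaction"]

print(f"Response time reduced by {time_change:.0%} (target: at least 10%)")
print(f"Satisfaction increased by {satisfaction_change:.0%} (target: at least 10%)")
```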

O 293

From targets to indicators or the other way round? The use of indicators to monitor performance of the Flemish government.
D. Verlet 1, G. De Schepper 2
1 Research Centre of the Flemish Government, 1000 Brussels, Belgium
2 Flemish Government, Department of Administrative Affairs, 1000 Brussels, Belgium

At the moment, the Flemish government is undergoing an efficiency and effectiveness cure. The financial-economic crisis sheds another light on several changes. More than ever, it raises another dimension in the debate: achieving more with fewer resources. With our contribution we wish to analyse the situation of the Flemish government more closely. In the wake of the New Public Management debate, there was and is a renewed interest in good governance. This trend was driven by political and ideological changes. It also led to renewed attention to what good governance is about, stressing the performance of government. This refreshed good governance idea was introduced in the Flemish administration from 2000 onwards, following New Public Management principles. The Better Administrative Policy started in 2000 and was implemented in 2006. The idea was a complete rupture with the past in terms of organisational structure and processes. In spite of an identical structure in each policy domain, an amalgam of more than 70 entities was created. In 2006, the Flemish Government embarked on an ambitious project: Vlaanderen in Actie (ViA) – Flanders in Action. Flanders resolutely directed its attention farther into the future, towards 2020. Flanders must assume its rightful place among the very top regions in Europe, economically, socially and on the ecological plane. The financial-economic crisis, from 2008 onwards, pushed the Flemish Government into a vast multi-annual programme (MAP) for its public service administration. At the same time, small cutbacks in the budget are being made everywhere in the Flemish Government.


We note that a multitude of indicators is used in order to depict the performance of the Flemish government. Moreover, the selected indicators partly differ depending on the actors using them. As such, this can be situated in the normative debate on the selection and the use (and misuse) of indicators. Another important finding is that the selection of indicators often precedes the actual policy targets, while logic demands it the other way round. In this way, the actual situation does not match the theoretical ideals. In the paper we focus first on the theoretical background of the several concepts central to this paper and their definition (performance, efficiency, effectiveness, (principles of) good governance). In addition, there are some reflections on the use of indicators to measure performance.


However, we emphasize the way indicators are selected and designed by the different actors in the context of the Flemish government. For instance, there are similarities and differences in the design, selection and use of indicators by the different actors involved in the Commission on Efficient and Effective Government, the Multi-Annual Programme, the general policy agreement and the different instruments in the management information system, and in the way this information is available and used (or not) by the policy level. In doing so, we can say more about the way performance information is selected, monitored and used by the different organizational units related to the Flemish government. Keywords: Performance; Efficiency; Indicators;

Friday, 5 October, 2012

9:30–11:00

O 294

Like, tweet, tag and poke all the way to the voting polls – Social networking in South African governance
J. Pretorius 1, S. Gopal 2
1 University of the Witwatersrand, Johannesburg, Republic of South Africa
2 University of KwaZulu-Natal, Centre for Communication, Media and Society, Durban, Republic of South Africa

South African President Jacob Zuma's state of the nation address (SONA) trended on Twitter as the president asked citizens to pose questions online before he prepared his speech. His use of those questions from social media platforms turned his state of the nation address into an online event. This participatory process, to a large extent, sent a positive message to the populace, particularly those he engages with online. Facebook users increased from 1.4 million in 2009 to 2.5 million in 2010, representing a 79.5% annual growth rate, according to Facebook statistics. The growing trend of using social platforms in political processes was powerfully demonstrated by President Obama in his election campaign in 2008. This practice spread globally and has been demonstrated in Africa in a number of prolific ways: to support social movements, such as those of the Arab Spring witnessed in 2011, and presidential campaigns such as that of President Goodluck Jonathan in Nigeria. South Africa too, although to a limited extent, has begun to utilise these platforms in a number of political activities. Much like globalisation, these platforms have an intrinsic binary nature: while they unite communities on the one hand (inclusionary), they also divide in terms of those without the means and access to the internet (exclusionary). How these cleavages are managed is very important if governments aim towards the interconnectedness of communities. Several key constraints, such as the current Protection of Information Bill before parliament, are vehemently opposed and seen as a glaring contradiction to the transparency and openness which are seemingly central to South African governance. The key questions, however, are a) how have these activities been used politically? b) have they been effective? and c) to what extent have these platforms been used to enhance governance? Notwithstanding that the use of social media still reaches only a fairly small part of the total population (49 million) and is still a very new tool within the media space, evaluation methods are still at a very conceptual stage. The report will evaluate the South African State of the Nation Address (SONA) 2010 and 2011 in order to assess the usage of social media and its effectiveness in governance.

O 295

Role of information and communication technologies in monitoring and evaluation – implications for performance management and capacities in Developing Countries
E. Georgieva 1
1 European Commission, DG Enlargement, 1040 Brussels, Belgium

Short bio: I am currently working as an Evaluation Officer at the EU Commission, DG Enlargement. I am task manager for a number of evaluation assignments, including country programme evaluations, multi-country thematic evaluations and internal evaluations. I am a graduate of the University of Maastricht (MA in European Public Affairs) and I am currently pursuing an MA in Development and Governance from the Centre Européen de Recherches Internationales et Stratégiques, Brussels. Rationale: In today's globalised and digitalised world, the use of new information technologies is ever increasing and penetrating all aspects of life. In the field of monitoring and evaluation (M&E), information and communication technology (ICT) is acquiring an important role as well, by reinforcing the efficacy of data management and processing. ICT capacity and the use of informatics for better and timelier information on results have also been identified by the World Bank as one of the four pillars of capacity development in M&E. The idea of reinforced country capacities in M&E, as an aspect of good governance in the public sector and an important performance management tool, is also in line with the general commitment to focus aid on results. Therefore, it would be useful to explore the opportunities which ICT could offer in M&E and also the implications this might have for performance management as a whole and for capacity development in developing countries. Objectives: To add to the debate on the role which ICT can play in today's networked world in strengthening developing countries' performance management systems through reinforcing their monitoring and evaluation capacities.


Brief narrative and justification: The chosen topic addresses the overarching theme of the Conference and is of direct relevance to the evaluation community, as the paper will explore the opportunities which new information technologies offer for strengthening country M&E systems and capacities. It will also discuss how new technologies can contribute to improving the relevance, efficiency, effectiveness and impact of (monitoring and) evaluation results in developing countries in the context of the renewed consensus on aid effectiveness.


The added value of the paper will be to improve the understanding of evaluation practitioners, managers, commissioners and/or users about the benefits of (and de facto need for) making better use of modern technologies in both 1) monitoring and evaluation as indispensable tools of performance management, which are part of the broader results-based approach to public programmes and policies, and 2) country systems as a whole. The recent events of the Arab Spring are but one example of how powerful new technologies can be in reaching out to people and spreading knowledge and information in a cost-effective way. This potential can be tapped and used to facilitate the building up and functioning of M&E systems in developing countries. Keywords: Capacity; Development; Evaluation; Performance; ICT;

Friday, 5 October, 2012

9:30–11:00


S1-01 Strand 1

Paper session

Open source, data exchange and evaluation


O 296

Swarm Intelligence or Lost in the Crowd. Open Data, Public Scrutiny of Public Action, and How Evaluation Changes
L. Tagle 1, A. Pennisi 2
1 Evaluation Unit of Regione Puglia, Bari, Italy
2 Italy's State Budget Department, Rome, Italy

Friday, 5 October, 2012

9:30–11:00

The paper aims at exploring the consequences of the gradually increasing availability of Open Data for evaluation as we know it. Using concepts from the literature on evaluation and democracy, it contends that new technologies both require new behaviour from evaluators and open up possibilities in the very framework in which evaluation is done. The pressure to open up data changes the way governments and public sector offices conceptualize, produce, and disseminate data. Responding to this demand requires that internal procedures change in fundamental, still partially unexplored ways. Issues also arise for citizens seeking information. They face a rapid growth of internet-based sources, which both creates opportunities for research and difficulties in assessing data quality, credibility, and usability. It also implies that public interventions, be they programmes, projects, or services, are open to public scrutiny of a new, more informed type. It increasingly involves expert, non-expert, and differently-expert scrutiny. It is unlikely that Open Data will ever provide all, or even most, of the information needed for an evaluation. There is a risk that, in addition to opening up new research avenues and framing new evaluation questions posed by new actors, the availability of great masses of data on public policies obscures the need to directly observe effects and to build credible theories about phenomena. The very existence of open data, and the possibilities it opens up for public scrutiny, call into question the role of internal and external evaluators. This is even more so when thinking of the opportunities opened by the ability to conjure collective intelligence in evaluation processes, using concepts already developed in the participation tradition. The paper explores these themes based on an ongoing research project. The two authors are involved in the Open Data movement in Italy and will advance their research during the coming months through their work, research on the existing literature, and workshops (e.g. within the Sapienza Seminar on Classic Evaluation Theorists). Greene, J. (2006). Evaluation, Democracy, and Social Change, in Ian F. Shaw, Jennifer C. Greene and Melvin M. Mark (eds.), The SAGE Handbook of Evaluation. Sage: Thousand Oaks; and, by various authors, the essays collected in Greene, J. and L. DeStefano (eds.), Evaluation as a Democratic Process: Promoting Inclusion, Dialogue, and Deliberation, New Directions for Evaluation, No. 85. For example, Nabatchi, T. (2012) A Manager's Guide to Evaluating Citizen Participation. The IBM Center for The Business of Government; Haahr, J. H. (2004) Open co-ordination as advanced liberal government. Journal of European Public Policy 11:2, April 2004: 209–230; Lessig, L. (2009) Against Transparency. The perils of openness in government. The New Republic (http://www.tnr.com). Keywords: Open Data; Evaluation Capacity Building; Evaluation as a Profession; Participation;

O 297

Using Open Source Survey Tools for Qualitative Inquiries on Educational Development at a Distance Online University
C. Bosse 1, C. Ives 1, D. Briton 2
1 Athabasca University, Center for Learning Design and Development, Edmonton AB, Canada
2 Athabasca University, Faculty of Humanities and Social Sciences, Edmonton AB, Canada

This paper reports on two open source survey tools that were used to gather data related to Athabasca University's (AU) educational development activities within a qualitative evaluation framework. First, a Moodle questionnaire module was used to assess the educational development needs of faculty. In another instance, LimeSurvey served to gather qualitative information for an expert review of the usability of course learning objects along both technical and pedagogical dimensions. A comparative review of both online tools will be provided from an educational development perspective. It aims to analyze the multiple uses of evaluative instruments as part of a broader discussion on utilization-focused evaluation in the context of Higher Education projects. Open education is an integral part of Athabasca University's organizational culture as one of the pioneering online and distance teaching universities. There is therefore strong institutional support for open source tools such as LimeSurvey and Moodle, the latter being the university's learning management system (LMS). The databases and servers for each tool are hosted within different units of the Canadian Open University. This level of technical integration within the institution makes it easier to access and use these open source survey tools as part of academic practice for both faculty and professionals. Within this institutional context, integrating open source tools to conduct qualitative inquiries on recent educational development initiatives sponsored by AU's Centre for Learning Design and Development can be viewed as a strategic alignment towards supporting innovative teaching and learning activities. In fact, one of the rationales for using Moodle to conduct a needs assessment was to build on AU faculty's familiarity with the LMS and raise their awareness of the Moodle questionnaire module. One of the intended outcomes is to use this feature to gather additional qualitative feedback from students to enhance course design and development. Similarly, the expert review conducted through LimeSurvey provided an opportunity for faculty and professionals to test the tool as well as to respond to the object of the qualitative inquiry, which focused on improving the design of future course learning objects.

Although the two qualitative evaluation projects differed in objective and scope, one of the reasons for using these web-based open source survey tools stems from an institutional commitment to accessibility, flexibility and ease of use. This factor could have an influence on participants' responses and on the findings emerging from both online qualitative inquiries. At this exploratory stage of the comparative review, it is anticipated that Moodle and LimeSurvey will be embedded in AU's systematic, research-based responses to appropriately identify and address educational development needs and challenges.


Keywords: Open source survey tools; Online qualitative evaluation; Educational development; Utilization-focused evaluation; Distance online university;
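The abstract does not specify how responses were exported from the two tools for analysis. Purely as a hedged illustration, the sketch below pulls responses through LimeSurvey's RemoteControl 2 JSON-RPC interface (a standard LimeSurvey feature); the endpoint URL, credentials and survey ID are hypothetical placeholders, and a comparable export could equally be done from the administration interface.

```python
# Illustrative sketch only: exporting LimeSurvey responses for qualitative review
# via the RemoteControl 2 JSON-RPC API. The URL, credentials and survey ID are
# hypothetical placeholders, not details taken from the abstract.
import base64
import json
import urllib.request

API_URL = "https://survey.example.edu/index.php/admin/remotecontrol"  # hypothetical endpoint

def rpc(method, params):
    """Send one JSON-RPC call to the LimeSurvey RemoteControl endpoint."""
    payload = json.dumps({"method": method, "params": params, "id": 1}).encode()
    req = urllib.request.Request(API_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Open a session, export all responses as CSV, then close the session.
key = rpc("get_session_key", ["admin_user", "admin_password"])   # hypothetical credentials
encoded = rpc("export_responses", [key, 123456, "csv"])           # 123456 = hypothetical survey ID
rpc("release_session_key", [key])

csv_text = base64.b64decode(encoded).decode("utf-8")
print(csv_text.splitlines()[0])  # header row: one column per question
```

A comparable extraction from the Moodle questionnaire module would go through Moodle's own export or reporting functions; the specifics are not described in the abstract.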

O 298

Dinosaurs versus androids: out with the old and in with the new?
Sally Duckworth 1, Liz Smith 1
1 Litmus, New Zealand


In today's global and networked environment, stakeholders and beneficiaries contributing to evaluations often wish to engage in ways that are accessible and convenient to them, as well as authentic and meaningful. Commissioners of evaluation are also requiring value-for-money and time-conscious evaluation approaches that often preclude resource-intensive research approaches such as telephone surveys and traditional face-to-face qualitative methods. These requirements have led many evaluators to develop and undertake cost-effective, innovative electronic approaches to data collection and reporting. Often these new approaches have led to increased levels of stakeholder and beneficiary engagement and to more affordable and timely results for evaluation commissioners. However, careful consideration needs to be given to these approaches in relation to the validity and reliability of the data collected and their use within process, outcome and impact evaluations. Consequently, in adopting these innovative electronic data collection tools, evaluators need to weigh up the advantages and disadvantages in the context of the evaluation being undertaken and of the other data collection tools being used. This presentation will draw upon recent examples of social media and online approaches that the presenters have used to gather data in this changing world, as well as their use in the interpretation and validation of evaluation data. It will explore the value, and critique the use, of online qualitative forums (taking the conversations back to consumers in their own space and time), focus group data streaming (allowing observers to view research discussions remotely in the privacy of their offices), e-reporting (enabling stakeholders to contribute to the reporting process online) and webinars (enabling evaluators to present research data and findings to stakeholders across the globe). The presentation will conclude that online and social media approaches do have value and an important place in the evaluator's toolbox. However, like all methods, they need to be carefully reviewed in relation to their benefits and relative trade-offs for specific evaluations.

O 299

Evaluation of the Swiss Federal Data and Information Act and of data exchange between public entities
W. Bussmann 1
1 Swiss Federal Office of Justice, Bern, Switzerland

Innovation in the field of information technology (increasing storage capacity, RFID technology, data tracking on the internet, data warehousing, data mining, etc.) creates considerable challenges for data protection. To take stock of strengths and weaknesses, an evaluation of the Swiss Data and Information Act was commissioned in 2011. It consisted of expert interviews (with firms in the field of information technology and with the Swiss Federal Data Protection and Information Commissioner), a population survey on values and practices of internet use (1014 persons), a study of ten data protection cases treated by the Swiss Federal Data Protection and Information Commissioner, an international legal study, and a detailed legal study of 269 court cases regarding data protection. The latter legal study examined which legal instruments (paragraphs of the Swiss Data and Information Act) were used and whether the complaints were successful or not. On the one hand, the evaluation report showed a low use of the existing legal possibilities for data protection. On the other hand, actions of the Swiss Federal Data Protection and Information Commissioner were rather effective. This has to do with the high risk of image damage for large firms and with the public administration's fear of losing credibility; both comply with regulation in order to maintain their reputation and credibility. In the population, the use of information technology, along with data security and data protection, is deemed important. In practice, however, users of information technology are not coping well with the challenges and expectations related to data protection. They are reluctant to rely on their rights, given the risks and costs associated with doing so and the relatively low return should they end up on the winning side. In the paper, the methodology and results of the evaluation of the Swiss Data and Information Act and of other related evaluations (data exchange within the public administration) will be presented. Given the ubiquity of information technology and its impact on the lives of its users, the use of appropriate evaluation methodology will also be discussed. Keywords: Data protection; Evaluation of legislation; Information society; Switzerland;
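The abstract does not describe the tooling used to code the 269 court cases. Purely as an illustration of the kind of tally reported (which provisions were invoked and whether complaints succeeded), here is a minimal sketch assuming a hypothetical coded case file; the file name, column names and coding scheme are assumptions, not the study's materials.

```python
# Minimal sketch (not the study's actual tooling): tallying which provisions of the
# Data Protection Act were invoked in coded court cases, and how often complaints
# succeeded. "cases.csv" and its column names are hypothetical.
import csv
from collections import defaultdict

invoked = defaultdict(lambda: {"cases": 0, "upheld": 0})

with open("cases.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):                               # one row per court case
        for article in row["articles_invoked"].split(";"):      # e.g. "Art. 12;Art. 15"
            article = article.strip()
            invoked[article]["cases"] += 1
            if row["outcome"] == "upheld":                      # complaint (at least partly) successful
                invoked[article]["upheld"] += 1

for article, stats in sorted(invoked.items(), key=lambda kv: -kv[1]["cases"]):
    rate = stats["upheld"] / stats["cases"]
    print(f"{article}: {stats['cases']} cases, {rate:.0%} upheld")
```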


S2-33 Strand 2

Panel

Joint evaluations: Advancing theory and practice


O 300

Joint evaluations: Advancing theory and practice


A. Williams 1, M. Moss 2, A. Ruddy 2, P. Muller 2
1 NATO, Allied Command Transformation, Norfolk, USA
2 Indiana University, Center for Evaluation and Education Policy, Bloomington, USA

Friday, 5 October, 2012, 9:30 - 11:00

Evaluations have traditionally been arranged as principal-agent relationships between a management, leadership, funding or accountability body, and the evaluating body. However, with the acceleration of the internationalization of evaluation and increasing drive for cost-efficiencies, recent practice has seen the emergence of joint evaluations involving more complex arrangements between multiple principals and agents. This trend is evident from the increasing numbers of joint evaluations of humanitarian actions, environmental and natural resource management programs and international development cooperation and assistance programs. Despite this proliferation, however, there is little theoretical work on joint evaluations that considers the variables important to collaboration and their impact on the process and methodology of evaluation. Furthermore, new information technologies and the advent of social networking without borders continue to create unprecedented opportunities for multi-operational, cross-cultural joint evaluation efforts, significantly increasing the need for more systematic and theoretical approaches to this evolving field. The panel session directly addresses this gap by increasing understanding of current best practices in joint evaluations, and providing a theoretical framework that advances understanding of the current and future roles of joint evaluation in a networked society. The goals of the session include both addressing the current landscape of joint evaluations by reviewing literature on theory and best practice; as well as advancing theory and practice via application of multidisciplinary perspectives on the collaborative process of joint evaluation, development of a typology for classifying joint evaluation, and development of a theory-based approach to joint evaluation. The panel chair, Andy Williams, will open by providing a context for the critical role of joint evaluation in current and future work in the field. In the first presentation, Best Practices and Lessons Learned from Joint Evaluations, Dr. Marcey Moss will provide a synthesis of relevant findings from an empirical review of joint evaluations. Topics covered will include: the rationale for conducting joint evaluations; the necessary pre-conditions; the challenges and transaction costs involved; structural issues of governance and administration, political economy issues, and social capital issues. Next, Dr. Annie Ruddy will discuss Collaboration from a Multidisciplinary Perspective: Developing a Theoretical Framework for Understanding the Collaborative Process of Joint Evaluations. By drawing on the diverse multidisciplinary research and literature on collaboration (e.g., public administration, sociology, business, political science, social sciences), Dr. Ruddy will provide insight into the complex nature of the collaborative process and establish the foundation for a more systematic approach to joint evaluations. For the final presentation, Developing a Theory-Based Approach to Joint Evaluation, Dr. Patricia Muller will propose a theorybased framework for joint evaluations grounded in both multidisciplinary collaboration research and relevant evaluation theories (e.g., stakeholder-based evaluation, participatory or collaborative evaluation); and will discuss a proposed typology for classifying and understanding joint evaluations. 
At the conclusion of the panel presentations, the chair will discuss future directions for empirical research and further explanatory theory, and will engage the audience in discussion of critical issues related to joint evaluation. Keywords: Joint evaluation; Collaboration; Joint evaluation theory; Multi-organizational evaluation;


S2-10 Strand 2

Paper session

Evaluation in complex environments II


O 301

Choosing appropriate methods from the systems and complexity field for evaluations
R. Hummelbrunner 1
1 OEAR Regionalberatung, Graz, Austria

Friday, 5 October, 2012, 9:30 - 11:00

In recent years, interest in approaches from the systems and complexity field has been growing, but application often lags behind this interest or is done in a rather ad hoc way. Part of the problem is that the range of systems methods is so large that it is difficult to select methods appropriate to a particular situation. This poses a particular challenge for evaluators who are interested in using concepts from the systems field but lack in-depth knowledge or the time to investigate further. The issue has been taken up in a recent book by Bob Williams and Richard Hummelbrunner, Systems Concepts in Action: A Practitioner's Toolkit. It proposes using principles and methods as a guiding framework, based on three core systems concepts: interrelationships, perspectives and boundaries. These also reflect the main waves in the development of systems ideas over the past fifty years. Identifying appropriate systems approaches for a particular evaluation can be done either at the level of principles or at the level of specific methods or techniques. The current paper provides a heuristic for such a procedure, which foresees two distinct but inter-related steps. First, the three core concepts are explored in more detail and expressed through a set of guiding questions that can be applied without in-depth knowledge of systems concepts. These questions can be matched with the issues or questions that a particular evaluation tries to address. This permits identifying which of the systems concepts are of particular relevance and which aspects can be addressed. Second, if operating at the level of principles is insufficient for the task at hand, the utility of specific methods, methodologies and techniques from the systems field needs to be investigated. Again, this can be done via questions: various systems methods can be grouped around types of evaluation questions, which permits a more detailed consideration of the kinds of issues that particular systems methods address. Taken together, this heuristic can guide evaluation practitioners to choose systems methods that match their skills and the situation at hand, and to ensure that the selection of a particular method is based on the specific properties and aspects for which it is best suited. Furthermore, a question orientation encourages the use of multiple approaches and helps identify which systems methods, or even which elements of systems methods, suit the situation and intended purpose. This underpins a growing trend in both the systems and the evaluation fields to bring multiple methods and methodologies to bear on an individual inquiry. It also means that systems methods can and should be used alongside the other methods used in an evaluation. Keywords: Systems methods; Complexity; Multiple methods; Evaluation questions;
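As a toy illustration of the heuristic's first step (matching evaluation questions to the three core concepts through guiding questions), the sketch below uses simple cue words and example method suggestions. The cue lists and the concept-to-method mapping are illustrative assumptions, not the authors' toolkit.

```python
# Toy illustration of the heuristic's first step: matching an evaluation question to
# the three core systems concepts via simple cue words, then suggesting candidate
# methods. Cue lists and method examples are illustrative, not the authors' toolkit.
CONCEPT_CUES = {
    "interrelationships": ["cause", "feedback", "depend", "interact", "dynamic"],
    "perspectives": ["stakeholder", "viewpoint", "value", "interpret", "meaning"],
    "boundaries": ["scope", "include", "exclude", "marginal", "whose benefit"],
}

CANDIDATE_METHODS = {
    "interrelationships": ["causal loop diagrams", "system dynamics"],
    "perspectives": ["soft systems methodology"],
    "boundaries": ["critical systems heuristics"],
}

def match_concepts(question: str):
    """Return the systems concepts whose cue words appear in the evaluation question."""
    q = question.lower()
    return [c for c, cues in CONCEPT_CUES.items() if any(cue in q for cue in cues)]

question = "How do feedback effects between agencies shape outcomes, and whose viewpoint counts?"
for concept in match_concepts(question):
    print(concept, "->", ", ".join(CANDIDATE_METHODS[concept]))
```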

O 302

Level 3: The Food and Agriculture Organization of the United Nations' capacities and challenges in responding to mega-emergencies
N. Morrow 1
1 Tulane University, Public Health Law Social Work, New Orleans, USA

Humanitarian organizations increasingly assign a level to a crisis in order to activate pre-determined internal procedures and resources. The term Level 3 is used by the Interagency Standing Committee, the primary forum for the coordination of Humanitarian response by the United Nations and partners, to designate an emergency response of a scale that requires humanitarian organizations to leverage the full extent of their corporate capability. While the Food and Agriculture Organization of the United Nations (FAO) has responded to crises of this magnitude and will certainly be required to again in the future, the Organization until now has not had a process to declare a Level 3 emergency response, nor a complete set of procedures and mechanisms to support such a response operation. This paper presents a thematic review of FAOs capacities and challenges in responding to mega-emergencies that led to institutional change. A participatory approach to policy development was built around a thematic review of independent evaluations of FAO emergency response, systematic humanitarian after action reviews of recent responses, and more than 60 stakeholder meetings. The goal of the activity was to develop a corporate level standard operating procedure for FAO to enhance the timeliness, coordination and effectiveness of future responses to large-scale disasters and crises. Twenty volunteer staff formed the core of the policy and procedure development initiative. The team identified key functional areas to elaborate more standard procedures following guidance from ISO 9001 Quality Management Approach. Several key procedures were then elaborated for each functional area. Systems analysis was used to identify gaps in practice and make recommendations for enhanced capacity for Level 3 emergency response. The final output of the process was a FAO corporate standard operating procedure and policy statement endorsing the procedure by the Director General effectively institutionalizing the recommendations and identified better practice. Keywords: Thematic Review; Humanitarian Response; Food Security; UN Coordination;


O 303

Lessons from the Title II Food Security Multi Year Assistance Program (MYAP) final evaluation in Uganda
M. Oturu 1
1 ACDI/VOCA Uganda-LEARN, Monitoring and Evaluation, Soroti, Uganda


This paper presents findings of ACDI/VOCA's MYAP end-of-program evaluation. The purpose of the evaluation was to determine the relevance, effectiveness, efficiency, coherence and sustainability of the intervention. ACDI/VOCA implemented the MYAP program in partnership with Africare, the Lutheran World Federation (LWF), The AIDS Support Organization (TASO) and other local sub-grantees. The program objective was to reduce food insecurity and increase the nutrition status of 170,600 farmers, and to distribute supplementary food to 53,100 vulnerable people. TANGO International, an independent firm, conducted the final evaluation, which collected primary data from project participants and reviewed program documents. Although the findings show that a number of targets were achieved, there are also a number of lessons which can be useful when designing similar interventions in the future. Introduction: The food security status of households in Uganda is determined by the interplay of natural and human resources, the viability of livelihood systems and survival strategies, the politics of resource allocation and use, and the impact of development interventions. Sharp variations among microclimates profoundly shape the livelihood systems and strategies affecting household food security status in the program area. Methodology: The final evaluation comprised three elements: a quantitative survey of households within the program intervention area, focus group discussions with project participants, and interviews with staff of partner organizations and sub-grantees. The first two components were designed to obtain information about project outcomes and impacts within the population as a whole and from project participants in particular. The third component provided the evaluation team with primary information to assess program implementation and management issues. Findings: The program design did not adequately account for the transition process from living in camps to returning home; this affected program momentum and stretched staff resources. There was a delay in the start-up of program activities due to time-consuming proposal review, institutional vetting and contracting processes for the multiple sub-grantees required to meet the program targets. The relationship with Africare and LWF was not in line with the role commonly prescribed to MYAP partners, with little involvement by either partner in strategic and programmatic decision making. The internal ACDI/VOCA structure did not make efficient use of the various comparative strengths that the organization aimed to put to use for this program. Sub-granting can be a very effective channel for building sustainable institutional capacity if the institutional development of the sub-grantees is built squarely into program design and acknowledged as a program objective in its own right, with appropriate indicators. Conclusions: The program was able to achieve the broadest output targets; however, some targets, in terms of specific types of training, were not achieved. The outcome indicators show a general decline in food security conditions from the time of the baseline. The program has been quite effective in increasing households' participation in savings and credit groups, from less than 20 percent at baseline to almost 60 percent in the final survey round. Keywords: Abstract; Introduction; Methodology; Findings; Conclusions;
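The abstract reports participation in savings and credit groups rising from under 20 percent at baseline to almost 60 percent in the final survey. As a hedged illustration of how such a change might be checked statistically (not part of the evaluation as described), the sketch below runs a two-proportion z-test; the sample sizes and counts are hypothetical, since the abstract does not report them.

```python
# Hedged illustration: two-proportion z-test of the reported rise in savings-and-credit
# group participation (under 20% at baseline vs. about 60% at the final survey). The
# counts below are hypothetical placeholders, not the evaluation's data.
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Return z statistic and two-sided p-value for H0: p1 == p2 (pooled estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical counts consistent with the reported percentages (illustration only).
z, p = two_proportion_ztest(x1=120, n1=600, x2=360, n2=600)   # 20% vs 60%
print(f"z = {z:.2f}, p = {p:.4f}")
```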


S3-08 Strand 3

Paper session

Capacity Development: Learning from experience II


O 304

Building monitoring and evaluation culture in decentralised and networked governance: Case-based reflections from Finland
O. Oosi 1, N. Korhonen 1, R. Karinen 1, K. A. Piirainen 1
1 Ramboll Management Consulting, Helsinki, Finland

Friday, 5 October, 2012, 9:30 - 11:00

The research problem of this paper is to describe the adaptation of proposed M&E systems when they are built in decentralized and networked governance. The paper also has policy relevance, as it will enable the development of practical tools for building M&E systems in a decentralized administration. It is our practice-based finding that the best practices and conventions proposed in the literature for building M&E systems are difficult to implement in decentralized and networked governance, as the resources used for measures are difficult to distinguish, the commitment of the actors involved in the policy varies, and the goals of the policies are contested within the network. Both of the above-mentioned aspects are dealt with in the administrative and social science literature from a theoretical and conceptual perspective. Instead of taking a purely theoretical position, this article analyses the consequences based on three practical cases from different policy fields in a rather decentralised and networked administrative setup, namely Finland. The three cases are based on very recent developments in the field of high-level M&E: the Government's sports policy evaluation system; indicators for science, technology and innovation and related monitoring and evaluation activities; and, lastly, the monitoring and evaluation of migration, integration and ethnic relations. All three are examples of piloting a monitoring and evaluation culture with cross-sectoral reach, including policies and actions from various actors in Finnish society. In the Finnish case, results-based monitoring and evaluation appears to be a combination of concepts which, in this particular context, does not follow any particular school of practice developed by scholars of the subject such as John Mayne, Ray Rist, David Hunter or others; the motivation for launching the M&E exercises comes from performance management. The first part of the paper reviews the theoretical discussion regarding the launching of monitoring and evaluation systems in networked governance. It explores the current debate in the literature and points out the lack of connection between the performance management, evaluation and governance studies literatures. It sets the framework of the analysis based on the key fundaments of M&E approaches, the usual suspects among the noted challenges for building M&E (such as Mayne 2009) and, finally, the cultural aspects built into the tenets of these schools of thought, drawing on the work of Dvora Yanow and Frank Fischer. The empirical part of the paper analyses the three examples against the traditional view of the key components of successful M&E and the usual challenges pointed out in the literature. Based on these examples, the paper proposes that constructing meaningful M&E systems for networked governance requires: a) complex and unorthodox processes for stakeholder interaction in both building and utilising M&E information; b) weird ways to collect data and adjust it to the political debates within networked policy fields; and c) either deliberative ignorance or inadequate methods for applying input/financial information to these systems. The concluding part of the paper reflects on the approaches needed to tackle the particular challenges relating to M&E in networked governance. Keywords: Monitoring and evaluation; Evaluation culture; Network governance;

O 305

An Update of the International Atlas of Evaluation: A Comparative Perspective from a Decade later
S. Speer 1, S. Jacob 2, J. Furubo 3
1 Independent Evaluator, Wiesbaden, Germany
2 Université Laval, Département de science politique, Québec, Canada
3 Riksrevisionen, Stockholm, Sweden

Public administrations in modern welfare states have historically been exposed to similar pressures and influences. More and more evaluations are undertaken and, due to external and internal pressures, the institutionalization of evaluation has been strengthened. In many countries this has led to modernization processes at different levels of government, such as more widespread evaluation practices as well as evaluation functions. Some country studies have been published in recent years (e.g. Bussmann 2008, Leeuw 2009); however, systematic comparative research across countries is still at a relatively early stage. The International Atlas of Evaluation (Furubo et al. 2002) gives the first systematic comparative overview of evaluation cultures and the institutionalization of evaluation. Evaluation cultures were described and analyzed within a framework of selected indicators measuring nine dimensions. Now, a decade later, a comparison of these original findings with current national developments is undertaken. Our research looks into trends from the last decade. We present the results of a recently conducted expert survey with more than eighty evaluation experts in twenty OECD countries. The experts gave their views for their respective countries along the nine indicators, with additional explanations of changes, triggers, and utilization. The results will be presented and discussed. Reference: Furubo, J.-E., R. C. Rist and R. Sandahl (eds.) (2002), International Atlas of Evaluation, Transaction Publishers. Keywords: Institutionalization; Evaluation systems; Evaluation culture;
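The paper does not describe its analysis tooling. As an illustrative sketch only, the snippet below shows one way the expert ratings could be aggregated per country and indicator and compared against a 2002 baseline; the file name and column names are hypothetical assumptions.

```python
# Illustrative sketch only (not the authors' analysis): averaging expert ratings per
# country and indicator, then comparing against a 2002 baseline score. The file
# "expert_survey.csv" and its columns (country, indicator, rating_2012, score_2002)
# are hypothetical.
import csv
from collections import defaultdict
from statistics import mean

ratings = defaultdict(list)    # (country, indicator) -> list of expert ratings
baseline = {}                  # (country, indicator) -> 2002 Atlas score

with open("expert_survey.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = (row["country"], row["indicator"])
        ratings[key].append(float(row["rating_2012"]))
        baseline[key] = float(row["score_2002"])

for (country, indicator), values in sorted(ratings.items()):
    change = mean(values) - baseline[(country, indicator)]
    print(f"{country:15s} {indicator:30s} change since 2002: {change:+.1f}")
```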

O 306

Professionalizing monitoring and evaluation (M&E) training for international development program managers
C. Elkins 1
1 Belling the Cat LLC, Hillsborough NC, USA


Evaluation success is enhanced when program managers have designed and implemented interventions with sound understanding and practice of monitoring and evaluation (M&E) foundations. Increased recognition and promotion of independent evaluations importance, however, threatens to drown out the essential importance of internal project M&E basics such as intervention theory, results measurement, and evidence-based management decisions. The best intentions of international development managers do not replace internal program design fundamentals, many of which are bundled with core M&E principles that entail use of sound theory in problem analysis and intervention design, and integrated measurement systems with valid and reliable M&E elements. This paper presents lessons learned from experience teaching M&E to international development professionals within a mid-career graduate program. Teaching professionals fundamental strategic design, pragmatic and cost-effective measurement methods, and all of the nuts and bolts of project results measurement systems, is a major challenge within the time limits of a workshop or other training event. Covering everything professionals need to know is difficult even during a single semester, and is actually more complicated when the students themselves are diverse mid-career professionals with some experience in international development. A background outline of appropriate M&E system design and related course content for graduate student consumers will be discussed first, with a variety of tested semester curriculum designs then explored. Additional depth beyond the scope of a single-semester course draws on development of a graduate level M&E textbook. Finally, recommendations for course development according to different student categories and certificate or degree context are presented. Elements include: The full scope of a properly integrated project M&E system design The context, demands, and constraints of teaching mid-career students or professionals Course topics and approaches (certificate; degree) Alternate semester designs Recommendations and further questions to explore Keywords: International development; Evaluation training; Monitoring and evaluation (M&E); Professionalization;


S1-15 Strand 1

Paper session

Social networking, network associations and evaluation II


O 307

Evaluation in the context of a network-like association


L. Soberon Alvarez 1
1 Pontificia Universidad Católica del Perú, Ciencias Sociales, Lima, Peru

Friday, 5 October, 2012, 9:30 - 11:00

In this paper I discuss my experiences of evaluations carried out in Peru (South America) with regard to programs designed and implemented by network-like associations built by civil society organizations in order to promote overarching development goals and public policy change. The network-like association is intentionally formed by a group of independent organizations, with formal objectives and regulations, and in some cases is legally constituted. Most are composed solely of organizations, but there are also cases of mixed composition including organizations and individuals. In recent years, some of these network-like organizations have been formed under the influence of international financing institutions from the North. The network-like organization is considered a strategic means to increase the resources, capacities and power needed to effectively produce the expected changes. In evaluating the programs designed and implemented by these network-like organizations it is important to look at their outcomes and impacts, but also at their networking work, internally with respect to the relationships among their members, and externally with regard to the wider local, national and international communities and actors. Drawing on several evaluation experiences, I present and discuss a set of methodological observations relevant to the field of evaluation, combining three lines of analysis: organization, network and system analysis. Keywords: Network; Networking; Systems; Evaluation;

O 308

Creating an evaluation association: the case of Albania


F. Luli 1
1 Albanian Association of Program Evaluation (AAPE), Shkoder, Albania

Today, Albania is a parliamentary democracy with a free market economy which has started the transition to democracy in 1991. Even severe internal problems lies ahead for Albania, the country is progressing to the EU standards. In this context, the development of evaluation becomes a condition and an evidence of good governance, transparency and accountability. Supporting in such a context the emergence of an association in evaluation, the Albanian Program Evaluation Society (APES), is a big challenge. The overall aim of this presentation is to describe the context of bringing the APES into life, and to diagnose and analyse the main strategies used, the challenges faced and the first results obtained. A case study approach was used to gather an in-depth understanding of the implementation, functioning and growth of APES and to evaluate some of related issues in the country. In a few words, the first step was taken in July 2011 when nine professionals created an informal network and discussed the need for and ways to set up a national society. Then the purpose of the future society was clarified and the reasons for establishment identified. When a wide consensus was reached, the group embarked on a process of formalization. Legally established in October 2011, APES develops its first strategic plan focusing on the need to develop and institutionalize evaluation in Albania. Finding the right way is a major issue considering that the establishment of the APES is stimulated neither by Albanian government, nor by local academics or international donor organizations. In addition to this, evaluation in Albania is not yet established as an academic discipline, no policy on evaluation exists just like there are no accreditation criteria for evaluation training and practice. In this context, the presentation demonstrates how the international cooperation can be useful in strengthening the organisational and structural capacities of APES. For our small group facing this task, the support of external partners and inter-organizational linkages is still crucial. Valuable support is actually received from SQEP, NESE and IOCE. Beside these challenges, this presentation captures several topics relevant to the organizational capacity: structure and resources. Aspects such as leadership, membership, services, finances and human resources are especially relevant once our organization needs to formalize its activities and structure. In the same order of ideas, through an analysis of the APES logic framework, this presentation sheds light on some essential factors for the national development strategy of our evaluation society and for program evaluation in Albania. Establishing and maintaining an evaluation association requires time and patience and we agree with the assessment that slow, step-by-step success is always better than fast failure in order to build an evaluation community with a clear-defined statement of vision and mission shared by the founder group. Finally we will appreciate to share our experience with other national evaluation. Within a participatory paradigm, this exploration also reveals the need for the communication services to set aside the assumption that the sharing of information and exchange of methods, approaches and lessons in evaluation is an important and essential element of strengthening national capacities. Keywords: Albania; Program;


O 309

Evaluation of aid effectiveness through civil society networks: challenges in Nigeria


T. P. Adesoba 1
1 Girls To Women Research & Development Centre, Monitoring Evaluation & Reporting, Ado Ekiti, Nigeria


Introduction: Networks are well respected associations of civil societies, especially Non-governmental Organizations, in Nigeria. The recognition possessed by Networks in Nigeria is a huge one such that donors find them the most dependable body to entrust with huge funds because they are not built around an individual. Networks are usually formed by the coming together of organizations that share like thematic areas. Examples are the CiSHAN (Civil Societies in HIV/AIDS in Nigeria), AONN (Association of Orphans & Vulnerable Children Network in Nigeria), TB (Tuberculosis) Network among others. Members of these networks share same thematic area and have same goals. These networks when given grants to implement projects disburse the funds among member organizations within their network. Description: One identified challenge confronting civil society networks in Nigeria is that many do not have the capability for impact evaluation and therefore do not participate efficiently in determining aid effectiveness. This may be because the donors do not actively involve them in the evaluation of their interventions in Nigeria and also member organizations do not possess the necessary capabilities for evaluation. The determination of aid effectiveness should be anchored by civil societies because they will likely be more forthright and unbiased in their judgement added to the fact that they are closest to the grassroots where the projects are implemented. Lesson learnt: Aid effectiveness is difficult to determine in the absence of participatory evaluation of donor-funded projects actively engaging civil society networks. Civil societies that are seen as non-partisan, non-political and entrusted with funds for development projects are very relevant in the impact evaluation of development projects. Donor organizations should prioritize the involvement of civil society networks in the evaluation of their projects and strengthen the capacities of these networks in that regards to identify gaps in aid spending and provide necessary information for the future of such projects or their replicability. This process makes evaluation transparent, indigenous and entrenches community ownership of projects which is a basis for sustainability. It also permits local Civil Societies to provide useful information related to the prevailing socio-economic and political situation of projects site which an external evaluation consultant may not know. On the contrary, the process may be economically demanding as more participants in the evaluation process could result in more spending. However, the pros outweigh the cons. Recommendation This paper therefore proposes that greater involvement of civil society organizations in evaluation of donor-funded projects will help to transparently determine the relevance, efficiency, effectiveness, impact and sustainability of development interventions in Nigeria. This is can be achieved if donor agencies expand the scope of their Monitoring/Evaluation Capacity Development programmes for local partner Civil Society Organizations beyond the usual formative/process evaluation which are applicable only during the project life to impact evaluation which is relevant long after the project ends.This also suggests that donors should maintain (or continue to maintain) long-term relationships with local Civil Society Organizations if they are to be involved in Impact evaluation. Keywords: Aid Effectiveness; Evaluation; Network; Capacity Development;


S4-14 Strand 4

Paper session

Food security and livelihood protection evaluation


O 310

Effects of Marginalization on Effective Project Implementation. A case of Drought Resilience projects among Pastoralist and Agro pastoralist communities, Kenya.
A. Jaboma 1, D. Kinda 2
1 University of Nairobi, Extra Mural, Nairobi, Kenya
2 UCAD Dakar, Development Studies, Dakar, Senegal

Friday, 5 October, 2012, 9:30 - 11:00

Lack of recognition or giving voice to the women and youth in the community is demoralizing positive outcomes and efficiency in project implementation affecting desired impacts and results. Policies that advocate for increased integration of the marginalized are integral for development. This study seeks to explore the effects of marginalization on effective project implementation and this is informed by the fact that in Tana River the inadequate involvement of the youth and women and cultural conservatism among others conspire to hinder the success of projects hence ineffectiveness. From previous evaluations a number of projects have been affected by marginalization during their implementation in Tana River. These includes inadequate funds, inadequate capacity of implementing agencies, the implementation of monitoring and evaluation process, inconsistency in doing projects and lack of baseline data. These factors have led to stagnation of projects, existence of ghost infrastructures in the rural areas and resource wastage. In most instances when rural projects are being designed programmes involving the marginalized are under represented or underfunded. Sustainability is left out especially in emergency interventions involving Food for Work, Food for Assets and Cash for Work programmes and this leads to undesired outcomes such as incomplete projects. Rapid rural appraisals and end of project evaluations have been conducted in the whole of Tana River County and some lessons have been drawn. These lessons if implemented at policy level will enhance development and address marginalization to promote effectiveness. This study therefore seeks to determine the relationship between Marginalization the independent variable) and effective project implementation (the dependent variable) and measures to be put in place to mitigate ineffective project implementation. The methodology to be adopted will be case study design and the data collection methods which will be used to investigate these factors include face to face interviews, focused group discussions, use of key informants and literature reviews. A critical analysis of the triple roles of women in these communities and the challenge in balancing these roles will be done. This paper will also discuss how cultural conservatism hinders effective project implementation, how to incorporate lessons learnt from the application of various coping strategies that the women from the agro pastoralist and the pastoralist communities have adopted among themselves to overcome the harsh climatic conditions in the semi arid regions and how these coping strategies create a positive impact on drought resilience projects being implemented in Wenje division indirectly. Keywords: Marginalization; Pastoralism; Effectiveness;

O 311

Impacts of Integrated Conservation and Development Projects on Livelihoods: Evidence from the Uluguru Mountains Environmental Management and Conservation Project, Tanzania
N. M. Kuboja 1, V. G. Vyamana 2, S. Ngonyani 3
1 Ministry of Agriculture, Food Security and Cooperatives, Research and Development, Dar es Salaam, Tanzania
2 Sokoine University of Agriculture, Forestry Biology, Morogoro, Tanzania
3 Sokoine University of Agriculture, Development Studies Institute, Morogoro, Tanzania

For the past three decades, both Integrated Conservation and Development Projects (ICDPs) and Microfinance Institutions (MFIs) have been promoted worldwide as means of achieving the dual objective of conservation and improving the livelihoods of communities adjacent to protected areas. The Uluguru Mountains Environmental Management and Conservation Project (UMEMCP) is a typical ICDP integrating Village Savings and Loan (VS&L), implemented in Tanzania between 2004 and 2010. This paper presents the results of an impact evaluation study conducted in 2009 in four of the nine project-participating villages to investigate the impacts of UMEMCP on the livelihoods of different well-being groups within communities. Qualitative data were collected using participatory rural appraisal (PRA) tools administered at village meetings and separately with women's, men's and youth groups as appropriate. Participatory wealth ranking (PWR) was conducted prior to the selection of households for the household survey to establish wealth profiles using community-defined criteria and indicators. Quantitative data were collected using a structured questionnaire administered to a total of 242 households, selected using a stratified random sampling method from three community-defined wealth categories [poor, less poor and non-poor]. PRA data were analysed thematically with the help of villagers in each community. Validation was performed through triangulation, ensured by judicious use of various PRA tools, which inevitably led to some overlap between the tools. Questionnaire data were analysed using the Statistical Package for the Social Sciences (SPSS) Version 12.0 to provide both descriptive and inferential statistics. UMEMCP was found to improve the wealth status of participating community members: the proportion of non-poor households increased slightly from 4.1 % before the project to 9.5 % after the project; correspondingly, the proportion of less poor households increased significantly from 18.2 % to 35.5 %; and there was a parallel significant reduction in the proportion of poor households from 77.7 % to 55.0 %. The proportion of participating households engaged in income generating activities (IGAs) increased from 17.6 % to 23.1 %, for which a chi-square test of independence showed a significant association between engagement in IGAs and participation in UMEMCP-promoted VS&L. Household incomes generally increased slightly after UMEMCP for all wealth categories, with the rate of increase being highest for the less poor (a 245% increase) and lowest for the poor (a 122% increase). However, institutional obstacles requiring an upfront contribution of shares prevented the poorest from taking full advantage of the benefits of VS&L and its associated IGAs. Overall, the results suggest that UMEMCP has allowed a few of the poor to build pathways out of poverty, but there is still scope for improvement in how to target the poorest. A strategy that provides matching funds to deliver loans to the poorest, without necessarily requiring an upfront contribution of VS&L shares, would make the difference.


Keywords: Protected areas management; Livelihoods; Income generating activities;
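The chi-square test of independence mentioned in the abstract was run in SPSS. As a hedged illustration of the same test, the sketch below uses SciPy on a hypothetical 2x2 table of VS&L participation against IGA engagement; the counts are placeholders, not the study's data.

```python
# Hedged illustration of the chi-square test of independence mentioned in the abstract
# (VS&L participation vs. engagement in income generating activities). The 2x2 counts
# below are hypothetical placeholders, not the study's data (SPSS was used there).
from scipy.stats import chi2_contingency

#                engaged in IGA   not engaged
observed = [[45,              80],    # VS&L participants   (hypothetical counts)
            [12,             105]]    # non-participants    (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```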

O 312

Reforming Evaluation to Improve Adaptation In African Agriculture


A. Achonu 1, O. Kinda 2
1 Columbia University, New York, USA
2 Université Cheikh Anta Diop, Dakar, Senegal


Bio: Kinda Ousseni Kinda studies the issues of sustainable development and is interested in monitoring and evaluation. He has acquired professional experience in monitoring and evaluation and participated in field research. He is a member of African Evaluation Association and the West African representative of youth for Africa Gender and Development Network Evaluators. Audrey Achonu Audrey studies issues relating to the evaluation of economic growth initiatives, with an interest in sustainable development. Her focus and interest in evaluation are vis a vis its ability to address the issues of information asymmetry in development and informing policy decisions. Climate change is a major threat to sustainable growth and development in Africa. The continent is expected to suffer from the anticipated reduction in agricultural output, worsening food security. The impact on agricultural output will vary from country to country, with the IPCC projecting reductions in yield in some countries of as much as 50 per cent by 2020, with small scale farmers most vulnerable (Boko et al., 2007). While there are many causes of such low productivity, those associated with climate change are the most striking. Adaptation initiatives are implemented in many countries to help face the challenges of uncertainty in agricultural yields. However, the outcome of this adaptation depends on the capacity to adapt to change. The adaptation initiatives on which this paper is focusing are the Program of Adaption in Africa, the Programme of Climate Change Adaptation in Africa, and the National Action Programmes for Adaptation. The challenge for the efficient and effective evaluation of adaptation initiatives to climate change is to ensure that the prospective benefits of interventions are being realized and to help improve the design of future interventions. Recent evaluation approaches and tools have not generated credible evidence of solutions to the issue of adaptation of agricultural systems to climate change in Africa. In the evolving field of climate change where the approaches to adaptation are likely to vary overtime, with the evolution of certainty and/or uncertainty about risks, impacts and solutions, it is challenging to frame and design evaluations effectively. Studies (World Bank, 2009, 2010; N. Beaudeliau, 2010; D. N. Barton, 2010; N. Lamhauge et al., 2011) highlight a combination of challenges (ambiguous definition of adaptation, shifting baselines, attribution, and time lags between interventions and outcomes) that affect evaluation frameworks for adaptation, yet recommend that evaluation be an integral part of climate change adaptation. Beyond theoretical considerations and assumptions, substantive issues remain. How should evaluation be conducted in order to meet the required quality standards and its intended role? This paper examines evaluation as a means to strengthen climate change adaptation capacity. Knowing that context matters, the paper will survey the options that could enhance methods and approaches in order to enable managers and actors to take informed decisions and plan strategically. It focuses on a literature review of methods and approaches from evaluations of agricultural adaptation to climate change and highlights the main challenges of dealing with uncertainty. The paper also refers to the three selected adaptation initiatives in Africa to demonstrate how the changing field of climate change affects evaluation, its consistency and its use. 
Internal and external validity of evaluation, knowledge management from evaluation, required resources and nature of stakeholder participation in this context are also considered in the study. The paper finally addresses the solutions that would improve evaluations in the context of climate change adaptation helping thus to increase agricultural productivity and food security. Keywords: Africa; Agriculture; Climate Change; Adaptation; Evaluation;


S2-38 Strand 2

Panel

Evaluating conferences and events: new approaches and practice


O 313

Evaluating conferences and events: new approaches and practice


G. O'Neil 1, L. Lienart 2, J. Morell 3
1 Owl RE, Commugny, Switzerland
2 International AIDS Society, Planning Monitoring and Evaluation, Geneva, Switzerland
3 Fulcrum Corporation, Evaluation, Ann Arbor, USA

Friday, 5 October, 2012, 9:30 - 11:00

An estimated 150 billion US dollars is spent every year on organizing conferences and events in the US alone. However, organizers of conferences and events rarely measure their performance and impact on participants and beyond. This roundtable will provide an overview of evaluating conferences and events using two case studies of conference evaluation: the International AIDS Conference and the Lift Technology Conference. The presentations will examine the methods and approaches used for conference evaluation, including analysis of monitoring data, focus group interviews, media analysis, follow-up surveys and action plans. Laetitia Lienart will lead the session and present a case study from the International AIDS Conference. Jonathan A. Morell, Ph.D., will discuss and present his perspective on conference evaluation. Glenn O'Neil will present a case study from the Lift Technology Conference. Keywords: Conference evaluation; Event evaluation; Performance;


S4-32 Strand 4

Panel

Comprehensive evaluation
O 314

Comprehensive Evaluations of International Organizations


R. D. van den Berg, C. Heider, B. Picciotto, D. Poate, I. Naidoo, E. Stern
The development effectiveness of multilateral institutions and global/regional collaborative funds and programmes has come under sharper public scrutiny. Shareholders and donors are increasingly commissioning comprehensive evaluations of international organizations and multi-donor initiatives to assist in their decision-making regarding resource allocation and funding commitments. At least ten such evaluations have been undertaken in the last ten years at a total cost of over $20 million. These evaluations have generated credible summative judgments about the effectiveness of international institutions as well as formative lessons of experience. However, not all international institutions or multi-agency collaborative initiatives are regularly evaluated, and when they are, the methods and processes used have not always been as independent, rigorous or consistent as would be desirable. While common principles and methods are available for evaluating development policies, projects and programmes, no such best practices are available for deciding how organizational mandates should be evaluated or on the scale, scope, management, implementation, oversight and follow-up of comprehensive evaluations. The international community also lacks institutional mechanisms through which the lessons from comprehensive evaluations transcend the institutions concerned and inform the aid architecture. The current coverage of international institutions through these evaluations shows gaps in evaluative evidence. It is inadequate for identifying a solid basis for reform of the UN and the IFIs or for guiding implementation of the framework of the Accra Agenda for Action and the Busan meeting follow-up. Lastly, comprehensive evaluations often take place in organizations or programmes that have no strong and independent internal evaluation functions. This panel will discuss recent work on comprehensive evaluations of international organizations, drawing on the deliberations of a group of concerned evaluators who met in Paris in June 2012. The panel will engage with the audience on potential lessons learned and best practices, as well as possible ways forward. Rob van den Berg, Bob Picciotto, Derek Poate and Elliot Stern have been actively involved in several comprehensive evaluations, while Caroline Heider, as Director-General of the Independent Evaluation Group at the World Bank, has an evaluative overview of whether and how global initiatives are being evaluated. Indran Naidoo, Director of UNDP's Evaluation Office, will provide a UN perspective on the issues raised. The chair will provide an overview of the Paris workshop deliberations and sketch a road map for the future of this initiative. The discussion will focus on a series of issues, with short contributions from panel members on each issue and involvement of participants from the audience.

Friday, 5 October, 2012, 11:15 - 12:45


S1-28 Strand 1

Panel

Evaluation in Turbulent Times


O 031

Evaluation in Turbulent Times


J. Furubo 1, F. L. Leeuw 2, S. Speer 3
1 Riksrevisionen, Stockholm, Sweden
2 National Research Center for Security and Justice and University Maastricht, Den Haag and Maastricht, Netherlands
3 Independent Evaluator, Wiesbaden, Germany

Friday, 5 October, 2012, 11:15 - 12:45

The political turmoil and economic turbulence of the past five to eight years have unsettled individuals, groups, communities, and even whole countries. The evaluation community can be misled by assuming that we are still in a time where incremental approaches to knowledge generation, knowledge transmission, and knowledge utilization hold true. Turbulence undercuts key assumptions. After general reflections on shifting roles for evaluation, two examples from fields connected to turbulence will be discussed. The session will be chaired by Ray C. Rist, who will give the background and an overview of the topic, as well as show the context and relation to other research not presented here. Jan-Eric Furubo (National Audit Office, Sweden) will analyze the relationship between evaluation and turbulent times. Turbulent times are here not understood solely as the financial and governmental crisis, but reflected upon from a wider perspective, in the sense of fundamental change as opposed to stable times. First, he discusses the role of evaluation in learning how to handle crises; second, the role of evaluation in learning how to prevent, and mitigate the effects of, catastrophic events. He then explains the relationship between turbulent times and policy shifts; turbulence can, however, also be caused by factors other than crises. He concludes by further investigating the different roles evaluation has in turbulent times. Frans L. Leeuw (National Research Center for Security and Justice and University of Maastricht, Netherlands) analyzes the role of evaluation in the area of counter-terrorism. Counter-terrorism, like other policies, has to do with predicting behavior. With reference to Pawson, he emphasizes the importance of engaging in ex ante program theory evaluations. He discusses, and gives examples of, how theories underlying counter-terrorism can be identified and checked, also highlighting the importance of mechanism experiments. Such experiments do not test a policy, but the causal mechanisms on which the policy is based: instead of constructing and testing a policy, it is possible to test the mechanism through different experiments. Sandra Speer (Independent Evaluator and University of Koblenz-Landau, Germany) describes the role of evaluation in the field of financial education programs. Her point of departure is that a growing number of such programs have been launched in order to mitigate the effects of financial crises or to prepare citizens for them, and earlier initiatives have been institutionalized. She points out that many evaluations conducted regarding financial literacy do not actually test the underlying program theory, although she reviews and identifies central theories in the field. An important point in this context is the relation between information, choice architecture, regulation and the programs. Keywords: Crisis; Turbulence; Theory-based evaluation;


S5-13 Strand 5

Paper session

The influence of (New) Public Management Theory on Evaluation


O 316

Rationalisation in the Flemish public service organization: crisis as burden or beacon for more efficiency and effectiveness?
D. Verlet 1, G. De Schepper 2
1 Research Centre of the Flemish Government, 1000 Brussels, Belgium
2 Flemish Government, Department of Administrative Affairs, 1000 Brussels, Belgium

11:15 – 12:45, Friday, 5 October, 2012

The pursuit of good governance in our society is not new. In the wake of the New Public Management debate, there was and is a renewed interest in good governance, and several concepts and mechanisms from the private sector and the market were introduced into the public sector (Hood, 1991; Kettl, 2002). This trend was driven by political and ideological changes, claiming that government should be run like a business. It also led to renewed attention to what good governance is about. Good governance was introduced in the Flemish administration from 2000 onwards, following New Public Management principles. The Better Administration Policy started in 2000 and was implemented in 2006. The idea was a complete rupture with the past, in terms of organisational structure (including the creation of departments and agencies working together in policy domains) and processes. In spite of an identical structure in each policy domain, an amalgam of more than 70 entities was created, including several specific structures. In 2006, the Flemish Government embarked on an ambitious project: Vlaanderen in Actie (ViA), Flanders in Action. Flanders resolutely directed its attention farther into the future, towards 2020: Flanders must assume its rightful place among the very top regions in Europe, economically, socially and on the ecological plane. At the same time, the OECD published a report comparing human resources management in different public administrations in Belgium and reflecting a lack of efficiency and effectiveness in general. This provoked quite some reaction from the Flemish Government, which responded immediately by taking several measures, likewise resembling unguided missiles, for the short and medium term. The financial-economic crisis, from 2008 onwards, pushed the Flemish Government into a vast multi-annual programme (MAP) for its public service administration. At the same time, organization-wide budget cuts were introduced by the Flemish Government, and less money was to be spent by every administrative entity. Despite these measures, no major cuts, reshuffles or mergers of entities were introduced in order to rationalize in terms of work or budget. One of the key projects within the MAP consists of the rationalisation of the so-called management supporting functions, which nowadays account for a sizeable share of posts (15 %); the aim is to reduce these kinds of functions (inter alia catering, logistics, accountancy) to a maximum of 10 % of the totality of posts in the Flemish public service administration by the end of 2014. The success of this project could trigger future rationalization projects at the structural and organizational level. In the paper we focus first on the theoretical background of the several concepts central to this paper and their definition (performance, efficiency, effectiveness, (principles of) good governance). Secondly, we study their possible and actual operationalisation within the context of the Flemish government. In this context, the opportunities and difficulties concerning the definition and measurement of efficiency and effectiveness in the context of good governance will also be discussed. Keywords: Rationalisation; Efficiency; Effectiveness;

O 317

How sure is the foundation of the RGPP in France? A preliminary theory-based evaluation of social protection services
V. Ariton 1
1 Romanian Academic Society, EU Funds, Assistance and Development Policies, Bucharest, Romania

The General Public Policy Revision, also known as La Révision Générale des Politiques Publiques (RGPP), is a New Public Management (NPM) type of reform that was initiated in France in 2007. As a complex framework that revises the policy process, it touches all the ministerial layers and has as its objectives to cut public expenditure, to modernize the public administration and to increase the quality of public services. However, public expenditure in France increased from 52.7 % of GDP in 2007 to 56.6 % in 2010 (Eurostat), the social security funds' debt increased from 53.557 million euro in 2007 to 175.601 million euro in 2010 (Eurostat), and the quality of public services has been addressed mainly by implementing several one-stop offices (guichets uniques). What is the theory underlying the RGPP program in France? What are the assumptions? How do the theoretical solutions respond to the practical problems? Is the actual causal chain correctly specified? The main rationale of this paper is to open the black box of one of the most complex policy reforms in France and investigate its foundations. The paper aims at offering a preliminary answer to the relevance criterion of a future ex-post assessment of the RGPP policy in France and is an argument for theory-based evaluations. The research is based on a case study and, given that 41.4 % of public expenditure in France is directed towards social protection (Ministry of Budget and Public Accounts, 2012), the Ministry of Solidarity and Social Cohesion is the best candidate for such an investigation. This paper falls within two categories of debate. One is methodological and has developed inside the evaluation expert community. The other is theoretical and has developed inside the NPM school of thought. The first draws on theory-based evaluation and its importance (Weiss, 1995; Stame, 2004) and the other builds on the debate regarding the impact of NPM reforms in Europe (Pollitt 1995; Walle & Hammerschmid 2011; Pollitt and Dan 2011).

In the first part, the paper will look at the main assumptions of the RGPP and the link between the stated problems and the main policy actions. The second part of the paper will investigate both categories of debate: the one that deals with theory-based evaluations and their importance, and the one that concerns the impact of NPM reforms in Europe. For the latter, the paper will draw on the findings of the EU 7th Framework project Coordination and Cohesion in the Public Sector of the Future, which deals with the impact of NPM reforms in 10 European countries. The third part of the paper will present an in-depth assessment based on the case study. This part will mainly draw on the interview rounds. The investigation will look at stated assumptions and find out the degree to which they respond to the actual state of affairs. Finally, the paper will state its main findings and provide a list of questions which could be relevant for a greater ex-post assessment of the RGPP in France. Keywords: New public management; Theory-based evaluation; France;

O 318


Towards contextual monitoring and evaluation of empowerment in a development NGO


T. Kontinen 1, T. Järvinen 2
1 University of Jyväskylä, Department of social sciences and philosophy, Jyväskylä, Finland
2 World Vision Finland, Helsinki, Finland

Non-governmental organizations (NGOs) engaged in development co-operation have encountered increasing demands to show the outcomes and impacts of their work. These pressures have been posed by donors striving for results- and evidence-based management. Additionally, the organizations themselves have expressed a growing aspiration to learn about what works in their development efforts. The managerial pressure, however, has emphasized measurable indicators as central tools for monitoring and evaluation of NGO work. This, in turn, has led to a growing body of critical observations on tensions between the existing tools and the actual objectives of organizations. Empowerment, for example, is an outcome and impact aspired to by a number of development NGOs, but the methods and tools available to capture evidence of empowerment have been experienced as insufficient. In order to address this challenge, some NGOs have started to develop monitoring and evaluation methods, sometimes in collaboration with researchers. In this paper we will describe the first steps of such an effort undertaken in World Vision Finland starting in January 2012. Rooted in the approach of realistic evaluation, the development of new methods in the organization started with the identification of Finnish, Kenyan and Ugandan practitioners' programme theories and related conceptions of mechanisms of empowerment. In this paper we report the results of the first phase of the research project, which aims at developing a monitoring and evaluation method for empowerment contextualized in the organizational and societal contexts in which the particular organization and its partners work. Kontinen (PhD) is a University Lecturer in social sciences specialised in civil society and NGOs in development, and Järvinen (PhD) is a programme director at World Vision Finland who wrote his PhD dissertation on empowerment in development NGOs. Keywords: Empowerment; NGOs; Development co-operation; Realistic evaluation;

W W W. E U R O P E A N E VA L U AT I O N . O R G

ABSTRACT BOOK

217

TH E 10 T H EES BIENN IAL CON F EREN CE 3 5 OCTOBER, 2012, HEL SINK I, FIN LAN D

S3-09 Strand 3

Paper session

Capacity Development: Learning from experience III


S3-09
O 347

Evaluation capacity development in networked societies


K. von der Mosel 1, S. Krapp 2
1 Federal Ministry for Economic Cooperation and Development (BMZ), Germany
2 Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), Germany

11:15 – 12:45, Friday, 5 October, 2012

Results-based management, an evaluation culture and evidence-based policy making are largely absent in most partner countries. To support countries that are interested in strengthening their evaluation capacities and that are open to the discussion of critical evaluation results and their consequences for reforms of political practices, the German Federal Ministry for Economic Cooperation and Development (BMZ) has established a fund to support Evaluation Capacity Development (ECD) programs. A first program is being implemented in Costa Rica and another one is about to start in Uganda. Both programs are demand driven, as both countries have explicitly asked for ECD support. The presentation will introduce the two programs, their objectives, approaches and activities, and it will discuss first lessons and results.

Costa Rica has a favorable political framework and an understanding of the relevance of evaluation for the well-being of the country and its people, and it has successfully anchored institutional monitoring and evaluation mechanisms since the 1990s. Other Central American countries have also demonstrated interest in strengthening their M&E capacities and systems, but the political and institutional frameworks and conditions have so far been less favorable than in Costa Rica. However, it is envisaged that results and lessons of the Costa Rica ECD Program will benefit other countries in the region, also beyond Central America. GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit) is implementing both the Costa Rica and the Uganda programs in cooperation with country partners on behalf of BMZ. Components and related activities of the Costa Rica ECD Program are as follows:
Human resources development: strengthening of educational and training structures; development of individuals' competencies.
Organizational development: strengthening of institutional evaluation structures and functions; support for an evaluation and learning culture within organizations.
Cooperation and network development: establishment of a regional inter-institutional evaluation platform; strengthening of existing evaluation associations and networks.
Systems development in the policy field: strengthening of the role of evaluation in the national M&E system; strengthening of the role of evaluations in policy making; provision of support to the development and implementation of evaluation policies and legislation.

Uganda has a clear commitment to pro-poor growth and sustainable development as well as a strong orientation towards results-based management and evidence-based policy making. It has therefore asked for German ECD support and will be the second partner country of the BMZ ECD Fund. Realizing that only about 10 percent of public investments are being evaluated and that the majority of those evaluations have been commissioned and managed by development partners, the Ugandan Government has established an Evaluation Facility to commission and conduct evaluations. In support of this Facility, the ECD Program aims to strengthen the evaluation skills of Ugandan civil servants and to increase the number of qualified evaluation specialists in the East African region. Activities include the establishment and rolling out of an internationally approved evaluation course which reaches out to people across Sub-Saharan Africa and focuses on blended learning (combining onsite and online capacity development initiatives), as well as the provision of support in the conduct of evaluations.
Presenters' bios: Katrin von der Mosel: Master's degree in Nutrition and Nutrition Economics and Diploma in Educational Theory, University of Bonn, Germany; over 20 years of experience in international cooperation with the United Nations (14 years), the private sector (five years) and bilateral government (two years) in leadership, advisory and management positions. Positions included: Senior Evaluation Officer, BMZ (2010–2012); Chief, United Nations Volunteers Evaluation Unit (2007–2009); Evaluation Officer, World Food Programme (2005–2006); Performance Measurement Analyst, WFP (2003–2004); Programme Adviser, Strategy & Policy Division, WFP (2001–2002); Head, Planning and Programming Section, WFP Bangladesh (2000–2001); Head, Vulnerable Groups Development Programme Section, WFP Bangladesh (1998–1999); Programme Officer, WFP China (1996–1997); and Expert, Agriculture and Food Consultants International (1991–1995). Areas of expertise include evaluation, monitoring, results-based management, project design and knowledge management. Technical areas of competence include conflict prevention and peace building, humanitarian assistance, food security, health, education, capacity development, volunteerism and gender. Dr. Stefanie Krapp: Sociologist; has worked as Assistant Researcher at the Department of Sociology at the University of Koblenz-Landau; as a freelance consultant for German development projects, mainly in Egypt and South East Asia, developing and implementing M&E systems and carrying out impact evaluations; and as Assistant Researcher at the Center for Evaluation at Saarland University, focused on the evaluation of projects in the fields of education, vocational education and international cooperation and on developing and conducting trainings in evaluation; here she also received her PhD in Sociology. For one and a half years she advised the German Development Service in Labour Market and Vocational
Education Research in Laos (2006–07); after that she was an Integrated Expert at the University of Costa Rica in M&E for CIM-GTZ, a German development organization (2008–2010); since April 2010 she has been a Senior Evaluation Officer at GIZ headquarters in Germany. Keywords: Evaluation Capacity Development; M&E capacities; Central America; Evidence-based policy; Professionalization of evaluation; Development cooperation;


O 348

How to strengthen the policy use of evaluation in public sector development organizations and programs in Pakistan
N. A. Khan 1
1 Renova Development Evaluation Consulting, Evaluation, Islamabad, Pakistan


The presenter is a development evaluation practitioner with more than 24 years of varied management and evaluation experience. Over the years he has worked as an M&E specialist with the United Nations system, the Planning Commission of Pakistan, ADB, DFID, SDC, IUCN, DHV and NGOs. His recent work includes program evaluations for UN Women Kenya, UNDP and UNODC Pakistan, UNDP Kosovo, ADB Pakistan and NGOs. This paper will focus on ways to deal with the prevailing weaknesses in the evaluation of public sector development programs, and will suggest measures to further strengthen and improve the policy use of evaluations. Experience shows that the practice of impact evaluation and its use in policy making in public sector organizations and programs in Pakistan has always remained weak and faces a number of issues and challenges. There is a strong need to inform the policy formulation process by learning from the impacts of past endeavors. The evaluation agenda needs to be pursued rigorously, holistically and independently, by institutionalizing the evaluation function at the policy level. There is a need to endorse the evaluation mandate and devise a comprehensive framework at the highest (national) level to guide and consolidate impact evaluation to inform policies. There is also a strong need to design development programs with detailed evaluation frameworks, consisting of indicators, baselines, targets, data gathering and analysis processes, and reporting and dissemination mechanisms. Practice suggests that there has been a considerable scarcity of professional capacities and expertise in impact evaluation in the public sector. These capacities need to be built at the policy and program level to recognize and use evaluations for program improvement and policy formulation. Regular feedback channels need to be established to inform relevant policies in a timely manner. Furthermore, there is also a need for mandatory budgetary allocations for the evaluation function at all policy and program levels. Evaluations can provide a bridge between the beneficiaries and the policy makers. Participation of stakeholders needs to be made mandatory and ensured by employing participatory methodologies and building capacities. Currently, public sector programs in Pakistan are using conventional M&E procedures, mostly limited to monitoring of progress. There is an ever greater demand for endorsing comprehensive mandates and state-of-the-art mechanisms for monitoring and especially evaluation of public sector programs and policies. M&E frameworks for policies and programs need to be devised, outlining specific and measurable indicators for outcomes and impacts and establishing baselines and targets. The ongoing data gathering mechanisms at the national level and at the program level need to be further strengthened by including participatory and qualitative data collection and processing tools. To ensure that evaluations are used as a policy tool, high-level coordination is required to make sure that relevant feedback from the impacts of programs finds its way to policy makers. The establishment of a national-level evaluation organization may help in bridging the gap between policy makers and implementers and will open communication channels for feedback and its use in policy making. Keywords: Institutionalization; Frameworks; Capacities; Participation; Resources;

O 329

Urban Planning in France: what role for evaluation?


A. Guitard 1
1 Université Paris-Est-MLV, Marne-la-Vallée, France

Bio: Ms Guitard is completing a PhD in Urban Planning at the Université Paris-Est. Her thesis addresses the theory and practice of evaluation in the field of urban planning policy and implementation in France. The empirical research focuses on the way key actors in local governments and communities implement and use the processes and results of evaluation in urban planning initiatives. Rationale: This paper will present the state of Ms Guitard's research on evaluation applied to urban planning projects in France with a view to: first, showing the specific difficulties encountered by local authorities seeking to develop evaluation processes within that type of public action on their territory; and second, focusing on patterns of evaluation identified in the field of urban planning operations, to see how they can influence the realization of urban planning projects. Narrative: Today, local actors involved in urban planning operations are facing new challenges, since their political, social, economic and cultural fabrics are embedded in the two major trends of decentralization and globalization. Decentralization, beginning in France in the early 1980s, has given more legitimacy and access to local actors. This access has created a demand from the bottom up for more accountability, creating opportunities and expectations for evaluation practices that are still not well understood by actors in the local governance universe, particularly when confronted with top-down demands for evaluation. Globalization has also introduced new players such as international or supranational organizations and associations, global firms and multi-national companies, opening up access for actors from the top who are also asking for accountability as soon as a project calls for their financial contribution.


The multiplicity of actors with different interests and demands poses challenges for those who are in charge of the development of urban planning operations, particularly when faced with shrinking budgets and the need to integrate evolving norms and standards. In this context, those in charge of urban planning operations claim they need to change their practices, and evaluation is sometimes put forward as the silver bullet. However, although those in charge of urban planning operations talk a lot about evaluation, it is still difficult to find examples of evaluation in practice.


This paper attempts to explain why, despite its prominence in official discourse, the practice of evaluation remains weak in urban planning in France. It considers three categories of variables: operational, cultural and political, and draws parallels between the development trajectories of public programs generally and the implementation of urban planning projects in France. Finally, the paper will provide a picture of which actors, in which territories in France, have used evaluation to develop their urban planning projects, attempt to identify patterns of practice, and see if the use of evaluation influenced the way in which the project was carried out. Keywords: Urban planning; Local government; Decentralisation; Evaluation use; Multiple stakeholders;



S3-21 Strand 3

Panel

Reframing the debate: what is ethical practice in international development evaluation?
O 320

What is ethical practice in international development evaluation? Challenges and possibilities for re-framing the debate
C. Duggan 1, S. Zaveri 2, K. Hay 3
1 International Development Research Centre, Evaluation Unit, Ottawa, Ontario, Canada
2 Independent evaluator, Mumbai, India
3 International Development Research Centre, New Delhi, India

11:15 – 12:45, Friday, 5 October, 2012

Over the last decade, the profession of evaluation has seen the development of standards and guidelines to help evaluators and evaluation commissioners to navigate ethical issues in evaluation practice. Operationalization of these principles, however, has been uneven and, from the perspective of many, unsatisfactory. A growing number of evaluators are calling for greater cultural competence in evaluation, noting that in an increasingly networked world society there is a need to re-examine existing frameworks for ethics in evaluation; this is particularly true for evaluators of international development programs. The central question of this panel is: what are the challenges, prospects and strategies for strengthening ethical practice in international development evaluation? The panel consists of three papers set in varying contexts which touch on a number of cross-cutting themes, including the protection of vulnerable groups, the incorporation of rights-based and equity-oriented thinking, and the responsibilities of commissioners and evaluators to do no harm. The paper by Sonal Zaveri argues that a human rights paradigm is important for an emancipatory and transformative evaluation approach, and that an ethical questioning of implementation strategies is needed. Drawing from experiences of evaluation in South Asia involving sex workers, migrants and children as stakeholders, it poses the question of what should take precedence: evaluation for results, or evaluation that is human rights focused and sensitive to gender and ethical considerations? In her paper, Katherine Hay argues that the framing and practice of ethics in evaluation inadequately reflects changes in development thinking integrating equity, rights-based, and feminist principles. Her paper draws on efforts to evaluate gender and social equity oriented programs in South Asia, many of which are led by evaluators with a rights or social justice standpoint. Looking across examples from South Asia, Hay develops an idea she calls deep ethics, which attempts to weave insights from new thinking on the idea of development with new insights from efforts to evaluate that idea of development on the ground. Drawing from research emerging from a multi-country evaluation research project that examines evaluation practice in conflict-affected settings, Colleen Duggan's paper delves into some of the ethical and political challenges frequently faced by evaluators, evaluation commissioners and evaluation stakeholders in these highly complex and often fluid environments. Anchoring her argument in the principles of non-maleficence and beneficence, she examines the limitations of existing frameworks and guidelines. Using examples from case studies in Northern Ireland, Rwanda, South Africa, and South Asia, she argues that the extreme contexts in which evaluation is embedded call for new rules of engagement and sketches out the parameters that should guide ethical evaluation practice in conflict-affected settings. Keywords: Equity-based evaluation; Conflict evaluation; Human Rights Evaluation;


S2-30 Strand 2

Paper session

The use (and abuse) of evaluation


S2-30
O 321

The challenge of participatory methods for political accountability


M. Walton 1
1 Massey University, School of Health and Social Services, Wellington, New Zealand

11:15 – 12:45, Friday, 5 October, 2012

Mat Walton is a lecturer in the School of Health and Social Services, Massey University, New Zealand. His research focuses on the application of complexity theory to public health policy analysis and evaluation methods. Mat is currently undertaking a three-year project exploring methods for evaluation of wicked policy problems. Objective: This presentation will consider the implications of participatory evaluation methods in a networked society for the role of traditional political accountability frameworks. Complex policy problems, such as improving population-level nutrition, are increasingly being met with policies that devolve strategy development and funding decisions to local organisations. These devolved approaches are consistent with ideas of network governance. The result may be programmes at the local level with objectives and actions that differ across areas. This difference creates challenges for evaluation when developing a national or regional position on what is working, how and why, to inform policy. Increasingly, evaluation methods in such devolved policy areas are utilising participatory methods within case-comparison designs. However, such approaches reduce the control of national or regional policymakers to define evaluation questions, manage result dissemination and develop policy to suit politics at this more aggregated level. By understanding the political tensions that networked policy and evaluation designs can produce, evaluators may be able to negotiate a path through such tensions. This presentation draws upon a wider research project exploring the use of complexity theory for evaluation of wicked policy problems. Information sources include literature, evaluation case studies and interviews with evaluation practitioners, commissioners and users. Justification: Understanding the political context within which evaluation is undertaken is relevant to all evaluators. The context will impact on support for, and understanding of, emerging networked evaluation methods by decision makers and evaluation commissioners. The presentation should be of interest to a wide range of conference participants, including evaluation practitioners, commissioners and users. Keywords: Complexity theory; Participatory methods; Politics;

O 322

Peace-precarious accountability and transparent evaluation: risks and responsibilities


C. Elkins 1
1 Belling the Cat LLC, Hillsborough, NC, USA

While much of the globe races toward a networked information environment, significant populations remain relatively disadvantaged. Geographic, linguistic, cultural, or political challenges can restrict these groups' access to relevant knowledge, their capacity to process and adapt lessons learned, and their scope for progress. Evaluation in these contexts confronts several compound barriers to operationalizing critical values (accuracy, completeness, participation, relevance, use) inherent in the process. For example: trust disparities have exaggerated effects where participants perceive personal risk, affecting accuracy and completeness; subgroups with partial access can develop real or perceived elite status, distorting completeness, participation, and relevance; and physical security concerns for evaluators, and associated costs, limit the direct or facilitated feedback of findings to local stakeholders, compromising relevance and use. There is no simple methodological fix for the constellation of challenges in this context to valid and ethical evaluation outcomes. Yet the professional evaluation community has, accordingly, an even greater responsibility to develop adaptive approaches that protect and support the interests of not only linked-in groups but also those on the fringes or out of bounds. This paper analyzes institutions and risk as they affect the practice and utilization of evaluation in such situations, in order to better understand the ways we need to account for information asymmetries and divergent group dynamics. Using a flexible model and cases formalized from experience in Africa, the Middle East, and Asia, we explore professional standards and field practices that can help individual evaluators and teams cope with the challenges and find opportunities to recalibrate incentives toward more balanced and constructive evaluation outcomes. A key role for heightened transparency, to promote rapport and extend communication options, helps mitigate potential damage by leveraging available networks. Keywords: Evaluation methods; Field practice; International development; Risk analysis;


O 319

Pupil surveys: an easy way to school development, or a bid for more competition between schools?
L. Monsen 1
1 Lillehammer University College, Dep. of Pedagogy and Social Work, Lillehammer, Norway


Pupil surveys have been used for more than a hundred years as a contribution to school development. Dewey emphasized the importance of listening to pupils in his classic School and Society, published in 1900. Looking into the school development literature of the last 50 years, pupil surveys have had a prominent place in documenting the developmental needs different schools had to deal with. In the last 20 years we have seen new developments in the use of pupil surveys. Now the emphasis is on pupil surveys as a method to differentiate between schools: from excellent to bad, from good learning environments to bad learning environments. In the paper I will discuss my experience of using a national pupil survey to analyze data from one county in Norway. When I found rather big differences between upper secondary schools in this county, I had to give some credible answers to a simple question: why this difference? The usual answer you find in research from Norway and other countries using this kind of survey is something like this: the important variable and variance depend on school culture. In my earlier reports and articles I have drawn the same conclusion. After looking at my data from other points of view, I have some more complicated answers, pointing to, among other things, the method used, the limitations of the survey format, and culture as a metaphor for many different theoretical foundations for analyzing the data. Even if I will not come up with a very firm conclusion, I hope to invite a more broad-minded discussion on how to use pupil surveys. Keywords: Pupil Survey; School development; Competition between schools; Research with surveys;


S5-24 Strand 5

Panel

Environmental evaluation in the EU: a simple idea and a hard practice in a complex context
O 325

Environmental evaluation in the EU: a simple idea and a hard practice in a complex context
P. Mickwitz 1, L. Eriksson 2, L. De Smet 3, M. Keene 4
1 Finnish Environment Institute, Helsinki, Finland
2 Swedish Environmental Protection Agency, Evaluation section, Stockholm, Sweden
3 Research Institute for Work and Society (HIVA KU Leuven), Leuven, Belgium
4 US Environmental Protection Agency, Washington, USA

11:15 – 12:45, Friday, 5 October, 2012

Bios: Per Mickwitz: Research director at the Finnish Environment Institute (SYKE) and professor of environmental policy at the University of Tampere. Lisa Eriksson: Head of the evaluation section at the Swedish Environmental Protection Agency (SEPA). Lieven De Smet: Research manager environmental policy at the Research Institute for Work and Society (HIVA KU Leuven, Belgium). Matthew Keene: Social scientist at the U.S. Environmental Protection Agency and coordinator of the US Environmental Evaluators Network. All contributors to this session are heavily involved with the European Environmental Evaluators Network (EEEN), which saw the light of day in 2011. A first EEEN forum was organised by HIVA KU Leuven in February 2012. The 2013 EEEN forum will be organised in Stockholm and hosted by the Swedish Environmental Protection Agency.

Content: In order to develop new environmental policies, it is important to evaluate those that have already been adopted. However, this simple idea is difficult to apply, especially in the complex governance context of the EU. Attributing impacts to specific policies is challenging, and making evaluations useful for political decision-making is even more demanding. Recently, emphasis on retrospective evaluation has increased in the EU. Nonetheless, more work is needed to create a well-established culture of policy evaluation in the EU. Environmental policy evaluation in the EU is still quite unsystematic, mostly ad hoc and lacking methodological rigour as well as relevance with respect to its actual use by national and local authorities, NGOs and interest groups. So far, neither the scale nor the quality of the EU evaluation efforts has been sufficient compared to the difficulties of the task. What is actually necessary is to combine and align comprehensive EU analyses with detailed local, regional or national case studies. The most crucial thing is to build a stronger community of European environmental evaluators. This is exactly what the EEEN wants to do: providing a forum where actors commissioning, using and producing environmental evaluations can come together and share insights and experiences.

Organisation of the session:
A 20-minute presentation by Per Mickwitz (chair of the session), Environmental evaluation in the EU: a simple idea and a hard practice in a complex context. This presentation will demonstrate the need to establish an environmental policy evaluation culture within the EU where efforts by actors at different levels are coordinated.
A 10-minute intervention by discussant Lisa Eriksson. This intervention will complement the case for collaboration and mutual learning by addressing the issue of scale using national, regional and/or local perspectives from Sweden, Finland and Belgium.
A 5-minute intervention by discussant Lieven De Smet. This intervention will map how the EEEN aims to build a stronger community of European environmental evaluators via an annual forum, the formation of working groups around certain issues, an online networking and exchange platform, etc.
A 55-minute plenary discussion between the panel and the audience. Matt Keene is part of the panel and can share practical experiences from the US Environmental Evaluators Network.

Keywords: Environmental evaluation; Complexity; Scale; European Environmental Evaluators Network; Retrospective analysis;


S4-22 Strand 4

Paper session

Evaluation of local, regional and cross border programs II


O 326

Evaluating complex multi-strand programmes: lessons from the EU labour and social policy field
A. Cancedda 1, M. Canoy 1, J. Dodd 2, V. Donlevy 3, P. Jeffrey 4, M. Peters 5, E. Van Nuland 5
1 ECORYS Nederland, Rotterdam, Netherlands
2 ECORYS UK, Leeds, United Kingdom
3 ECORYS UK, London, United Kingdom
4 ECORYS UK, Birmingham, United Kingdom
5 ECORYS NL, Rotterdam, Netherlands

11:15 – 12:45, Friday, 5 October, 2012

With the tendency to rationalise and unify financing instruments, there is an increasing demand by EU institutions for large-scale assessments of the outcomes of programmes that consist of multiple policy strands and very different types of activities. These programmes also yield effects at different levels (EU, national, local), although the focus is most often on the EU dimension. Applying a rigorous methodological approach to such programmes is often a challenge and there is a need to make the most of a wide range of information sources. Consultation of stakeholders is often key, not only to elicit judgements and opinions but also to obtain information that can help fill the gaps. At the same time, facts and data available from supposedly objective sources may be equally or more subjective, as they reflect, for instance, the reporting strategies of beneficiaries. Quantitative data on outputs are less hard evidence than usually thought, because their interpretation is not immediately evident. In this context the distinction between objective and subjective, hard and soft evidence tends to blur. A meaningful picture can only be achieved through careful triangulation and a sound interpretation of information of different types. The overall conceptual and analytical framework in the end becomes more important than the individual techniques through which information is collected. The paper will discuss these and other issues in the light of recent evaluation experiences of the authors with EU programmes in the labour and social policy domain. The purpose is to contribute to the recent debate promoted by the EES on the current evidence-based evaluation wave. Keywords: Social; Programme evaluation; EU;

O 327

Evaluation of the Framework Loan Instrument of the European Investment Bank


B. de Laat 1
1 European Investment Bank, Operations Evaluation, Luxembourg, Luxembourg

The paper will present the results of the evaluation of the Framework Loan instrument of the European Investment Bank (EIB). The EIB is the policy bank of the European Union (EU), the mission of which is to support EU policies (www.eib.org). In lending funds to borrowers, and excluding other activities such as technical assistance or equity finance, three main instruments are at the EIB's disposal: investment loans, global loans and framework loans. Investment loans are used for often substantial investment projects, the contours and objectives of which are clearly defined ex ante and against which progress and outcomes can be assessed, e.g., infrastructures such as public transport or utilities. Global loans allow the EIB to finance large quantities of small projects that the EIB would be unable to appraise and monitor by itself, by passing through financial intermediaries. Typically, small SME investments are financed through this instrument, using banks or leasing companies. Finally, framework loans are used to finance large quantities of smaller or bigger projects, the precise list and details of which have not yet been fully defined at the start of the investment programme. This is typically the case for schemes that are financed within long-term investment plans of, e.g., public authorities, of which the broad contours are sketched but which are only gradually realised. Whereas investment loans and global loans have been around for a long time, the framework loan instrument was introduced in the late 1990s and was fully fleshed out only in 2005. It appears to be used mainly to finance regional or municipal authorities' investment plans, often, but not only, in conjunction with EU structural fund funding. Today more than 10 % of the EIB's annual lending activity is provided by using this instrument. The paper discusses the results of a recent evaluation of 24 different framework loans across the EU27. This covered both loans that involve structural funds co-financing and loans that do not. It also covers multi-sector and single-sector (e.g. social housing) loans. The evaluation assesses the performance and appropriateness of the framework loan instrument. The paper addresses both the substance of the projects financed under the loans, and the effectiveness and efficiency of using this specific instrument in comparison to other instruments that are at the EIB's disposal. The paper will furthermore address the methodological complexity of evaluating such loans, as they are often multi-layer (i.e., intermediated, sometimes twice), multi-sector and multi-financier, i.e. they often depend on a variety of co-financing sources, such as private finance and, as mentioned, EU funds. Most importantly, however, whereas global and intermediate objectives are generally set at the start of investment plans, the specificity of framework loans as opposed to many other types of interventions is that operational objectives are not clearly spelled out, which poses a specific challenge to evaluation. Keywords: Framework loan; Structural funds; Finance; Infrastructure; Local authorities;


S2-23 Strand 2

Paper session

New or improved evaluation approaches II


S2-23
O 330

The contribution of ethnography to evaluation: a summative approach for assessing participatory healthcare innovation in two clinical settings in the UK
S. Vougioukalou 1, A. Boaz 1
1 King's College London, Primary Care and Public Health Science, London, United Kingdom

11:15 – 12:45, Friday, 5 October, 2012

In the networked society, where medical information and patient fora are increasingly becoming readily available online, the potential for patients to become increasingly empowered and knowledge-enfranchised social actors is amplified. Recent health and social care policy initiatives that aim to reduce health inequalities and strive towards a more inclusive democratic society recognise the need for participatory methodologies as well as evidence-based practice. While ethnography and evaluation have been developed as two distinct fields, a theory and method for ethnographic evaluation was defined over 25 years ago (Dorr-Bremme 1985). Despite this, in the context of health care, ethnographic evaluation has mainly been implemented in developing countries, where ethnography is more commonly treated as a supplement to programme evaluation. This trend can be attributed to the contradictory epistemologies that have shaped ethnography and evaluation: namely phenomenology, participant observation and emic phenomenology on the one hand, and empiricism, input-output-outcome models and etic objectivity on the other hand. As emerging healthcare service improvement paradigms move away from top-down interventions with limited patient and public participation, evaluation models should follow suit. In this study, an ethnographic evaluation was used to assess the impact of acceleration in a participatory healthcare intervention. Using Experience-based Co-design methodology, patients and staff worked together to develop improved services in two intensive care and lung cancer pathways in the UK. The mixed-methods evaluation design comprised intervention event observations, work-based observations, end-of-session evaluation forms, and formal and informal stakeholder interviews at participating hospitals and universities. Blending ethnography and evaluation led to the creation of a new adaptable and responsive methodological toolkit. This has captured the diversity and complexity of perceptions of healthcare delivery among a range of participants such as terminally-ill patients, survivors of rare medical conditions, hospital managers, university professors, researchers and policy advisers. This paper will address the complexities of ethnographic evaluation and demonstrate its suitability for reflecting the plurality of voices involved in participatory healthcare sector research. It will discuss how the changing role and agency of patients can be reconceptualised and re-addressed within healthcare evaluation. This study is funded by the Service Delivery and Organisation Programme of the National Institute for Health Research, reference 10/1009/14. Keywords: Healthcare innovation; Ethnographic evaluation; Patient and public involvement; Participatory methodologies;

O 331

From ongoing evaluation to learning evaluation and continuing research: Conceptions of an evaluation approach in a Swedish translation process
K. Nordesjö 1
1 Linnaeus University, School of Social Work, Växjö, Sweden

The aim of this paper is to explore and define the contents and steering characteristics of the Swedish evaluation approaches learning evaluation and continuing research (följeforskning), which are the results of the EU-endorsed approach ongoing evaluation within the structural funds programming period 2007–2013. The latter was formulated with the purpose of delivering more useful and well-timed evaluation results during the programming period, enabling a higher degree of usability and, if necessary, the ability to steer and change the direction of the programs. In Sweden, ongoing evaluation was implemented as learning evaluation or continuing research depending on whether the evaluation takes place within the regional or the social fund. In this paper they are seen as synonyms, and in Sweden they are used on a project level as well as on a program level. Courses in the evaluation approach are regularly arranged at universities to educate evaluators who work on a project level within the structural funds, in which learning evaluation and continuing research are mandatory. A translation framework is used to explore the course participants' understandings and conceptions of the evaluation approach, presuming that the content of the approach is being transformed by different actors from a European level, via a Swedish governmental level, before finally arriving at a university level where evaluators make an interpretation prior to their evaluation practice. The empirical material is a survey of all participants in the university courses in learning evaluation and continuing research 2008–2010. 48 evaluators from a population of 131 participants answered questions about the content of the approach. The evaluators divide into three different groups of understandings of the approaches, groups that mirror different practices and steering characteristics. Supplementing the survey are interviews with course administrations at six Swedish universities, which give context to the courses and draw attention to the course management as a translating process in itself. The paper concludes that there exist heterogeneous conceptions of the content of learning evaluation and continuing research concerning the project level, which could result in an equally heterogeneous evaluation practice. Different conceptions pose questions regarding the original idea and steering concept of ongoing evaluation, and whether it has been preserved, transformed or has vanished during the translation process. Keywords: EU; Ongoing evaluation; Translation; Sweden;


O 332

Evaluation practice in Latin America: the case of the Sistematización approach


P. Rodriguez-Bilella 1, E. Tapella 1
1 ReLAC (Latin America Evaluation Network) and IOCE / CONICET, San Juan, Argentina


Evaluation practice in Latin America is relatively unknown in the North. This presentation contributes to building more comprehensive knowledge and documentation of development evaluation in the global south. Sistematización has been identified as a participatory and multi-perspective methodological approach that emerged in the 1970s in Latin America. While much of what is currently used in the global south might be adoption or adaptation of methods from the global north, sistematización is proposed as an innovation in evaluation theory and methodology. Akin to systemic approaches in evaluation, it can be understood as a process of reflecting on the experience of a project or programme in order to learn from it. Through sistematización, practitioners and evaluators critically reflect on and make sense of an experience, turning the lessons derived from that reflection into new and explicit knowledge, which can inform a new round of practice and also be communicated to others. The presentation will explore the potential of sistematización to assess progress, outcomes and impact when operating in situations that are substantially complex but which also have simple and complicated dimensions. More concretely, it will weave together a methodological introduction to sistematización as a form of knowledge production, set sistematización within the field of systemic approaches in evaluation, and present a case study in order to show sistematización in action. Keywords: Case study; Sistematización; Complexity; Latin America; Systemic Approaches;


S4-02 Strand 4

Paper session

Ex-ante evaluation through cost benefit and systems analysis


O 334

Mission impossible? Applying cost-benefit analysis in social policy: the case of the potential introduction of EU paternity leave measures
I. Pavlovaite 1, C. Juravle 2, T. Weber 1
1 GHK, Birmingham, United Kingdom
2 GHK, London, United Kingdom

11:15 – 12:45, Friday, 5 October, 2012

In the world of policy making, there is an increasing need and desire to quantify the anticipated positive and negative impacts of proposed legislative or policy initiatives. Whilst established quantification methodologies exist in areas such as transport or regional development, this is less true in the field of social policy. The challenges of quantifying the costs and benefits of social legislation or policy are of course considerable, not least because they require establishing a monetary value of social benefits. The paper deals with this methodological challenge by examining the example of applying cost-benefit analysis to the potential introduction of paternity leave measures at the EU level. The paper presents the application of cost-benefit analysis to estimate the socio-economic and financial costs arising from the possible introduction of paternity leave in 27 EU Member States. The paper discusses the four key steps in the analysis applied:
Definition of the baseline position for all Member States and elaboration of potential policy scenarios.
Quantification of the population willing and eligible to take leave, which defines the demand for paternity leave.
Quantification of the costs and benefits for the eligible population taking possible paternity leave. The following impacts are discussed: earnings-related impacts (e.g., compensation), administrative costs, replacement of absent staff and production impacts. A distinction is made between socio-economic costs (e.g., foregone production) and financial results (e.g., compensation payments, fathers' foregone earnings). The quantitative analysis assessed each of the impacts on the three main categories of stakeholders included in the model: public authorities, employers and employees. Additional socio-economic impacts of paternity leave and other stakeholder categories have been assessed primarily in a qualitative manner, based on evidence presented in the academic and policy literature. The policy rationale behind the possible introduction of paternity leave measures has been analysed to provide the basis for articulating the expected benefits. Such expected benefits were then linked to anticipated socio-economic effects.
Discounting of costs over time, to take account of the fact that many impacts occur at different points in time. The net present value (NPV) of each option by Member State was calculated.
The research process and outcomes have shown that while there are undoubtedly various benefits to paternity leave, these benefits are largely difficult to quantify. When assessed in a quantitative way, valuations of these benefits are often highly subjective. A qualitative assessment based on the available literature was therefore provided for these impacts, including where possible a scaling of the impacts for consideration in the CBA. The paper then draws on the findings that the manifestation of costs and benefits would strongly depend on the baseline situation in each country with regard to gender equality indicators, current family leave provisions, the generosity of compensation provided during leave and the wider work-life balance framework. The paper concludes by drawing out the methodological challenges encountered in the application of cost-benefit analysis to social policy and suggests some pointers for future research. Keywords: Cost benefit analysis; Social policy; Paternity leave;
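The discounting step mentioned in the abstract above typically rests on a standard net present value calculation. A generic form is sketched below for orientation only; the symbols (benefits B_t and costs C_t in year t, discount rate r, horizon T) are illustrative assumptions and are not taken from the paper itself:

\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}

Under such a convention, each policy option can be compared across Member States on the basis of its discounted net balance of costs and benefits.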

O 335

Cost-benefit analysis of evidence-based programs for children and youth
D. S. Bojsen 1, L. M. Christiansen 2, M. Skov 3, S. B. Nielsen 4
1 Ramboll Management Consulting, Social Management, Copenhagen, Denmark
2 Ministry of Integration and Social Affairs, Copenhagen, Denmark
3 Ramboll Management Consulting, Economics, Copenhagen, Denmark
4 Ramboll Management Consulting, Evaluation Society, Copenhagen, Denmark

In 2011 the Danish Ministry of Social Affairs decided to conduct a cost-benefit analysis (CBA) of three evidence-based programs for children and youth (The Incredible Years, Multisystemic Therapy (MST), and Multidimensional Treatment Foster Care (MTFC)). The Ministry has been responsible for the demonstration project and the implementation of these methods in selected Danish municipalities. The Ministry experienced difficulties with the implementation process, as many municipalities would not invest in new interventions given the financial situation. The aim of the analysis was to analyse the shared costs and benefits associated with using the evidence-based methods as alternatives to the standard treatment for vulnerable children and youth. A second aim of the analysis was to establish a cost-effectiveness analysis of implementing the programs at a municipal level. A number of municipalities were involved in the analysis in order to analyse the direct impact for the municipalities. To establish a baseline, an extensive data analysis of all former recipients of support for vulnerable children and youth was conducted. The analysis was made possible by accessing the database within Statistics Denmark, which has a complete data set of all recipients of placement and support for vulnerable children since 1977. By combining these data with data about crime, health care, education, employment benefits and employment history, and attaching shadow prices to these outcomes, a complete picture of the costs associated with former vulnerable children and youth was calculated.

The benefit analysis was challenged by the lack of solid Danish evaluations using an experimental or quasi-experimental design. The analysis was therefore based on general assumptions inferred from international and Nordic RCTs and cost-benefit analyses of the methods involved. To support the practical use of the CBA, a simple tool for computing the specific costs and benefits for a municipality was created and tested in two municipalities. The tool gave each municipality the chance to do different types of calculations with different assumptions about success rate, cost of program implementation and the cost of treatment as usual.


In this paper, the external evaluator will present the design of the cost-benefit analysis and the central conclusions and implications for the implementation process of the evidence-based programs. The presenters will discuss the deliberations and trade-offs made when designing a cost-benefit analysis within a concrete policy setting. Keywords: Cost benefit analysis; Marginalised children; Effect analysis;
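The municipal computation tool described above is not documented in detail in the abstract; the sketch below only illustrates the kind of adjustable-assumption calculation it reportedly supports, with all parameter names and figures invented for the example.

def net_benefit_per_child(program_cost, treatment_as_usual_cost,
                          success_rate, avoided_future_cost):
    """Net benefit of offering the evidence-based program instead of treatment as usual.

    success_rate is the assumed share of participants for whom the avoided future
    public cost (placement, crime, health care) actually materialises.
    """
    extra_cost = program_cost - treatment_as_usual_cost
    expected_benefit = success_rate * avoided_future_cost
    return expected_benefit - extra_cost

# A municipality can vary the assumptions and see how the result changes:
for success_rate in (0.2, 0.3, 0.4):
    result = net_benefit_per_child(program_cost=110_000,
                                   treatment_as_usual_cost=60_000,
                                   success_rate=success_rate,
                                   avoided_future_cost=400_000)
    print(f"success rate {success_rate:.0%}: net benefit {result:,.0f} DKK per child")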

O 336


Improvement of the cost benefit analysis process by an increased level of communication and trust between experts
E. Beukers 1, M. te Brömmelstroet 1, L. Bertolini 1

University of Amsterdam, Human Geography Planning and International Development, Amsterdam, Netherlands

The Cost Benefit Analysis (CBA) is an ex-ante evaluation tool which plays an important role in Dutch and European infrastructure planning. However, many content and process problems occur when performing a CBA, especially when evaluating complex infrastructure projects with wider goals for spatial-economic development. CBA processes in the Netherlands, for example, have been characterized by low levels of communication and trust between the experts involved: the planners and the economists. Furthermore, the use of CBA in the Netherlands has been experienced as a final examination, rather than as a learning tool to improve plans. An intervention was designed to improve the CBA process and stimulate its use for learning instead of its use solely as a final assessment. This intervention aimed to increase the levels of communication and trust between the CBA experts and was tested in two simulations of CBA processes with CBA practitioners. The paper will firstly describe the intervention to increase communication and trust between CBA experts. Secondly, the paper will describe the two simulations. Thirdly and finally, the paper will present and reflect upon the outcomes of applying the intervention. The outcomes were measured through two methods: a survey among the participants, taken before and after the simulation, and focus group sessions in which the participants' experiences with the intervention were discussed. Keywords: Cost benefit analysis process; Cost benefit analysis as a learning tool; Communication and trust intervention; Simulation;
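As an illustrative aside (not the authors' instrument or data), a before/after comparison of participant survey scores of this kind could be summarised as in the sketch below; the ratings and the 1-5 scale are invented.

# Invented example of comparing participants' survey scores before and after a CBA process simulation.
from scipy import stats

trust_before = [2.5, 3.0, 2.0, 3.5, 2.5, 3.0, 2.0, 2.5]  # hypothetical 1-5 ratings
trust_after  = [3.5, 3.5, 3.0, 4.0, 3.0, 3.5, 2.5, 3.5]

mean_change = sum(a - b for a, b in zip(trust_after, trust_before)) / len(trust_before)
t_stat, p_value = stats.ttest_rel(trust_after, trust_before)  # paired pre/post comparison
print(f"Mean change in reported trust: {mean_change:+.2f} points (p = {p_value:.3f})")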


S4-26 Strand 4

Paper session

Evaluation of income support, credit and insurance interventions II


O 337

Evaluating the Viennese monetary funding of enterprises: Lessons learnt from a portfolio evaluation at regional level
I. Fischl 1, S. Sheikh 1, S. Reidl 2, H. Gassler 2
1 Austrian Institute for SME Research, Vienna, Austria
2 JOANNEUM RESEARCH, Policies, Vienna, Austria

This evaluation, conducted by the Austrian Institute for SME Research in cooperation with Joanneum Research, deals with a programme portfolio (monetary funding of enterprises) of four funding agencies at regional level (Vienna). 1. The Vienna Business Agency, with a focus on the modernisation and increased competitiveness of companies, funds innovation, initiates cooperation, supports the opening of new markets and helps establish international investors and companies in Vienna. 2. The ZIT centre for innovation and technology, with a focus on the technology, innovation and research and development sectors. 3. The departure agency, which is responsible for handling the special funding programmes tailored to the requirements of the creative economy and supports the Viennese creative industry in combining creativity with economic usability. 4. The waff agency, in the field of labour market and economic policy, which promotes staff qualification and provides recruitment assistance for companies. The subject of the evaluation is the analysis of the objectives, complementarities and potential overlaps of the 33 programmes during the period 2005-2009. The methods applied were the analysis of various documents, interviews, a logic chart analysis and a survey among all supported enterprises. This paper primarily deals with the results and conclusions of the evaluation and will elaborate on the profiles of the four agencies resulting from the analysis of design (i.e. complexity vs. broadness), effects, additionality and the positioning of the programmes in view of other relevant programmes at national level. Further, the paper will present an approach developed to position the individual programmes along design aspects and effects in order to obtain a feasible overview of all analysed programmes of the four agencies. Keywords: Portfolio-evaluation; Regional policy; Funding of enterprises;

O 338

The role of qualitative research in impact evaluation: the case of SKY micro-insurance in Cambodia
I. Ramage 1, K. Ramage 1, K. Nilsen 1, P. A. Lao 1, J. P. Nicewinter 1

Domrei Research and Consulting, Phnom Penh, Cambodia

Theory-based impact evaluation, as advocated by Howard White, calls for the use of mixed methods of evaluation in order to fully assess the impact of an intervention. Applying this approach to the SKY micro-health insurance program provided a complete assessment of both the positive quantitative results of health insurance on household economic conditions and use of public health facilities, and the reasons for very low uptake of SKY membership and high dropout rates. We will present the qualitative findings from this impact evaluation, in order to support the use of multiple survey methods to generate a more complete impact evaluation. In this way, the project will be able to make effective recommendations for policy and decision makers at both the local and national level. The public health care system in Cambodia is predominantly financed by user fees, which limits access to health services among the poor and puts families at risk of catastrophic health expenditure. Community based health insurance (CBHI) is one of many interventions aiming to promote more equitable access to health services. SKY (Sokhapheap Krousar Yeung; Health for Our Families) micro health insurance, implemented by Groupe de Recherche et d'Echanges Technologiques (GRET), is a voluntary, community-based health insurance program, relying on a monthly registration and premium collection system at the family level. It has been operating in Cambodia since 1998 as an innovative model for extending health insurance to the underserved urban and rural poor. To assess the health and socioeconomic impacts of SKY, a comprehensive impact evaluation was commissioned involving a longitudinal quasi-experimental quantitative survey and a qualitative survey. The quantitative survey focused on identifying the impacts of SKY on health-seeking behaviour, individual health outcomes, and the socioeconomic status of households. However, the survey also showed that the large majority of families eligible for SKY declined to join initially, and the dropout rate among members was very high. Thus, the Village Monographs were developed as an innovative way to gather qualitative data surrounding the SKY micro-health insurance impact evaluation. The objective was to understand the reasons why people become and stay SKY clients, often in the face of contrary information and advice. The villages were identified with particular emphasis on representing various trends in SKY membership. Approximately 30 current and past SKY members were interviewed in each target village. We found that membership is clustered geographically, and is dependent on transportation time and costs, the previous experiences of individuals, and trust in SKY as communicated through family networks and community connections. Interestingly, virtually all respondents made deliberative decisions about their family's health-care coverage, based on the logical consideration of a number of well-articulated factors.


The findings of the Village Monographs complement the quantitative findings, and provide in-depth knowledge about the motivations and reasons for SKY membership uptake. This provides a comprehensive understanding of the impacts of SKY, which will be useful for the Cambodian government and development practitioners as they promote access to equitable health services. Keywords: Insurance; Mixed methods; Theory-based impact evaluation; Developing countries; Healthcare;


O 339

An Impact Evaluation of a Monetary Transfers Program on Well-being: Guatemala's MIFAPRO (Mi Familia Progresa)
A. Ruvalcaba 1, J. P. Gutierrez 1

Instituto Nacional de Salud Pública, Cuernavaca, Mexico

This article reviews the results of an impact evaluation of Guatemala's monetary transfer program, focusing on consumption indicators. The impact evaluation used panel data with a quasi-experimental design, with households (8,100) classified into three groups: those receiving their first MT (monetary transfer) in 2008, those receiving it in 2009, and a control group of households that had not yet received the MT. Eligibility was estimated by a proxy using the baseline data, without any additional validation against the real incorporation time, and with exposure to the program as the principal variable. For the impact evaluation (according to the follow-up survey) the three groups were re-defined according to the real incorporation time. The impact of MIFAPRO was estimated with two methodologies: first, propensity score matching (PSM) with difference-in-differences (DD), using a traditional approach comparing eligible and non-eligible households within a defined vicinity of the threshold that defines MIFAPRO's eligibility, as well as eligible households in areas where the program had not yet been implemented; second, a regression discontinuity (RD) with DD approach, which identifies the effects in households close to the threshold between eligible and non-eligible households. In both methodologies the groups were sub-classified into indigenous and non-indigenous households according to the self-definition of the head of the household. Results for well-being measured by consumption, following Carroll and Kimball (1996) and The World Bank (1990), show that in general there is a significant and positive effect of MIFAPRO on the well-being of the households as measured by consumption. The differences between the control group and the intervention groups were significant in the value of consumption (which includes monetary consumption and self-consumption), and the effect is even greater among indigenous households. Moreover, the households in the three groups showed growth (per adult equivalent) between 2009 and 2010. Across all groups, however, the value of consumption and expenses decreased; this is explained by the macro-economic context that affected the lives of Guatemalans, who suffered an economic crisis, with growth crashing from 3.3 % to 0.6 %, and two natural disasters (a tropical storm and a volcano eruption) considered the greatest of the past decade. The program allowed the households receiving the MT to maintain a certain level of steady consumption; in other words, it protected them from a major crisis. Keywords: PSM; DD; Poverty; Consumption; Well-being; Impact evaluation; RD; Monetary transfers;
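The estimation code is not part of the abstract; the sketch below is only a compressed, hypothetical illustration of a propensity score matching step combined with a difference-in-differences contrast, run on simulated data with the numpy, pandas and scikit-learn libraries, and it omits the regression discontinuity design, common-support checks and standard errors.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "indigenous": rng.integers(0, 2, n),
    "hh_size": rng.integers(2, 9, n),
    "baseline_consumption": rng.normal(100, 20, n),
})
# Simulated treatment assignment and follow-up consumption (all values invented).
p = 1 / (1 + np.exp(-(-1.0 + 0.3 * df.indigenous + 0.1 * df.hh_size)))
df["treated"] = rng.binomial(1, p)
df["followup_consumption"] = (df.baseline_consumption + 5
                              + 8 * df.treated + rng.normal(0, 10, n))

# 1. Propensity scores from observed covariates.
X = df[["indigenous", "hh_size", "baseline_consumption"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df.treated).predict_proba(X)[:, 1]

# 2. Nearest-neighbour match each treated household to a control on the propensity score.
treated, control = df[df.treated == 1], df[df.treated == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
matched = control.iloc[nn.kneighbors(treated[["pscore"]])[1].ravel()]

# 3. Difference-in-differences on the matched sample: change over time, treated minus matched controls.
did = ((treated.followup_consumption.values - treated.baseline_consumption.values)
       - (matched.followup_consumption.values - matched.baseline_consumption.values)).mean()
print(f"Matched diff-in-diff estimate of the transfer effect: {did:.1f}")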


O 340

Evaluation of Agriculture Recovery Project (through Vouchers and Cash transfer)


F. Khan 1

Catholic Relief Services, Islamabad, Pakistan

This abstract covers the findings of the final evaluation of the Agriculture Recovery project, which was implemented in the northern Sindh province of Pakistan for flood-affected communities. The floods of 2010 struck at the time when the rice crop was ready for harvest. Almost one fifth of Pakistan's total area was under water and all standing crops were lost because the water stood for weeks and months. CRS Pakistan extended its support through the Agriculture Recovery Project to enable people to plant for the coming wheat and rice seasons. In order to give beneficiaries choice and to better meet their needs, a voucher and cash grants strategy was adopted, which had never been implemented on such a large scale in Pakistan. More than 40,000 households were reached during this program for the Rabi and Kharif seasons. The program provided beneficiaries with wheat seed, rice seed, fertilizers and vegetable seeds through vouchers. Cash grants were provided for tractor hours and the preparation of land for planting. The final evaluation of the Agriculture Recovery project was conducted to see the impact of the voucher and cash transfer methodology, to incorporate lessons learnt into future programs, and to see whether voucher and cash transfer programming is feasible on a large scale during emergencies. The evaluation found that the project had significant positive impacts on the agricultural livelihood recovery of project participants. The provision of quality seeds to farmers was a major success. Over 99 % of project participants planted the seed that they received through the project vouchers and 93 % of project participants rated the voucher seed as higher quality than their normal seed. Yields from the program-provided seeds were almost double those from their traditional seeds. Debt was significantly reduced for project participants, decreasing the time it will take for them to return to pre-flood debt conditions by an average of 50 % compared to non-project participants. The voucher methodology was very successful in terms of timely provision and quality products while using local vendors. Cash grants proved to be a good way of transferring cash to beneficiaries. These and other findings from the evaluation will be shared during the conference presentation and discussion. Keywords: Final evaluation; Voucher and Cash transfer programming; Agriculture Recovery; Lessons;


S1-18 Strand 1

Paper session

Evaluation networks and knowledge sharing II


O 341

Evaluation of the effects of enterprise network projects in the EU's cohesion programmes


T. Lahdelma 1, S. Laakso 1, K. Viljamaa 2
1 Urban Research TA Ltd, Helsinki, Finland
2 Ramboll Management Consulting, Helsinki, Finland


Support to enterprise networks has an important role in the strategies and implementation of the EU's cohesion policy programmes. Clustering and networking have become instruments to increase the competitiveness of enterprises and, consequently, to create output and employment growth in disadvantaged regions. However, there is only weak evidence on the effects of networking projects on the competitiveness and business success of participating enterprises as well as on regional development. This study was a pilot project aiming at testing and developing methods for the impact evaluation of networking projects funded from the EU's cohesion programmes. We applied network analysis to identify a firm's network position and used statistical methods to test the significance of the identified network features in explaining the achievement of targets related to the project and the competitiveness and growth of the firm. We gathered data on the cooperation relations between firms and on their assessments of the benefits from participating in networking and clustering projects in the Central Finland region funded by the European Regional Development Fund during 2008-2011. In addition, we formed comparison groups of firms operating in the same region and the same industries. We found that firms participating in networking projects are on average faster growing, more productive and more profitable than firms of the reference group. To assess whether participating in a project has an effect on growth rate, productivity or profit, we tested the association between network activity, measured as centrality in a network, and the productivity of participating firms before and after the project. We found that centrality in the sense of being well connected to other firms does not predict productivity, but firms occupying a mediating position between other firms show significantly higher productivity growth rates. The association between a network position and the growth of productivity indicates the positive impact of networking projects. Being in a mediating position also predicts significantly higher satisfaction with the project. Moreover, we found that the position of firms in relations among groups based on industry is associated with the perceived benefits. The results of this pilot study should be interpreted with caution because they are based on a small number of case projects. However, according to the results, the network characteristics identified by network analysis can predict firm-level business indicators like productivity growth and firms' assessments of the benefits from participating in projects. Network analysis can give information on the functioning of networking and clustering projects which can be useful for the decision makers of the EU's cohesion programmes and for regional authorities. Keywords: Network analysis; Enterprise network; Cohesion policy;
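As a purely illustrative aside (not the authors' code or data), the two network measures contrasted in the abstract can be computed with the networkx library on a toy cooperation network; the firms and ties below are invented.

import networkx as nx

# Invented cooperation ties; FirmC bridges two otherwise separate clusters.
ties = [("FirmA", "FirmB"), ("FirmA", "FirmC"), ("FirmB", "FirmC"),
        ("FirmC", "FirmD"),
        ("FirmD", "FirmE"), ("FirmD", "FirmF"), ("FirmE", "FirmF")]
G = nx.Graph(ties)

degree = nx.degree_centrality(G)            # "being well connected"
betweenness = nx.betweenness_centrality(G)  # "occupying a mediating position"

for firm in sorted(G.nodes):
    print(f"{firm}: degree {degree[firm]:.2f}, betweenness {betweenness[firm]:.2f}")
# In the study design, such scores would then be tested against firm-level outcomes
# (e.g. productivity growth) and survey-based assessments of project benefits.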

O 342

Network in Practice: holistic evaluation of an innovative social programme


R. Godinho M. C. 1, C. Pereira 1

IESE - Instituto de Estudos Sociais e Económicos, Evaluation, Lisboa, Portugal

Purpose and scope of the evaluation: The Social Network Programme was created in Portugal in 1997, in a context of affirmation of active social policies, based on the mobilization of society in general and of each particular entity in territorial intervention, with the commitment of local actors to the activation and optimization of resources for the eradication of social exclusion in Portugal. The implementation of the Social Network Programme was done in a phased manner, starting with a pilot program in 2000 and then in successive stages of enlargement, across all administrative districts. The nature and the structures of the Social Network Programme have a key role in strategic planning and integrated intervention in the territories, in the field of social welfare and local development. The fact that it is run in partnership, mainly by public sector bodies (decentralized services and local authorities), solidarity institutions and other entities, which know the problems, have standing to intervene and enjoy a privileged position in close proximity to citizens, magnifies its importance and its innovative role. Objectives of the evaluation: Detect the types of impacts generated by the activities of the partnership structures in the Social Network. Build typologies for the different patterns of territorial social networks. Develop tools that enable the management of partnership work, as well as monitoring and self-evaluation of the results of the activity of these structures, providing an experience guide that reflects networking in the operationalization of interventions. Innovative methodology: The evaluation exercise was guided by the following principles: Inspiration from the methodologies of realistic evaluation, including experimental programs in the context of active social policies; A double approach to the evaluation exercise: both formative and summative; Recognition of the evaluation process as a learning opportunity for the improvement of public services, through their involvement in exercises of critical reflexivity.

Involvement of different participants (decision makers, managers or technicians) as co-authors of a systematic reflexive process on the implementation of the Social Network Programme and its results. Establishment of a new role for the evaluation function (and profile of evaluators) in the development of public policies in Portugal, which needs to adopt a new approach that encourages the participation of different stakeholders in the evaluation, through effective ways of involving them. In addition to ensuring data collection and communication of evaluation findings, the evaluation team is also responsible for promoting stakeholder participation, mobilizing skills to facilitate dialogue and mainstreaming (vertical and horizontal).


Given the complexity of the subject, the methodological device used to implement these principles focuses on stakeholder involvement, with stakeholders as active participants in the evaluation process, in a dynamic process of interaction with the evaluation team. The perception of the results generated by the joint action of almost 700 local structures involved systematic analyses organized into four domains of evaluation, 12 evaluation dimensions and 25 evaluation questions. The set of techniques is vast and includes, among others: a multi online survey (with more than 2,500 participants), 1 focus group, 4 steering groups, 2 Delphi processes, 20 case studies, benchmarking, typological analysis and a collaborative platform. Keywords: Social Network; Participatory Evaluation; Institutional Partnership; Multi-method perspective; Evaluation governance;


O 343

Evaluating Microtrends, Weak Signals and Complex Social Structures


P. Uusikyla 1

Net Effect Ltd., Helsinki, Finland

Evaluation paradigms are in flux. Today, most evaluation frameworks (both formative and summative), together with the designs applied, seem to produce rather expected, non-surprising and non-innovative findings and policy recommendations for complex societal problems. This is likely to weaken the utilization of evaluations and thus endanger the legitimacy of the evaluation institution as a whole. Why is that, and what can we do about it? This paper argues that the reasons are both exogenous and endogenous. Firstly, there is a fundamental flaw in the framing of evaluation questions: obvious questions give obvious answers. Secondly, the data applied often have validity problems, i.e. statistics, surveys and structured interview questions support poor working hypotheses behind evaluative inquiries. Thirdly, institutions that commission evaluations tend to be eager to minimize risks (both political and administrative) and form their terms of reference with a bureaucratic mindset. By shifting focus from neopositivistic approaches to systemic and user-driven methods of social inquiry (e.g. analysis of weak signals, crowdsourcing, mystery shopping and social network analysis) we are likely to revitalise the use of evaluation as a tool for innovative learning. This paper is not only theoretical, but also presents some case examples of the application of systemic evaluation methods.

O 344

Web 2.0 in Evaluation Practice: Requirements and a First Approach


C. Walloth 1, S. Siegel 2
1 University of Duisburg-Essen, Institute of City Planning and Urban Design, Essen, Germany
2 evalux, IT development, Berlin, Germany

The Web 2.0 has penetrated our society. Online feedback, rating, commenting or, most simply, liking has become a routine part of daily life. In stark contrast, many of the established evaluation tools look old-fashioned and risk triggering nothing but a weary smile on the faces of our clients. Hence, we, the evaluation practitioners, cannot afford to appear backward with methods that do not adopt the merits of the Web 2.0. Depending on the context, Web 2.0 tools can save a lot of the evaluator's manual work. Clients will adapt their requirements accordingly: first, they will ask for the less costly process which requires less time (and cost) of evaluation experts; second, they will ask for a more comprehensive data foundation, which can now be achieved with Web 2.0 tools. Both are vital: the qualitative enhancement of evaluation methods is ultimately required by the ever increasing speed and complexity of the globalized environment into which our organizational clients are embedded. This scenario demands that the evaluation community share its experience in the application of Web 2.0 tools, experience of which not much is available yet. With our contribution, we attempt to make the first two steps ahead: first, by discussing some concerns related to Web 2.0 tools in evaluation; second, by presenting key elements of an operationalized Web 2.0 process. We will focus on participative evaluation processes and provide insight from our own experience and examples of tools, and we will call for your contribution in the discussion following our presentation. How does the application of Web 2.0 tools influence evaluation processes? Who will eagerly participate and who will be reluctant? Does the quality of knowledge increase along with the quantity? And will results be accepted or doubted? Along the lines of these important questions, we will lay the ground for a sound application of Web 2.0 tools in certain evaluation settings. Interaction, collaboration and non-linear knowledge formation are key characteristics of Web 2.0 processes. How can such dynamic processes be operationalized? How can they be moderated and steered? Which functionality is required by the evaluator? And how can acceptance of this process be reached on the client side? These are the questions which have led to the design of a Web 2.0-based process which we will visualize using examples from our evaluation projects. Finally, we would like to engage in a discussion with the audience about other experiences of using Web 2.0 tools in evaluation processes. Which requirements did you need to meet? Which key learnings can you contribute? And where do you see further methodological work needed? Keywords: Web 2.0; Participation; Evaluation tools; Technical requirements;

Poster session
PS1 Strand 1

Poster session


P 001

The learning network on disability issues to develop accessible and user needs based practices: evaluation of the starting phase
H. Anttila 1, A. Autio 1, P. Nurmi-Koikkalainen 1

National Institute for Health and Welfare, Service System Department, Helsinki, Finland

Background: The United Nations Convention on the Rights of Persons with Disabilities (CRPD), the Finnish Disability Policy Programme 2010-2015, and the new assistive technology regulation emphasize citizens' rights and accessible, needs-based public and specialized services for people with disabilities. However, public service processes differ across municipalities, hampering the equal accessibility and quality of services. To address this issue, the National Institute for Health and Welfare started in autumn 2011 to build a learning network on disability issues to identify, co-develop, evaluate and disseminate user-needs-based practices. The learning network on disability issues, as one of the eight thematic learning networks in Innovillage (www.innokyla.fi), will be a permanent network to foster learning and knowledge generation on innovative practices and networking between developers, professionals, decision makers and users. The Innovillage project is run by THL, the Association of Finnish Local and Regional Authorities and the Finnish Society for Social and Health, and funded by TEKES and the Ministry of Social Affairs and Health. Objectives: To describe 1) the key objectives, working methods and challenges of the learning network on disability issues and 2) the starting phase results on networking and identified themes, learning and satisfaction of the participants and new ideas, and 3) to discuss the future directions and challenges. Methods: An action plan and risk analysis for initiating a learning network were performed in October 2011. The starting phase results were evaluated in March 2012 by screening the network participant list, the planning and evaluation matrixes and the participant feedback from each workshop held. Results: The objectives for the learning network were defined as follows: 1) to identify the key challenges and invite participants to the network, 2) to agree on shared objectives and working methods, 3) to work systematically on the set objectives, and 4) to disseminate the results. The learning network is coordinated by three coordinators, who negotiate and organize the networking, react to proposed development ideas and organize learning workshops together with network participants. By the end of March, two tutors and 110 participants had joined the network email list, and many participated in 5 half-day workshops on proposed themes: children with disabilities and their families, development ideas for the disability field and the National Development Programme for Social Welfare and Health Care, public procurement, community dwelling of neurologically injured people, and service processes. The participant feedback has been mainly positive and several development themes have been suggested. Four follow-up or spin-off workshops have been appointed. Challenges include the scarce time resources of both the participants and the coordinators, workshops held in the capital leading to long travel times for some of the possible participants, difficulties in the use of the new REA tool for describing and evaluating practices, and the development phase of the Innovillage environment. Keywords: Disability; Network; Evaluation; Practices; User needs;


P 002

Challenges and opportunities in evaluating a national web-based learning community for niche research career development
F. Lawrenz 1, M. Thao 1

University of Minnesota, Educational Psychology, Minneapolis, USA

The advent of social networking via the internet has allowed communication among formerly isolated or widely dispersed individuals. In the case of research groups this enhances the research capacity by connecting people at distant locations with similar research interests and by affording more opportunity for interdisciplinary research throughout the world. However, having project clients and participants at diverse locations requires rethinking of traditional approaches to evaluation. In some respects the presence of the internet allows quick communication and written documentation of conversations. It also allows for the digital capturing of video presentations or other types of communication that can then be used as data for evaluations. On the other hand, conducting evaluation at a distance raises both practical and validity issues. Researchers involved with plant breeding are a group encompassing a variety of expertise and tend to be spread thinly across the globe. A recent $6 million grant was provided in the U.S. to enhance the science of plant breeding and to increase the numbers, diversity and skills of people studying plant breeding at the PhD level. The project is relying heavily on developing a networked learning community. This complex project has five major clusters of activities including: relationship building with Minority Serving Institutions (MSIs), fostering of social networking between and among all groups involved in the project, on-line course development and presentation, incorporation of inquiry based learning into courses, and development and use of motivational videos/curriculum in courses at a variety of levels. The belief is that these activities will produce more, better prepared and connected plant breeders who come from diverse communities, as well as improved content and teaching procedures in plant breeding courses. The comprehensive evaluation was designed to assess outcomes through monitoring, social networking, surveys, interviews, participant observation, focus groups, case studies, and targeted research projects.

Because of the networked nature of the project, the evaluation offered both unique opportunities and substantial challenges. The presentation will provide the evaluators' perspectives on what worked and what didn't work in this networked environment. One of the first issues was interacting with the project personnel and participants to accomplish evaluation tasks such as constructing the logic model or interviewing participants. We used a combination of web and phone conferencing, which required balancing preferences and accommodating people from different places. We also built in in-person activities to foster relationships, such as conducting site visits and flying participants in for a focus group. The evaluation also made use of the available audio recordings of the communications that took place among participants via the web community hub. This allowed the evaluation to examine the type and depth of communication in terms of how passive or active the members were in connecting with each other, for example, how proactive the professors were in interacting with students at different sites and what the students were most interested in discussing. The presentation of this insider knowledge about conducting evaluation in a networked environment will provide valuable insights for other evaluators. Keywords: Internet community; Research training; Plant breeding;


P 003

Network in Practice: holistic evaluation of an innovative social programme


C. Pereira 1, R. Godinho M. C. 2
1 Instituto de Estudos Sociais e Económicos, Lisboa, Portugal
2 Instituto de Estudos Sociais e Económicos, Evaluation, Lisboa, Portugal

Purpose and scope of the evaluation: The Social Network Programme was created in Portugal in 1997, in a context of affirmation of active social policies, based on the mobilization of society in general and of each particular entity in territorial intervention, with the commitment of local actors to the activation and optimization of resources for the eradication of social exclusion in Portugal. The implementation of the Social Network Programme was done in a phased manner, starting with a pilot program in 2000 and then in successive stages of enlargement, across all administrative districts. The nature and the structures of the Social Network Programme have a key role in strategic planning and integrated intervention in the territories, in the field of social welfare and local development. The fact that it is run in partnership, mainly by public sector bodies (decentralized services and local authorities), solidarity institutions and other entities, which know the problems, have standing to intervene and enjoy a privileged position in close proximity to citizens, magnifies its importance and its innovative role. Objectives of the evaluation: Detect the types of impacts generated by the activities of the partnership structures in the Social Network. Build typologies for the different patterns of territorial social networks. Develop tools that enable the management of partnership work, as well as monitoring and self-evaluation of the results of the activity of these structures, providing an experience guide that reflects networking in the operationalization of interventions. Innovative methodology: The evaluation exercise was guided by the following principles: Inspiration from the methodologies of realistic evaluation, including experimental programs in the context of active social policies; A double approach to the evaluation exercise: both formative and summative; Recognition of the evaluation process as a learning opportunity for the improvement of public services, through their involvement in exercises of critical reflexivity. Involvement of different participants (decision makers, managers or technicians) as co-authors of a systematic reflexive process on the implementation of the Social Network Programme and its results. Establishment of a new role for the evaluation function (and profile of evaluators) in the development of public policies in Portugal, which needs to adopt a new approach that encourages the participation of different stakeholders in the evaluation, through effective ways of involving them. In addition to ensuring data collection and communication of evaluation findings, the evaluation team is also responsible for promoting stakeholder participation, mobilizing skills to facilitate dialogue and mainstreaming (vertical and horizontal). Given the complexity of the subject, the methodological device used to implement these principles focuses on stakeholder involvement, with stakeholders as active participants in the evaluation process, in a dynamic process of interaction with the evaluation team. The perception of the results generated by the joint action of almost 700 local structures involved systematic analyses organized into four domains of evaluation, 12 evaluation dimensions and 25 evaluation questions.
The set of techniques is vast and includes, among others: a multi online survey (with more than 2,500 participants), 1 focus group, 4 steering groups, 2 Delphi processes, 20 case studies, benchmarking, typological analysis and a collaborative platform. Keywords: Participatory Evaluation; Social Network; Institutional Partnership; Multi-Method Perspective; Evaluation Governance;


PS2 Strand 2

Poster session


P 004

Company, society and evaluation in Latin America. New concepts, new solutions.
Since the year 2000, different evaluation tools have begun to emerge throughout Latin America. These tools assess the degree of adequacy of the private sector and the correct management of corporate social responsibility. They have a self-diagnostic feature which, in a first stage, endows each company with an assessment as well as an awareness of its corporate social responsibility at an internal and external level. They are structured on the basis of a number of target groups or areas of interest with which both the private sector and the companies relate (stakeholders). At the same time, these groups or areas have a number of quantitative and qualitative indicators, evaluation scales and balanced scorecards enabling companies to acknowledge the quality of the links established with these areas or groups. Also, these indicators have been adapted to and agree with the UN Global Compact Principles and the Millennium Development Goals. In fact, in many instruments the indicators in line with the Global Compact Principles and the Millennium Development Goals constitute a core of guiding principles to adhere to. The use and application of these tools require three kinds of processes from the private sector and companies: individual and participatory application processes; technological processes for data processing and the issuing of corporate reports; and, finally, processes for interactive communication, knowledge, evaluation, dialogue, negotiation and planning of the impact of their actions, the seeking of solutions and the setting of goals for further improvement. In Latin America these evaluation tools are the starting and generally applied point for each company to address social responsibility. Once companies analyse the information provided, they choose priority areas of work and capacity development. The tools are also utilized as regular monitoring and evaluation (M&E) instruments. These evaluation tools have been created by different Latin American organizations which promote corporate social responsibility in the private sector through the use of evaluation tools for capacity-building in this area. Among these organizations there are company foundations, business schools and chambers. This work presents the different evaluation tools. It then shows the target groups or areas proposed by each tool and the indicators to measure and evaluate. Subsequently, it reviews the stages of an evaluation process carried out inside the company. Finally, it makes proposals to improve the evaluation instruments and the building of evaluation capacity in the private sector and the companies. Keywords: Corporate social responsibility; Evaluation tools; Regional development organizations; Stakeholders; Methodologies and practices;
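As a schematic illustration only (the stakeholder areas, indicator names, scale and scores below are invented and not taken from any of the Latin American tools described), a self-diagnostic scorecard of this kind boils down to aggregating indicator scores per stakeholder area.

# Schematic, invented example of aggregating self-diagnostic indicator scores per stakeholder area.
scores = {  # each indicator rated by the company on a hypothetical 0-4 scale
    "employees":   {"training_policy": 3, "work_life_balance": 2, "health_and_safety": 4},
    "community":   {"local_hiring": 2, "social_investment": 1},
    "environment": {"waste_management": 3, "energy_efficiency": 2},
}

for area, indicators in scores.items():
    average = sum(indicators.values()) / len(indicators)
    print(f"{area}: average score {average:.1f} of 4 across {len(indicators)} indicators")
# The resulting profile would then feed dialogue, goal-setting and follow-up M&E rounds.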


P 005

The real time evaluation and the ex-post evaluation: Two evaluation models to compare
B. Mineo 1, E. Cabezas 2
1 Save the Children, International department, Madrid, Spain
2 Universidad Complutense de Madrid, Social sciences, Madrid, Spain

The main purpose of this poster is to show a comparative analysis between the Real Time Evaluation (RTE) and the Ex-post Evaluation in order to illustrate the main differences and similarities of these two ways of assessing an object's merit and worth. The Real Time Evaluation is a study normally used during a major humanitarian crisis and its main objective is to provide feedback and orientation to those executing and managing the humanitarian response intervention, in a participatory way and in real time. The Ex-post evaluation of a humanitarian action intervention is a retrospective study and its main objective is to look back at the recent past, to assess the object's merit and worth and to extract learning for the future. Nevertheless, these two models are evaluations and as such they share many common patterns. Some of the major difficulties faced by those responsible for an RTE, and also by the evaluators, are the definition and the validation of the appropriate methods as well as the aspects to be studied. This poster will illustrate some relevant questions such as: the pertinence of using all the criteria set by the DAC within the OECD; the pertinence of including some other criteria; the intended use and focus of these two different evaluation models; the timeframe; and the audience. The final objective of this work is to provide a clear picture illustrating the purpose, methods and timeline for each evaluation model in order to orient the choice of evaluators and the intended users when they have to assess the need and the relevance of doing a Real Time Evaluation in a specific context. The poster will be developed using a real case: the humanitarian response undertaken by Save the Children in Ivory Coast from April to September 2011. The analysis will be done by comparing the Real Time Evaluation done in July 2011 with the Ex-post Evaluation of this humanitarian response completed in June 2012. Keywords: Real time evaluation; Ex-Post evaluation; Evaluation criteria; Comparative analysis; Intended users;



P 006

An Industrial Engineering (IE) Model for Valuing Outcomes in Service and Government Organizations: A new concept for Evaluators in a networked society?
S. Premakanthan 1

Symbiotic International Consulting Services (SICS), Ottawa Ontario, Canada

The evaluation community all over the world is engaged in creating a wealth of performance results evidence to value the outcomes of its investments in society, to improve the quality of life of people, especially the underprivileged. The Industrial Engineering (IE) approach is one of the tools evaluators can use in planning the gathering of evidence to inform risk-informed, evidence-based decisions. The application of this tool in the planning and design stage of programs, projects and initiatives will ensure readiness for evaluating effectiveness (valuing outcomes and benefits). The investment of time and money in ensuring the readiness of programs for evaluation is the focus of this paper. The Industrial Engineering (IE) approach to defining development performance results leads to the design of the 10th Order Performance-Metric Structure. The structure is an orderly approach to developing quantitative controls for managing government and service organizations. The approach defines the statement of performance and determines the performance metrics that are to be counted. It is both a top-down and a bottom-up approach and lays the foundation for measuring, monitoring, evaluating, controlling and reporting on organizational and development policies, programs, projects, initiatives and activities. The hierarchical approach to defining organizational performance metrics links the upper strategic management control system with the lower operational management control systems. It is a framework for developing credible strategic integrated performance information (SIPI) for decision-making. It would satisfy the performance results management information needs of an organization, donors, central agencies, parliamentarians and citizens. Further, the IE approach, rather than being competitive with other approaches to good management in the public and private sectors, is integrative. It is a framework for the application of other management improvement initiatives, tools and techniques. What gets clearly defined gets measured, monitored, evaluated and reported for risk-informed, evidence-based decision making. Keywords: Industrial Engineering; Performance Results; 10th Order Performance Results Structure; Valuing Outcomes; Planning;


P 007

Performance indicators as novel monitoring tools for the official Network of European Reference Laboratories for animal health and food safety
C. Vasilescu 1, M. Pletschette 1

European Commission, Evaluation Unit DG Health and Consumers, Brussels, Belgium

Background: Performance indicators (PIs) are increasingly used in public administrations not only to ensure accountability and to link resources to results, but also to promote a developmental learning culture in the organisation (S. Goh, 2012). The purpose of applying such PIs is to improve processes within organisations by defining minimal but also desirable quality standards via benchmarking. The European Union (EU) supports a network of 44 Reference Laboratories (EURLs) in various technical areas of microbial and chemical contaminants in order to improve and harmonise the methods applied by the National Reference Laboratories the EURLs liaise with. We present here, as a follow-on effort to an outsourced evaluation of the EURL network (Civic Consulting, 2011), an exercise consisting of the development of process-related PIs in order to empower the EURLs and the European Commission to ensure continued high-quality outputs and results. Objectives: PIs will be introduced over the next two years, allowing the Commission services to establish a reasoned and evidence-based correlation between the laboratory-submitted annual work programmes (AWPs) and the allocated budget of the EURL. While the use of these indicators is primarily intended for Commission services, EURLs themselves may also wish to employ the system for their own appropriation process requirements. Methods: To foster grassroots ownership, a critical success factor in making performance measurement systems more effective (S. Goh, 2012), we have pilot tested the list of identified indicators with the EURLs. A standard operating procedure for the use of the indicators was developed and disseminated, and feedback was collected during two workshops and the EURLs' annual general meeting. Two types of PIs were formulated: A. Activity-based indicators were defined in order to establish a rational link with the various laboratory operations under the EURL mandate. They naturally draw on the tasks and duties described in the AWPs and the legal provisions establishing the network, e.g. the number of tests performed or seminars given. B. Qualification indicators reflect the additional ability of the individual laboratories to perform above the minimum level captured by CEN ISO 17025 (all EURLs are accredited according to this generic standard) and are linked back to the various described activities of the laboratories. Results: The PIs defined form a monitoring tool allowing the performance of tasks and duties captured in AWPs to be traced along two axes: comparatively between laboratories, and individually for each laboratory year on year. The aim of the exercise is not to make funding dependent on automatic adherence to these indicators, but to ensure that the activities described in the AWP are indeed reasonably funded. The methods are equally expected to consolidate the network via emulation. References: http://www.civic-consulting.de/ Goh S. C. (2012) Making Performance Measurement Systems More Effective in Public Sector Organizations, Measuring Business Excellence, Vol. 16, Iss. 1. Keywords: European Union; Performance measurement; Reference laboratories; Performance indicators; Public management;
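The indicator set itself is not reproduced in the abstract; the sketch below is only a schematic illustration of how activity-based indicators could be traced against AWP targets per laboratory and task, with laboratory names, tasks and figures invented.

# Schematic, invented example of tracing activity-based indicators against annual work programmes (AWPs).
planned = {   # AWP targets per laboratory and task
    "EURL-A": {"proficiency_tests": 4, "training_seminars": 2},
    "EURL-B": {"proficiency_tests": 3, "training_seminars": 3},
}
achieved = {  # activity reported at year end
    "EURL-A": {"proficiency_tests": 4, "training_seminars": 1},
    "EURL-B": {"proficiency_tests": 3, "training_seminars": 3},
}

for lab, targets in planned.items():
    for task, target in targets.items():
        rate = achieved[lab][task] / target
        print(f"{lab} {task}: {achieved[lab][task]}/{target} ({rate:.0%} of AWP target)")
# Keeping one such table per year would support the second axis: each laboratory year on year.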

PS3 Strand 3

Poster session


P 008

The success story of a community of practice for evaluation students


M. Gervais 1

Université Laval, Quebec, Canada

This presentation will trace the key moments of a pilot project to establish a community of practice (CoP) for a group of students recently admitted to Université Laval's master's program in community health, in the evaluation concentration. A dozen students were placed at that time under the responsibility of one professor, which produced a major supervisory challenge in real time, with significant constraints in terms of means, resources, and time. The CoP was then considered a promising solution for maximizing student learning opportunities and helping them through to graduation. These students, who were novices in evaluation and had widely varying educational, geographic, and cultural backgrounds, spent two to three years within this community. This presentation will explore 1) the initial context and the main implementation steps, 2) practices for motivating actors and building cooperation, 3) influencing factors, 4) management strategies, and 5) strengths, vulnerable areas, and sustainability conditions for the CoP. It will explore more specifically how the CoP grew and organized itself, from its first steps to its co-construction with students, and how it adapted over time to handle emerging issues. It will present the vision that was espoused and the participative, enabling approach that was used. It will clarify the roles and responsibilities of the overseeing professor and the students. It will show how the CoP helped students gradually develop key competencies through repeated exposure to experimental situations in real time that were strongly tied to evaluators' daily practice requirements. For example, students took part in CES student annual competitions, presented papers at the CES and AfrEA conferences, contributed as volunteers and session managers at the CES and SQEP conferences, experimented, under supervision, with editing manuscripts (CJPE) and scientific papers (AEA), and were awarded travel grants and funds from recognized grant organizations for projects in developing countries. More locally, they publicly presented their research protocols and their master's results at Université Laval and in the practice communities concerned, mentored students recently admitted to the CoP, and critically read chapters from CoP colleagues' theses. The take-away from these experiences is the CoP's great vitality and its members' regular investments toward making it a stimulating community that enhanced learning. Through them we should also see proof of the promising nature of a learning path developed initially based on individual needs, then strengthened through group learning. This presentation will give CoP students the floor. They will be able to express their pride in belonging to the community, their personal contributions to the success of the CoP, as well as its positive effects on their careers as evaluators (sense of personal confidence, understanding of the evaluation field, outside recognition of their competencies, ability to obtain contracts and evaluation positions, etc.). In conclusion, this presentation will show how the CoP grew from a simple pilot project launched by one professor into a community that reaches all students currently specializing in evaluation. This highly positive outcome may prove a source of inspiration for universities currently seeking to optimize their student training resources. Keywords: Community of practice;

PS4 Strand 4

Poster session


P 009

Evaluation in global health: Where are we now? What have we learned?


M. Gervais 1

Université Laval, Quebec, Canada

This presentation will critically examine the author's global health evaluation experience in developing countries, mainly in French-speaking areas. It will first look at certain gains, important advances, and strengths derived from current practice, both from the evaluator's viewpoint and from the viewpoint of the field and the actors and organizations concerned by the evaluation projects. It will then take a more in-depth look at certain strategic and operational aspects of global health evaluative practices, which are currently perceived as suboptimal. These will be discussed in terms of deficits and priority investment areas for improvement, which are as follows: 1. Legitimacy deficit in connection with evaluation projects: For what purpose? For whom? Which values should be favored? What agenda? Using which success criteria? Prescriptive role of sponsors; greater interest in effectiveness and efficiency than in relevance; political interests, hijacking and poor use of evaluation results; etc. 2. Credibility deficit: Evaluation subject difficult to define in a consensual way (complexity, systemic component, social overlap [actors, partners], context dependency); scientific rigor required (Which approach? Which method? Difficulty showing the effects [attribution vs. contribution], threats to validity, necessary contextualization of the results, innovation, etc.) 3. Feasibility deficit: The need to work in real time and therefore in a dynamic, evolving context that is restricted in terms of resources, field conditions, and time; negotiation and adaptation; management of risks inherent to the evaluation process; consideration of human, political, and ethical factors; etc. 4. Influence deficit: Underutilization of knowledge produced by the evaluation (results and lessons learned from the process), strategies for using the results, weak link to the decision/management cycle, ability to translate the results using a stakeholder approach, identification of effects/consequences including for whom, ability to produce the anticipated change, etc. 5. Sustainability deficit: Instability, changing stakeholders (including sponsors) and political/administrative priorities, fragility of resources and the structure, irregular maintenance of gains/services after the project, difficulty sustaining partnerships, etc. Several courses of action arising from the areas identified will then be spotlighted in view of improving evaluative practice in the context of global health and ensuring its success. Keywords: Global health;


P 010

Evaluation of information access in health service delivery in Ukraine


A. Goroshko 1

Kiev International Institute of Sociology, Kyiv, Ukraine

Involvement of citizens and civil society groups in health care system decision-making in Ukraine is extremely low. Limited awareness of patients' rights and experience in rights protection, poor access to information about revenue and expenditure, and an undeveloped system of complaints and feedback are the main factors which obstruct citizens' control and participation in health care system development. Furthermore, the Ukrainian health care system is deeply corrupted: government expenditure on health care accounts for only 57 % of total health care expenditure (WHO, 2009), while free-of-charge health care services are declared a constitutional right of every citizen. In 2011 the Ukrainian parliament passed the law on access to public information, which gives community representatives the opportunity to access information about the procurement and financing of the health care system. Meanwhile, initiatives to create oversight boards in hospitals and health care departments have started. These two circumstances provide an opportunity for patients' empowerment and involvement. In 2011 an evaluation of information access in the health care system was conducted. The research methodology included legislation analysis, semi-structured interviews with stakeholders and structured interviews with patients. The evaluation covered a wide range of topics, but here we focus on access to information about budgeting and health care system monitoring. The health care system in Ukraine is characterized by experts as underfunded and the allocation of resources as inefficient. The survey showed that although citizens have a right to information on health care facilities' receipt of state financing and private donations, the law is indistinct about what kind of budget information should be provided. This uncertainty allows authorities to provide information which is incomparable with other health care facilities' data and difficult to understand and analyse. Access to information about private donations is even more difficult: health care facilities have a right to establish charity funds and obtain additional resources from patients. Experts claimed that these funds are usually registered to third persons and it is almost impossible to track whether the money actually reaches the clinics. Furthermore, authorities frequently describe information about the allocation of budgets and private donations as unnecessary for the average citizen. Concerning the audit and monitoring of health care delivery, there is a great difference between government and citizen monitoring, although there is a legal basis for both types of monitoring. Government monitoring is conducted on a regular basis and is strictly regulated by law. The system of government monitoring concentrates only on outputs; there is very little interest in the results of treatment and its efficiency. Concerning citizens' monitoring, very few experts could name such an initiative, and all of them were related to HIV (as there are international and local donors who are interested in the development of citizens' involvement in health care decision-making). In conclusion, we can say that despite the existence of a legal framework which assures citizens' right to access information and to monitor hospitals and clinics, in practice there are no conditions that enable citizens' involvement in decision-making and civil society control. Keywords: Health service delivery; Ukraine; Patients empowerment; Evaluation of information access;

P-011

Data triangulation for a multicultural HIV prevention intervention focused on indigenous people in Guatemala
D. Guzmán 1, J. E. Zelaya 2, B. Trejo Valdivia 3, J. A. Matute 4, M. E. Pena Reyes 5

1 INSP, MCs Biostatistics, Cuernavaca, Mexico; 2 UNAIDS, Coordinador de ONUSIDA para Guatemala y Mexico, Guatemala, Guatemala; 3 INSP, Dirección Evaluación y Encuestas, Cuernavaca, Mexico; 4 CIENSA, Biostatistics, Guatemala, Guatemala; 5 ENAH, Posgrado Antropología Física, Distrito Federal, Mexico

As part of the monitoring and evaluation activities of the Project for Vulnerable Populations, implemented by UNAIDS-Guatemala (2010) with the financial support of the Netherlands Embassy in Guatemala, a data triangulation study was conducted to assess the results of a national campaign for the prevention of the HIV epidemic among the Mayan population of eastern Guatemala, translated from Spanish into Mayan languages (Kakchiquel, K'iche' and Mam). The study used the baseline information as the primary source, with the 2002 census as a secondary source, complemented by semi-structured interviews with key informants of the NGO ASECSA and telephone interviews with the radio station managers. The population reached by the prevention campaign in the project's area of influence was estimated at 155,760 people, while the radio spots' signal reaches 546,588 people counted in the census. This research is one of the few exercises in evaluating media campaigns specialised in multicultural prevention of the HIV epidemic in Central America. Keywords: Buffer zone; Radio spots; Influence area; HIV; Prevention;
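A minimal sketch (not part of the study) of the coverage arithmetic implied by the figures above; the variable names are illustrative, and the calculation simply relates the estimated reached population to the census population living within the broadcast area.

```python
# Illustrative only: relates the two population figures reported in the
# abstract; not taken from the study's own analysis.
reached = 155_760        # estimated population reached by the campaign
signal_area = 546_588    # census (2002) population within the radio signal area

coverage = reached / signal_area
print(f"Estimated share of the signal-area population reached: {coverage:.1%}")
# -> roughly 28.5 %
```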


P-012

Study to assess the capacity for the care and prevention of HIV/AIDS in Guatemala
D. Guzmán 1, J. A. Matute 2, M. E. Pena Reyes 3

1 INSP, MCs Biostatistics, Cuernavaca, Mexico; 2 CIENSA, Biostatistics, Guatemala, Guatemala; 3 ENAH, Posgrado Antropología Física, Distrito Federal, Mexico

The survey was conducted after Hurricane Stan (2005), covering the government sector and the non-governmental institutions involved in the response to the HIV/AIDS epidemic in Guatemala. The financial support of The Global Fund to Fight AIDS, Tuberculosis and Malaria started in 2004, and the study was performed at the end of 2005. The research objective was to identify those institutions that could participate in the intensification of prevention and integrated care activities among vulnerable groups in priority areas of Guatemala. The survey made it possible to know the number of personnel devoted to care, advocacy, prevention and counselling for STI/HIV/AIDS. Among other findings, the principal activities of the participating institutions were identified as condom use demonstration and sampling for STI/HIV/AIDS diagnosis. Keywords: HIV/AIDS;

P-013

How to prepare the external evaluation of EU Agencies: the case of regulatory agencies supervised by DG Health and Consumers
M. Horodyska 1

1 European Commission DG Health and Consumers, Unit Audit and Evaluation, Brussels, Belgium

Background: The European Union regulatory agencies are independent bodies with their own legal personality which have been set up in successive waves in order to meet specific needs. They are funded by the EU budget as well as by the direct receipt of fees or payments. DG Health and Consumers has three agencies under its policy supervision: the European Food Safety Authority (EFSA), the European Medicines Agency (EMA) and the European Centre for Disease Prevention and Control (ECDC). The founding regulations require regular external evaluations of the Agencies' achievements, and all DG Health and Consumers agencies have already been evaluated once (EFSA in 2005, ECDC in 2008, EMA in 2010). Unfortunately, those relatively costly exercises did not always deliver the expected quality of results with relevant recommendations on the improvements needed. Purpose: Our analysis of the previous Agency evaluations indicated that the opportunity to learn strategic messages was not fully taken, partly because of the inadequate design of these assessments. Too much involvement of the Agencies' Management Boards in drafting the Task Specifications, insufficiently precise evaluation questions, conservative use of evaluation tools, biased selection of stakeholders and superficial analysis were seen as reasons why the evaluation results could not be fully used. It was also found that the special EU regulatory context in which the Agencies function was not taken into account. Methods: For this reason, a new approach was introduced for the preparation of the Task Specification for the second, ongoing ECDC evaluation, to make sure the design reflects the specificity of assessing such organisations. This preparatory process consisted of the following steps: 1. comprehensive review of all activities of the ECDC; 2. construction of a logigram giving a graphic presentation of the logical relationships between inputs, activities, outputs and outcomes; 3. selection of the priority ECDC tasks/chains of activities for evaluation, including horizontal issues; 4. weighting of the ECDC tasks on a four-point scale: critical, very important, moderately important and less important; 5. assignment of the chosen tasks to the formative and summative categories of evaluation areas; 6. development of evaluation questions with a focus on qualitative and quantitative aspects. Conclusions: As this comprehensive preparatory approach was used for the first time in our Directorate General, it required additional effort from both sides, the Centre and the Commission, to ensure a constructive dialogue and the involvement of all actors concerned. The mapping exercise and the reconstruction of the agency's logigram gave a comprehensive picture of the Agency's activities, whereas prioritising helped in choosing the areas that should be evaluated. Disadvantages included that not all steps were sufficiently understood in the process, and extensive explanations and support were required from the Commission's side. The second external evaluation of the ECDC is still ongoing; results and conclusions will be available by the end of 2012. Keywords: EU agency; Logigram; Evaluation design;
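A minimal sketch of the kind of task-weighting and shortlisting step described above; the task names, ratings and cut-off are hypothetical illustrations, not the actual ECDC prioritisation.

```python
# Hypothetical illustration of step 4 (weighting tasks on a four-point scale)
# combined with the selection of priority tasks; task names and the cut-off
# are assumptions for the example, not the real ECDC assessment.
WEIGHTS = {"critical": 4, "very important": 3,
           "moderately important": 2, "less important": 1}

tasks = {
    "task A": "critical",
    "task B": "very important",
    "task C": "moderately important",
    "task D": "less important",
}

# Shortlist the tasks rated 'critical' or 'very important' for the evaluation.
shortlist = [name for name, rating in tasks.items() if WEIGHTS[rating] >= 3]
print(shortlist)   # -> ['task A', 'task B']
```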


P-014

Meeting on the Screen: An opinion about scientific evaluation put into practice in remote panel meetings
J. Latikka 1, K. Valosaari 1, J. Hakapää 1, J. Törnroos 1, S. Illman 1

1 Academy of Finland, Helsinki, Finland

The Academy of Finland is the prime funding agency for basic research in Finland. Its research units (Natural Sciences and Engineering; Health; Biosciences and Environment; and Culture and Society) were tasked with investigating whether remote panels would be a suitable option for the evaluation of research proposals and, more importantly, how panel members would view such remote panels. The Academy of Finland received more than 3,000 applications in its September 2011 call, most of which were evaluated in expert panels. After the panel meetings, the panels were asked whether they would have participated had the panels been arranged as remote panels. The questions were: 'Would the panel members have accepted an invitation to participate in a remote panel meeting (such as a video meeting)?' and 'Do the panel members have access to the facilities needed for participating in remote meetings?'. The questions were put to all the panels of all four research units, totalling 67 panels with 463 evaluators from around 20 countries. The general opinion was that the panelists strongly prefer face-to-face meetings; some would even have declined to participate in a remote meeting. The panelists foresaw many problems with remote meetings, for example that they would work only with a few participants who were already familiar with each other. Video was seen as necessary, since with phone meetings the dynamics suffer even more and it is difficult to take part in the discussion; Skype was also seen as an option. Remote meetings would carry several technical risks, including interruptions with participants dropping out of the panel, and privacy and security issues were also raised. It was felt that remote meetings can handle simple discussions, but not evaluation. Remote meeting facilities could, however, be used to consult external experts about specific proposals during the panel meeting. In addition, chairing a remote panel would be extremely difficult and challenging, especially with the larger panels. A remote meeting would also result in less concentration, and it was felt that people are more committed when they travel to a panel meeting; the travelling time also gives time to familiarise oneself with the results of the preliminary reviews. Face-to-face meetings have better dynamics, in particular meetings that last a full day or two. Only a small share of the panelists had access to video meeting facilities, and timing and reserving the facilities for a full day or two would have caused problems. In addition, the use of such equipment is expensive and could in some cases cost more than travel and hotels. In sum, panel meetings as they are organised now were seen as cost-effective when considering the amount of funding available, and the possible savings from remote panels would be small. Most importantly, there was a fear that the quality of the evaluation would suffer, lowering the chances of a fair evaluation. Keywords: Peer review; Panel meeting;

P-015

Addressing the challenges of evaluating a free, multilingual online training course aimed at professionals in low- and middle-income countries
M. Spires 1

1 Johns Hopkins Bloomberg School of Public Health, Institute for Global Tobacco Control, Baltimore, USA

Since 2006, Learning from the Experts, an online training course offered by the Johns Hopkins Bloomberg School of Public Health, has provided free tobacco control training to policy makers, researchers, educators and the general public in 175 countries. The training is offered in the six official United Nations languages and consists of 10 modules covering multiple aspects of tobacco control. It aims to help participants advocate for, develop and implement effective tobacco control interventions in their jurisdictions. Because of the diverse nature of the training participants (professions, languages, geographic regions, etc.) and the broad array of tobacco control topics, this training poses some unique evaluation challenges. An evaluation was conducted to assess the effectiveness of Learning from the Experts by addressing the following questions: (1) How long does it take users to complete a module and/or the entire training? (2) How effective is the training in meeting the needs of the users? (3) How can the content of the training be improved? (4) How can the delivery of the training be improved? and (5) How can the number of registered participants who complete the training be increased? To address these evaluation questions, we identified participants who enrolled in the training course during a predetermined eight-month period. The 439 participants who enrolled during this period represented 62 countries and 29 languages. They were categorised into three groups based on how far they had progressed through the training: Group A consisted of those who enrolled during the study inclusion period and did not start the training within two months of enrollment; Group B consisted of those who had registered and started the modules, had not yet finished the entire training, and had been inactive on the site for at least two months; and Group C consisted of those who had completed the entire training. Each participant received a survey appropriate for their group, i.e. their level of progression through the training, in the language in which they took the training. The survey asked for input that would help answer the study's evaluation questions. Additionally, reports from the online training website were run using the site's analytic capabilities and analysed to investigate how long it takes users to complete each module and the entire training. Results from module-specific evaluations completed by training participants were also downloaded from the training website and analysed to assist in the evaluation. This presentation will share results from this evaluation, which serves as a model for similar efforts in evaluating multifaceted online learning mechanisms. Keywords: Online learning; Tobacco control; Evaluation; Low- and middle-income countries;
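A minimal sketch (not from the study) of how the A/B/C grouping rule described above could be expressed; the field names, date handling and the two-month threshold representation are assumptions for illustration only.

```python
from datetime import date, timedelta
from typing import Optional

TWO_MONTHS = timedelta(days=61)   # rough stand-in for "two months"

def assign_group(enrolled: date, started: Optional[date], completed: bool,
                 last_active: Optional[date], today: date) -> Optional[str]:
    """Return 'A', 'B' or 'C' following the categorisation in the abstract."""
    if completed:
        return "C"                                  # finished the entire training
    if started is None:
        # enrolled but did not start within two months of enrollment
        return "A" if today - enrolled >= TWO_MONTHS else None
    if last_active is not None and today - last_active >= TWO_MONTHS:
        return "B"                                  # started, unfinished, inactive
    return None                                     # still active; not surveyed

print(assign_group(date(2011, 1, 10), None, False, None, date(2011, 6, 1)))  # -> A
```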


P-016

A participatory approach for evaluating hospitals' humanization in Italy: the challenge of a common framework for twenty-one regional health systems
A. Tanese 1, R. Metastasio 1

1 Cittadinanzattiva, Agency of Civic Evaluation, Roma, Italy

The poster describes the piloting of an evaluation methodology built from the civic point of view for carrying out a participatory process aimed at assessing humanization in all Italian hospitals. The project is deemed interesting not only for the methodology adopted but also because of its goal, that is, implementing a common process within 21 regional health systems. This requires the creation of a vertical network among the different institutional levels involved (Ministry, Regions, hospitals) and of a horizontal one among public health authorities and civic organizations. The first part of the poster focuses on the general framework in which the project is set, clarifying the approach and the innovative nature of the project. Building on ten years of experience in the field of civic evaluation methods, particularly through the Civic Audit programme (presented at the EES Conference 2010), Cittadinanzattiva, through its Agency for Civic Evaluation, accepted the proposal of collaboration from the Italian National Agency for Regional Health Systems to develop and pilot a national programme to evaluate the degree of humanization in Italian hospitals. The second section explores in detail the method and practice of the evaluation process adopted (questions, actors, tools, steps of citizens' participation). Synthetic information is given about the concept of humanization in the healthcare sector, the elements investigated through 140 indicators, the number of hospitals evaluated, the civic organizations involved and the time schedule of the project. Some peculiar characteristics of the project represent its added value: firstly, the filter of a civic perspective in the elaboration of the evaluation scheme and in the concrete selection and formulation of indicators; secondly, the attempt to decompose the concept of humanization into elements to be assessed through a quantitative method; thirdly, the involvement of all Regions, which accepted to share the same evaluation instruments even while working in a condition of autonomy; finally, the real participation of citizens, which takes place not only in the gathering of data but in all phases of the process, including preparation of the activities, discussion, validation and interpretation of the results, critical review of the instruments tested, and the sharing of proposals for improving the quality of health services. The third part of the poster is dedicated to early reflections on the impacts that such an experiment can produce at the local level, while considering the difficulties and opportunities of a process that involves Local Health Authorities and citizens together. The challenge consists in implementing a national system for evaluating the degree of hospitals' humanization which is based, at the same time, on a common scheme containing comparable indicators and on participatory evaluation processes defined at the local level in a flexible way: a strategy that combines elements of methodological rigour with the capacity to adapt to the context. This is the only way to make evaluation effective within a health system characterised by high levels of autonomy of the single Regions. Given the experimental nature of the project presented, sharing reflections and eliciting comments are also objectives of the participation in the Conference. Keywords: Civic evaluation; Hospitals humanization; Participatory assessment process;

P-017

Impact of HIV health education on knowledge and attitude of migrant workers and their family members in Armenia
Kristine Ter-Abrahamyan 1

1 World Vision, Armenia

I am working as a Senior Design, Monitoring and Evaluation Officer at World Vision Armenia. I have a Master's degree in Sociology and have been involved in development work for almost 10 years. I have had the chance to take an active role in program-to-policy evaluations and sustainability issues in Armenia. I am also a member of the International Program Evaluation Network and have not only participated in many evaluation-related trainings and conferences but also led evaluation-related trainings in Armenia. Background: According to national statistics, the vast majority of HIV cases registered in Armenia during the last five years were among migrant workers. Labour migrants often practise high-risk behaviour due to lack of contact with family, access to SW and low awareness of the risk of HIV. To address mobility-exacerbated HIV, World Vision Armenia in 2008 initiated a three-year project in five selected communities that traditionally have a high seasonal migration rate. Communities in the project area included migrant workers, family members and schoolchildren, who were provided with intensive health education and information on HIV and AIDS through various channels of communication from March 2008 till February 2011. Three communities with a similar migration and demographic profile were selected in the same region to serve as controls; no specific HIV-related intervention was conducted there by any agency. Methodology: A quasi-experimental design was applied to evaluate the impact of the programme. The level of HIV knowledge and attitudes among migrants and their family members was measured in intervention versus control communities at the end of the project. Data were collected using a structured self-administered questionnaire. Results: At baseline, the intervention and control arms were identical in socio-demographic characteristics, HIV knowledge and attitudes. At post-test, both adult respondents in the intervention arm (n = 221) compared with controls (n = 217) and youth (n = 94 and 62 respectively) were more likely to be tolerant of people living with HIV (58.8 % in the intervention arm vs. 14.4 % in the control arm, p < 0.001). Youth in the intervention arm also exhibited better comprehensive HIV knowledge (41.5 % vs. 9.7 %, p = 0.001). However, at post-test the level of comprehensive knowledge was very low in the adult population in both arms: 17.2 % in the intervention communities and 16.6 % in the control communities, with no significant difference (p = 0.866).
Conclusion: The project was more effective in changing adult attitudes towards people living with HIV/AIDS (PLWHA) than in changing their knowledge. The youth population, on the other hand, showed greater knowledge change than adults, and young people might therefore be considered the low-hanging fruit for prospective community-based interventions. Keywords: Quasi-experimental; HIV; Impact; Attitude change; Knowledge change;
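A minimal sketch (not the authors' analysis) showing how the reported adult tolerance comparison can be checked with a Pearson chi-square test; the counts are back-calculated from the rounded percentages, so the figures are approximate.

```python
# Illustrative re-check of the reported comparison: 58.8 % tolerant adults of
# n = 221 (intervention) vs. 14.4 % of n = 217 (control). Counts are
# back-calculated from rounded percentages, so this only approximates
# the authors' own test.
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

tolerant_int = round(0.588 * 221)   # ~130 tolerant adults, intervention arm
tolerant_ctl = round(0.144 * 217)   # ~31 tolerant adults, control arm

chi2 = chi2_2x2(tolerant_int, 221 - tolerant_int,
                tolerant_ctl, 217 - tolerant_ctl)
print(f"chi-square = {chi2:.1f}")
# ~93, far above the 10.83 critical value (p = 0.001, 1 df),
# consistent with the reported p < 0.001.
```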

P-018

Characterising the EPODE Logic Model: Unraveling the past and informing the future
M. Van Koperen 1, T. Visscher 2, C. Summerbell 3, S. Jebb 4, M. Romon 5, J. M. Borys 6, J. Seidell 1

1 VU University, Health Sciences, Amsterdam, Netherlands; 2 Windesheim University, Research Centre for the Prevention of Overweight Zwolle, Zwolle, Netherlands; 3 Durham University, School of Medicine and Health, Durham, United Kingdom; 4 MRC Human Nutrition Research, Elsie Widdowson Laboratory, Cambridge, United Kingdom; 5 Lille 2 University Hospital, Faculty of Medicine, Lille, France; 6 Proteines, Paris, France

Context: EPODE (Ensemble Prévenons l'Obésité Des Enfants, or Together Let's Prevent Childhood Obesity) is a community-wide, multi-level intervention approach to implement effective and sustainable strategies to prevent childhood obesity. What once started as a small community-based intervention evolved, through local enthusiasm, practical expertise and scientific knowledge, into a large-scale and centrally coordinated approach. Since 2004, EPODE has been implemented in over 500 communities in six countries in Europe and is now being implemented in communities in Australia, Mexico and Canada. Evaluation of the community programs has been centrally coordinated and mostly outcome focused. The objective of this study is to gain insight into the key elements of EPODE and to represent these in a schematic model supportive of program evaluation. Methods: Over 50 EPODE documents were collected, among them process manuals, press releases, project plans, toolkits, website pages and one scientific article. Subsequently, semi-structured interviews were held with a) nine local project managers and governmental representatives involved in the planning and delivery of EPODE programs in four French communities and b) three national coordinators. Data were analysed qualitatively and placed in a multi-level logic model. With input from international experts and national coordinators, this was scaled down to a linear overarching logic model covering EPODE principles and key elements. Findings: The primary outcome of all EPODE programs is a healthy weight for children, which is expected to be achieved by promoting healthy eating and physical activity. Activities to stimulate healthy food intake and physical activity are implemented in all EPODE communities. Other activities carried out include advocacy and community capacity building aimed at changing the social and physical environment of the child. Program input consists of materials and training provided by central coordination, and the establishment of the local organisation. Emphasis is placed upon four essential program principles: gaining political commitment, setting up public-private partnerships, using social marketing techniques, and monitoring and evaluation. However, the implementation of these key principles and the activities offered differ per community and seem to depend on local needs and the available resources. Conclusions: Although EPODE seeks uniformity in the design and implementation of community-wide programmes, local implementation and evaluation depend on the needs of the stakeholders and the availability of resources. The model constructed here can therefore be considered an overarching logic model describing the key elements of the approach. By offering this overarching logic model, local needs and resources are valued and program management remains flexible in local program refinement without losing sight of the program principles. The EPODE model can be used a) to support future implementation of EPODE in different communities and to guide the construction of a locally tailored logic model, b) as a tool for the engagement of stakeholders, by visualising where and why involvement is important, and c) to guide the construction of an evaluation framework in which the named elements can be discussed as key indicators of program development and success. Keywords: Multi-level; Health; Community-based; Logic model; Qualitative research;

PS5 Strand 5

Poster session

P-019

From supervisor to evaluator: change in the perception of the school inspector's role in Poland
J. Kolodziejczyk 1

1 Jagiellonian University, Institute of Public Affairs, Kraków, Poland

In 2009 a reform of pedagogical supervision in schools was carried out in Poland. As a consequence of this change, external evaluation became one of the essential instruments for examining the quality of school work. External evaluation is carried out by those who until now exercised supervision based on checking schools' compliance with the law. The implementation of external evaluation created the need to change attitude from that of supervisor to that of evaluator/researcher, which seems to be one of the basic elements of a successful introduction of the reform. The poster presents the results of research concerning the change in how the evaluator's new role is perceived by teachers and school principals as well as by the school inspectors themselves. Keywords: Professional role of the evaluator; External evaluation;

P-020

The Use and Impact of Evaluation in Government Programmes: the tourism governance perspective
M. A. Seabi 1

1 National Department of Tourism, Policy Development & Evaluation, Pretoria, Republic of South Africa

The value added by evaluation practice is also significant in government governance and intervention processes. Evaluation assists government in assessing the effectiveness of its interventions and whether they offer value for money, and thus whether programmes or interventions should be continued and whether funds should be withdrawn, maintained or increased. The tourism sector has become a critical economic contributor worldwide: it is labour intensive, offers wide business opportunities and drives the transformation of living standards. The different spheres of government have varied responsibilities along the tourism value chain, and it is therefore important to evaluate performance, mainly of interventions operating at local government level through state agencies and directly funded programmes. These interventions include skills development and entrepreneur support. The study indicates that, given the increased economic status of the tourism sector across world economies, frequent evaluation of the various interventions is needed to ensure the success of programmes and accelerated transformation. In this study a scenario approach to evaluating government intervention in the sector was followed, and four scenarios were arrived at. The 'no gain' scenario depicts a status that a government and its stakeholders will not wish to maintain, as it is neither transformational nor developmental in nature; it is like taking one step forward and three steps back. The 'plausible' scenario is one of common buy-in, commitment and inclusive participation among diverse groups, which begins to give hope of good things (i.e. the 'better life for all' public service goal). The third scenario, the 'promised land', is what every country across the globe aspires to in terms of attracting high-spending tourists in high numbers on a sustainable basis, increasing tourism's contribution to GDP and to job creation. The 'no go zone' scenario is a state of failure in which a country with potential cannot leverage the tourism economy because of little or no investment by either the private or the public sector. As context, the paper presents the economics of tourism at the global level, for competitors and for South Africa, in relation to the number of tourists over the past five years, and also gives the projected potential for the next few years in line with the scenarios. Keywords: Programmes; Interventions; Evaluation; Scenarios; Governance; Tourism economy; Transformation

P-021

The relational quality evaluation: how social services change. The case of a neuromotor disability center
M. Moscatelli 1

1 Italy

The work relates to the evaluation of the relational quality of the services offered by Foundation Ariel's childhood neuromotor disability centre. The objectives are the methodological and theoretical study of the relational, reflexive evaluation approach, with attention to the connections between evaluation and the organizational dimensions of services to individuals and families. This multidimensional and multi-vision quality model refers to some macro-organizational dimensions of the relational well-being generated by a social service: efficiency, effectiveness, quality of integration, and the quality of ethical purposes. This reflective and participatory evaluation perspective is an opportunity to capture, describe and assess the relational common good generated by a service to individuals and families, which is strategic for familiarization and customization in a context of changing social needs. In addition, relational quality evaluation pays particular attention to the transformative and morphogenetic potential of evaluation. Methodologically, the analysis combined quantitative and qualitative methods. Semi-structured interviews were conducted with operators of the Centre, and a detailed analysis of the documentation was carried out. The work led to the construction of a questionnaire of 35 variables, to which 167 beneficiary families responded. Besides univariate analysis of the questionnaire results, synthetic indexes of some critical dimensions of relational quality were constructed.
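A minimal sketch of one common way to build such a synthetic index (an equal-weighted mean of related questionnaire items rescaled to 0-1); the item scores and the 1-5 scale are hypothetical, not the study's actual construction.

```python
# Hypothetical illustration of constructing a synthetic index from
# questionnaire items; the scale and equal weighting are assumptions,
# not taken from the study.
def synthetic_index(item_scores, low=1, high=5):
    """Average of items rescaled from [low, high] to [0, 1]."""
    rescaled = [(s - low) / (high - low) for s in item_scores]
    return sum(rescaled) / len(rescaled)

# e.g. one family's answers to three items of a 'relational quality' dimension
print(round(synthetic_index([4, 5, 3]), 2))   # -> 0.75
```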


List of Speakers
A
Abdelhamid, D. M. Abma, T. Abokyi, S. A. Achonu, A. Adesoba, T. P. Aguilar, M. J. Ahonen, P. Ajaari, J. Alain, M. Albrecht, M. Ali, M. Alvarez, P. Alvira, F. Al-ZouBi, L. Amanatidou, E. Amoatey, C. Andersson, J. Andrade, E. Andreo, P. Anttila, H. Antunes, P. Ardenfors, M. Ariton, V. Askim, J. Atmavilas, Y. Autio, A. Awasthi, I. C. Azzam, T. 28 163 39 212 210 114 127 39, 160 142 37 160 148 114 194 49 164 116 35 122 234 149 154 216 5 44, 75 234 55 9, 156 Bohm, J. Bohni Nielsen, S. Bojsen, D. S. Bopanna, K. Borek, A. Borys, J. M. Bosse, C. Btel, A. Boutylkova, E. Brander, S. Brans, M. Briton, D. Brousselle, A. Brozaitis, H. Brunnhuber, U. H. Brusset, E. Bugnion De Moreta, C. Burban, F. Bussmann, W. Bustelo, M. Butter, J. Byeon, S. C. Bylin, S. 11 95, 112 228 105 101 243 202 138 164 27 31 202 29 177 78 13 85, 85 151 203 28, 47, 57 165 83 146

D
Da Re, R. Dabelstein, N. Dahler-Larsen, P. Das, P. Dauwerse, L. Davies, I. Davies, R. De Alteriis, M. De Groot, D. De Kemp, A. De Laat, B. De Lancer Julnes, P. De Peuter, B. De Schepper, G. De Smet, L. De Wal, M. Denis, J. L. Dente, B. Denvall, V. Dewachter, S. Dijkstra, G. Dillon, B. Dimoulas, K. Dodd, J. Doeving, E. Donaldson, S. Donlevy, V. Doucette, A. Druet, C. Duckworth, S. Duggan, C. Duque, A. Duranceau, M. F. Dyrkorn, K. 183 197 33, 133, ,157, 174 80 163 121, 157, 194 69, 87 33 80 99 225 95 31 199, 216 224 31 29, 106, 144, 195 62 89 176 99 98 50 225 5 109, 156, 174 225 14 106, 195 203 126, 221 35 106, 144, 195 128

C
Cabezas, E. Callerstig, A. C. Cancedda, A. Canoy, M. Cardona, A. Caspari, A. Castro, V. Cesaro, L. Chambille, K. Champagne, F. Chaudhary, M. Cheng, A. L. P. Christiansen, L. M. Cirillo, C. Clerckx, E. Codern, N. Contandriopoulos, D. Cooksy, L. J. Coryn, C. Costandache, A. Coulibaly, M. Coutinho, T. Crespo, R. Cruz, M. M. Cummings, H. Cummings, R. Cunha, C. L. Cunningham, S. 236 148 225 225 34 103, 179 35 183 80 29 13 113, 135 228 127 119 34 29, 29 77 88 27 50 35 34 35 177 19 35 9

B
Bakkali, A. Balas, G. Baoy, R. Bara Bresolin, A. Barboza, M. Barbulescu, I. G. Barr, J. Batliwala, S. Bayley, J. S. Becquart, A. Befani, B. Belen, S. Bell, P. Bensch, G. Berlinska, J. Bertolini, L. Betrisey, D. Beukers, E. Biria, D. Birolo, L. Blanco, F. Boaz, A. Bognar, F. 70 71 61 36 36 186 155 28 19 27 69, 87, 140, 183 110 17 16 6 229 114 229 12 183 114 226 71

E
Ebling, G. Echavez, C. R. Eerikainen, J. Elbe, J. Elbe, S. Elkins, C. Ellis Ruano, G. Ellis, R. Eriksson, L. Es, Y. Espinosa, J. Etherington, A. Etta, F. 6 104 181 37 37 208, 222 92, 111 40 224 80 20, 187 88 15, 188

F
Faulkner, W. Felloni, F.
172 65


Ferreira, A. Fischl, I. Flanagan, A. E. Forss, K. Foster, E. M. Franco, S. Frey, K. Furubo, J.

151 230 8 41, 69, 87, 116, 157 191 187 144 207, 215

Hobson, K. Hogard, E. Hojlund, S. Holmstrm, M. Holroyd, G. Holvoet, N. Horga, I. Horodyska, M. Hummelbrunner, R. Huyse, H. Hyytinen, K.

88 39, 40 158 154 66 176, 195 62 240 41, 205 80, 164 123, 178

Khan, U. Khanna, R. Khodko, N. Kim, S. J. Kind, S. Kinda, D. Kinda, O. Kirkpatrick, R. Kitchener, M. Kivipelto, M. Kliest, T. Ko, Y. S. Koirala, B. K. Kolodziejczyk, J. Kontinen, T. Korhonen, N. Kort, M. Kosheleva, N. Koskela, V. Kouakou, K. S. A. Krapp, S. Kravchuk, I. Krueger, S. Krupnik, S. Kuboja, N. M. Kuehnemund, M. Kuji-Shikatani, K. Kumar, A.

30 75 168 83 129 211 212 192 138 117 197 83 92 243 217 207 31, 117 82 100 50 166, 218 168 22 158 211 119 94 172

G
Gaaff, A. Gabriele, B. Gaffey, V. Gaithi, L. M. Garau, G. Garcia, M. Gargani, J. Gassler, H. Georgieva, E. Gervais, M. Gildemyn, M. Giordano, B. Godinho, M. C. R. Goetsch, E. Gopal, S. Goroshko, A. Goyal, R. S. Greene, J. Gregorowski, R. Grydeland, M. Gueye, E. Guijt, I. Guitard, A. Gutheil, M. Gutierrez, J. P. Guzmn, D. 165 46 131 173 42 74 101, 174 230 53, 122, 200 238, 238 97 108 232, 235 96 200 168, 239 13 112 155 161 66 80 219 30 231 239, 240

I
Iddrisu, Z. Illman, S. Immonen, S. Inberg, L. Ion, O. A. Irving, P. Issaka Herman, T. Istenic Starcic, A. Ives, C. 160 241 77, 121 195 186 20 104 115 202

J
Jaboma, A. Jacob, S. Jacobsen, G. Jacobson, T. Jansson, P. Januszkiewicz, A. Jrvinen, T. Jean, M. C. Jebb, S. Jeffrey, P. Jobin, L. Johnsen, A. Jonas, M. Joppert, M. P. 211 207 160 124 146 168 217 175 243 225 106, 195 5, 95 191 81 98 33 228 124

L
La Rovere, R. Laakso, S. Lahdelma, T. Lahera, A. Lahey, R. Lahteenmaki-Smith, K. Laitinien, I. Lamari, M. Lao, P. A. Latikka, J. Lavizzari, L. Lawrenz, F. Leah Wilfreda, P. Leach, B. Leal, A. Lee, H. Leeuw, F. Leeuw, F. L. Lehtonen, M. Lemire, S. Lennie, J. Lienart, L. Ligero, J. A. Lindgren, L. Lindholm, K. Ling, A. Lisack, G. Liu, L.
170 232 232 114 95 124 47 175 96, 230 38, 241 65 234 91 62, 73, 152 35 151 10 109, 215 181 10, 153 92 213 187 163 148 42 140 184

H
Hakap, J. Hallin, G. Hamel, S. Hametner, M. Hanberger, A. Hanemaayer, D. Hansson, F. Hartwig, F. Hatry, H. P. Hawkins, P. Hay, K. Hearn, S. Heider, C. Heirman, J. Hejgaard, T. Hellstern, G. M. Helming, K. Henry, G. Henttinen, A. Herczeg, B. Hermans, L. 241 191 142 145 163 5 59 122 95 156 28, 221 134 192, 214 42 112 81 49 8, 109 58 71 9

Julie, P. Julnes, G. Juravle, C. Jussila, J.

K
Kaabunga, E. Kachero, B. Kalpazidou Schmidt, E. Kalyta, A. Kandamuthan, S. Karinen, R. Karjalainen, P. Kser-Erdtracht, J. Kasprzak, T. Kazi, M. Keene, M. Kempinsky, P. Khadr, A. Khan, F. Khan, N. A. 15 104 49 168 196 207 117 138 101 34 224 190 172 231 219

Loikkanen, T. Lomena-Gelis, M. Loureiro, A. Loveridge, D. Luli, F. Lumino, R. Lundgren, H. E. Luo, L. P. Lusthaus, C. Lth, K.

178 102 35 130 209 79, 145 58, 110, 193 184 121 140

Monsen, L. Monteiro, S. Moreira Dos Santos, E. Morell, J. Moro, G. Morra Imas, L. Morrow, N. Moscatelli, M. Moss, M. Mouafo, D. Mouque, D. Mouqu, D. Mugerwa, F. Mukhebi, D. Mukherjee, V. Mukuna, V. Mulholland, C. Mull, S. Muller, P. Mun, M. Mura, M. Murredi, T. Mustafa, G. Muthoo, A.

223 149 35 213 60 175 106, 205 244 204 90 122 166 128 198 75 198 116 126 204 151 169 24 24 65

P
Padilla, P. Palenberg, M. Palyvoda, L. Parry, A. Parry-Crooke, G. Pavlovaite, I. Pelkonen, A. Pena Reyes, M. E. Penaloza, R. Pennington, M. Pennisi, A. Pereira, C. Perez Yarahuan, G. Perrin, B. Pesce, F. Pesonen, P. Peters, J. Peters, M. Pfeffer, M. Pibilova, I. Picciotto, B. Piirainen, K. A. Pilkaite, A. Pinto, D. Y. Pletschette, M. Poate, D. Podems, D. Pokorski, J. Polastro, R. Pollermann, K. Pradhan, A. Premakanthan, S. Pretorius, J. Prval, J. 189 67 168 73 26 70, 228 178 239, 240 74 193 202 232, 235 50 41, 58, 133 62 83, 84 16 225 17 11 214 207 170 42 237 214 20 189 22, 23, 65 124 75 52, 68, 150, 237 200 106

M
Maarse, A. Mackellar, L. Macpherson, N. Magheru, M. Maier, L. Major, K. Mantouvalou, K. Maria, B. Mariussen, A. O. Marjnovity, A. Marra, M. Martinez, A. Martinuzzi, A. Masanja, H. Mateu, P. Matturi, K. Matute, J. A. Maurer, M. Mayne, J. Mazzeo Rinaldi, F. McGuire, M. McKenzie, A. McNulty, J. Meer van der, F. B. Melloni, E. Mendez, E. Mentz, M. Metastasio, R. Mettepenningen, E. Meyer, W. Michaelis, C. Mickwitz, P. Midtkandal, I. Mihalache, R. Mikhailova Ph.D., L. Mikos, M. Milet, H. Mineo, B. Mineur, E. Miriam, J. Mishra, A. Mistry, S. Mitxelena, C. Mizerek, H. Mock, N. Mohammed, E. 164 151 137 141 122 71 20 110 146 59 41 48 145 39 88 106 239, 240 185 69, 87 45, 152 94 176 30 117 62 75 198 242 183 37, 47 62, 73, 152 224 146 170 77 115 140 236 146 9 105 137 114 139 106 126

N
Naidoo, I. Nakrosis, V. Nanda, P. Nelson, J. Neuhaus, B. Ngonyani, S. Nicewinter, J. P. Nielsen, S. B. Nieminen, M. Niklasson, L. Nilsen, K. Noordeloos, M. Nordesj, K. Norn, M. T. Nurmi-Koikkalainen, P. Nuzzaci, A. Nyabade, G. 170, 214 6 105 137 140 211 96, 230 228 123 120 96, 230 198 226 59 234 103 196

Q
Qudisat, R. Quested, T. 194 73

R
Raivio, T. Rakhmatullin, R. Rkklinen, M. Ramage, I. Ramage, K. Raue, P. Reddy, S. Reichborn-Kjennerud, K. Reidl, S. Renders, C. Renger, R. Reynolds, M. Ricci, A. Rickli, M. Rider Smith, D. Rigout, F. Rich, M.
84 146 114 96, 230 96, 230 124 28 32 230 53 46 41, 91 49 179 66 127 130

O
Ofir, Z. Ohler, F. Ojukwu, M. O. Okojie, M. U. Oksanen, T. ONeil, G. Oosi, O. Oturu, M. Owen, J. Owusu-Agyei, S. Oyinloye, O. Ozar, K. Ozymok, I. 126, 143, 162, 198 48 147 185 123, 123 64, 102, 213 207 206 19, 43 39 187 160 168


Rijneveld, W. Rillo Otero, M. Rios, A. M. Rodriguez-Bilella, P. Rogers, P. Rohmer, B. Romon, M. Rotondo, E. Rbke, C. Ruddy, A. Rugh, J. Ruiz, K. Rutten, R. Ruvalcaba, A.

11, 80 36 74 227 134 119 243 28 138 204 57, 162 8 80 231

S
Sabharwal, N. S. Sagmeister, E. Saikkonen, P. Salimova, B. Sandberg, B. Sanopoulos, A. Santos, R. Sanz, B. Sardeshpande, N. Saunders, M. Schetinina, O. Schirru, L. Schnaut, G. Schroeter, D. Schuit, J. Schwandt, T. Schwartz, R. Seabi, M. A. Secco, L. Segerholm, C. Segone, M. Seidell, J. Sen, A. Servoll, E. Sette, C. Sever, R. Sharma, S. Sheikh, S. Shori, R. Sibanda, A. Siegel, S. Silva, P. Silvestrini, S. Silvia, V. Simons, H. Simwa, V. Singh, R. Singh, S. Sirtori, E. Skov, M. Slinger, J. Smith, G. Smith, L. 44 22 89 176 146 73 45, 162 28, 44, 193 75 57 168 42 124 88 53 33, 126 153 244 183 174 28, 110, 156 53, 243 64 161 134 134 75 230 54 15 233 155 166, 179 98 47, 133 16 137 88 98 228 9 132 203

Smits, P. 106, 118, 144, 195 Soares, A. R. 170 Soberon Alvarez, L. 209 Sderberg, S. 146 Soumahoro, F. 50 Spano, A. 45 Speer, S. 207, 215 Spires, M. 241 Stadter, C. 144 Stahl, L. 154 Stake, R. 157 Stame, N. 19, 47, 58, 69, 87, 133, 157 Stanculescu, M.S. 141 Stern, E. 41, 69, 87, 157, 214 Steurs, G. 189 Stevens, K. 134 Stockmann, R. 47, 179 Stoliarenko, K. 168 Strong, M. 101 Sudarshan, R. 44, 75 Summerbell, C. 243 Svoboda, D. 11, 76

Valosaari, K. Valsamis, D. Van Bunnen, P. van den Berg, R. D. van der Borg, W. Van Der Meer, F. B. Van Dinter, N. Van Koperen, M. Van Nuland, E. Van Ongevalle, J. van Overbeke, M. Van Soetendael, M. van Twist, M. Van Zorge, R. Varela, O. Vasilescu, C. Vassimon, M. Vedung, E. Vela, C. Velazquez, C. Verdonk, P. Verlet, D. Viljamaa, K. Visscher, T. Vogel, I. Von Der Mosel, K. Vougioukalou, S. Vyamana, V. G.

241 78 108 214 163 31 160 53, 243 225 164 108 55 31 80 100 62, 237 35 109, 157 108, 182 114 163 199, 216 84, 232 243 143 218 226 211

T
Tacchi, J. Tagle, L. Tamondong, S. D. Tanese, A. Tapella, E. Tarnay, V. Tarsilla, M. Te Brmmelstroet, M. Teigland, G. Teisen, M. Teixeira, P. Temmink, C. Ter-Abrahamyan, K. Tetenyi, T. Thao, M. Tharanga, G. Toderas, N. Trnroos, J. Toulemonde, J. Tourres, E. Traoret, I. Trejo Valdivia, B. Tsapogas, J. Tsygankov, D. Tvrdonova, J. Twist van, M. 92 60, 202 25 242 131, 227 168 162 229 161 199 149 164 242 71 234 141 186 241 10, 26 151 156 239 48 47, 72 55, 73 117

W
Walia, S. Walloth, C. Wally, N. Walsh, P. Walton, M. Washington-Sow, L. Weber, T. Wegner, S. Weiner, R. Weremiuk, A. Widmer, T. Williams, A. Williams, H. Wimmer, J. Windau, B. Wu, W. Y. C. 105 233 162 39 222 160 70, 228 8 39 159 144 204 135 55 174 135

Y
Yakeu Djiam, S. E. Yaron, G. Yong-Protzel, I. York, N. 129 155 193 110

U
Uitto, J. Umi, H. Uusikyla, P. Uzunkaya, M. 27, 170, 182 56 233 51, 166

Z
Zaal, F. Zaveri, S. Zelaya, J. E. Zhang, B. Zintl, M.
11 221 239 153 67

V
Vad, T. Vahlhaus, M. 59 179


List of Keywords
10th order performance results structure . . . . . . 237

A
Abstract . . . . . . 206 Abuja . . . . . . 185 Accountability . . . . . . 25, 19, 50, 65, 85, 127, 138, 158, 181 Accountability principle . . . . . . 186 Accounting . . . . . . 165 Adaptation . . . . . . 212, 16 Adaptive systems . . . . . . 45 Added value . . . . . . 161 Additionality . . . . . . 189 Administrative capacity building . . . . . . 6 Administrative databases . . . . . . 8 Adult education . . . . . . 92 Adult social work . . . . . . 89 Africa . . . . . . 212 African Union . . . . . . 187 Agency . . . . . . 127 Agricultural research . . . . . . 77 Agriculture . . . . . . 212, 155 Agriculture Recovery . . . . . . 231 Aid . . . . . . 176 Aid Effectiveness . . . . . . 210 Albania . . . . . . 209 Alignment . . . . . . 164 Alternative explanation . . . . . . 153 Arab spring . . . . . . 70 Art evaluation methods . . . . . . 81 Art festivals . . . . . . 81 Asian Cities Climate Change Resilience Network (ACCCRN) . . . . . . 17 Assesment of primary health services . . . . . . 50 Attitude change . . . . . . 242 Attribution . . . . . . 45, 134 Audit institutions . . . . . . 117 Audit tool . . . . . . 39

B
Basic research . . . . . . 146 Best practices . . . . . . 50 Biodiversity Conservation . . . . . . 108, 182 BSC performance management system . . . . . . 151 Budget . . . . . . 165 Budget support . . . . . . 99 Budgeting . . . . . . 164 Buffer zone . . . . . . 239 Building Ownership for Implementing Evaluation Recommendations . . . . . . 65

C
Campaigns . . . . . . 102 Capacities . . . . . . 219 Capacity . . . . . . 200 Capacity Building . . . . . . 66, 196 Capacity development . . . . . . 179, 210 Capacity strengthening . . . . . . 198 Capacity-building . . . . . . 81 case studies . . . . . . 98, 119 Case study . . . . . . 227 Casual mechanisms . . . . . . 79 Causal attribution . . . . . . 134 Causality . . . . . . 10 Central America . . . . . . 218
Certified Coffee . . . . . . 108 Challenges . . . . . . 16, 98 Civic evaluation . . . . . . 242 Civil society . . . . . . 80 Civil Society Organizations . . . . . . 97 Client relationship . . . . . . 30 Climate . . . . . . 155 Climate Change . . . . . . 212, 16, 17, 17 Cloud Computing . . . . . . 135 Cluster . . . . . . 129 Cluster policy . . . . . . 129 Coaching . . . . . . 194 Coastal policy . . . . . . 9 Code of conduct . . . . . . 47 Cohesion policy . . . . . . 62, 120, 130, 166, 232 Collaboration . . . . . . 204 Collaboration with experts . . . . . . 160 Collaborative audit approach . . . . . . 123 Collaborative Evaluation . . . . . . 75 Collaborative governance . . . . . . 72 Collaborative learning . . . . . . 64 Colombian health system . . . . . . 74 Commercial development . . . . . . 199 Communication . . . . . . 102 Communication and trust intervention . . . . . . 229 Communications . . . . . . 92, 111 Community . . . . . . 25 Community of practice . . . . . . 238 Community-based . . . . . . 53, 243 Comparative analysis . . . . . . 89, 236 Comparative Case Studies . . . . . . 108 Comparison group . . . . . . 9 Comparison groups . . . . . . 177 Competences . . . . . . 115 Competition between schools . . . . . . 223 Competitiveness . . . . . . 113 Complex . . . . . . 46 Complex interventions . . . . . . 134 Complexity . . . . . . 41, 119, 153, 164, 172, 205, 224, 227 Complexity theory . . . . . . 13, 132, 222 Comprehensive evaluation . . . . . . 34 Conclusions . . . . . . 206 Conditional Cash Transfers . . . . . . 172 Conference evaluation . . . . . . 213 Conflict evaluation . . . . . . 13, 221 Conflict Prevention . . . . . . 119 Connectivity . . . . . . 52 Consulting approach . . . . . . 30 Contemporary art . . . . . . 81 Context . . . . . . 184 Contexts . . . . . . 34 Contribution . . . . . . 90 Contribution analysis . . . . . . 13, 73, 143, 153, 153, 154, 198 Control . . . . . . 32 Corporate social responsibility . . . . . . 236 Cost benefit analysis . . . . . . 228, 228 Cost benefit analysis as a learning tool . . . . . . 229 Cost benefit analysis process . . . . . . 229 Counterfactual . . . . . . 166 Countries in transition . . . . . . 72 Creative evaluation. . . . . . . 134 Creative method . . . . . . 100 Credentialing . . . . . . 94
Credible evidence . . . . . . 112 Crisis . . . . . . 215 Crisis response . . . . . . 172 Criteria . . . . . . 89 Critical systems heuristics . . . . . . 91 Cross cultural evaluation . . . . . . 197 Cross-border programmes . . . . . . 62 Crowdsourcing . . . . . . 9, 11 CSO development effectiveness . . . . . . 11 Culture . . . . . . 30, 149 Customer satisfaction . . . . . . 85 Czech Republic . . . . . . 39

D
Data protection . . . . . . 203 Decentralisation . . . . . . 219 Deductive analysis . . . . . . 22, 23 Democratic accountability . . . . . . 32 Democratic evaluation . . . . . . 112 Demonstrated competence . . . . . . 114 Design of Evaluation System . . . . . . 116 Design of evaluations . . . . . . 5 Developing Countries . . . . . . 50, 196, 230 Developing country . . . . . . 20 Development . . . . . . 90, 98, 101, 110, 188, 200 Development aid . . . . . . 58 Development assistance . . . . . . 25 Development communication programs . . . . . . 92 Development cooperation . . . . . . 218 Development co-operation . . . . . . 103, 217 Development Effectiveness . . . . . . 76 Development Evaluation . . . . . . 137, 187 Development evaluation in Kenya . . . . . . 173 Development intervention . . . . . . 129 Development NGOs . . . . . . 80 Development plans . . . . . . 104 Developmental Evaluation . . . . . . 43, 91 Developmental state . . . . . . 191 Deviations . . . . . . 160 Different methods . . . . . . 66 Digital Data Gathering . . . . . . 106 Disability . . . . . . 234 Dissemination . . . . . . 12 Dissemination of evaluation results . . . . . . 140 Distance learning . . . . . . 26 Distance online university . . . . . . 202 Documentary analysis . . . . . . 195 Documentation . . . . . . 12 Domestic services . . . . . . 78 Downward accountability . . . . . . 11

E
Earthquake . . . . . . 24 Economic . . . . . . 6 Education . . . . . . 101, 128 Educational development . . . . . . 202 Effect . . . . . . 146 Effect analysis . . . . . . 228 Effectiveness . . . . . . 8, 13, 34, 70, 138, 211, 216 Effectiveness Evaluation . . . . . . 117 Efficiency . . . . . . 67, 199, 216 Efficiency and productivity . . . . . . 161 Efficiency evaluation . . . . . . 83 Elderly sector . . . . . . 163 Employment effects . . . . . . 71 Empowerment . . . . . . 101, 198, 217 Energy efficiency . . . . . . 17 Energy policy . . . . . . 17 Energy security . . . . . . 17
Enterprise network . . . . . . 232 Environment . . . . . . 182 Environmental evaluation . . . . . . 224 Environmental research . . . . . . 49 Epidemiology . . . . . . 34 Equal interpretation . . . . . . 160 Equality . . . . . . 110 Equitable distribution of resources . . . . . . 8 Equity . . . . . . 91, 110, 141 Equity-based evaluation . . . . . . 221 ERDF and territorial cohesion . . . . . . 108 Ethical . . . . . . 25 Ethics . . . . . . 15, 47 Ethnographic evaluation . . . . . . 226 EU . . . . . . 6, 225, 226 EU Agency . . . . . . 240 EU Communication . . . . . . 138 EU requirements . . . . . . 169 Europe wide . . . . . . 92, 111 European . . . . . . 189 European Commission . . . . . . 92, 111 European Environmental Evaluators Network . . . . . . 224 European expertise . . . . . . 62 European region . . . . . . 146 European Regional Development Fund . . . . . . 62 European Research Area . . . . . . 145 European Social Fund . . . . . . 6 European Territorial Co-operation . . . . . . 62 European Union . . . . . . 119, 120, 237 Evaluating policy influencing . . . . . . 143 Evaluating Research . . . . . . 126 Evaluating research excellence . . . . . . 88 Evaluation . . . . . . 212, 12, 13, 20, 24, 32, 35, 40, 46, 48, 50, 53, 73, 78, 83, 98, 98, 99, 100, 109, 129, 138, 150, 152, 154, 164, 164, 182, 184, 185, 188, 200, 209, 210, 234, 241 Evaluation approaches . . . . . . 76 Evaluation as a Profession . . . . . . 202 Evaluation associations . . . . . . 162 Evaluation capacity . . . . . . 168 Evaluation capacity building . . . . . . 128, 202 Evaluation Capacity Development . . . . . . 156, 169, 218 Evaluation Criteria . . . . . . 113, 236 Evaluation culture . . . . . . 207, 207 Evaluation database . . . . . . 5 Evaluation design . . . . . . 134, 177, 240 Evaluation education and training . . . . . . 26 Evaluation ethics . . . . . . 141 Evaluation findings . . . . . . 64 Evaluation framework . . . . . . 53, 59, 64 Evaluation frameworks . . . . . . 88 Evaluation governance . . . . . . 163, 232, 235 Evaluation challenge . . . . . . 191 Evaluation challenges . . . . . . 134 Evaluation in education . . . . . . 115, 139 Evaluation in Government . . . . . . 196 Evaluation knowledge . . . . . . 103 Evaluation methodology . . . . . . 102 Evaluation methods . . . . . . 73, 134, 222 Evaluation methods and practice . . . . . . 103 Evaluation methods and practices . . . . . . 49 Evaluation model . . . . . . 112 Evaluation models . . . . . . 139 Evaluation network . . . . . . 82 Evaluation of education . . . . . . 114, 114, 114, 114, 114 Evaluation of faculty . . . . . . 114 Evaluation of information access . . . . . . 239 Evaluation of legislation . . . . . . 203 Evaluation of research infrastructure . . . . . . 48
Evaluation of sector policy programmes . . . . . . 185 Evaluation of the implementation of a political statement . . . . . . 197 Evaluation of the use of financial resources in health . . . . . . 74 Evaluation of training . . . . . . 175 Evaluation Power . . . . . . 52, 68 Evaluation practice . . . . . . 31, 85, 102 Evaluation practices . . . . . . 36 Evaluation presentation . . . . . . 64 Evaluation questions . . . . . . 205 Evaluation Recommendations . . . . . . 65 Evaluation reporting . . . . . . 64 Evaluation Results . . . . . . 196 Evaluation scheme of multi-phase . . . . . . 151 Evaluation system . . . . . . 163 Evaluation systems . . . . . . 207 Evaluation tensions . . . . . . 114 Evaluation theories . . . . . . 29 Evaluation through social media . . . . . . 175 Evaluation tools . . . . . . 233 Evaluation Training . . . . . . 156, 208 Evaluation usage . . . . . . 64 Evaluation use . . . . . . 29, 118, 158, 170, 219 Evaluation utilization . . . . . . 144 Evaluations tools . . . . . . 236 Evaluation-synthesis . . . . . . 103 Evaluative process . . . . . . 121 Event evaluation . . . . . . 213 Evidence based decision making . . . . . . 144 Evidence based social dialogue . . . . . . 50 Evidence-based . . . . . . 189 Evidence-based policy . . . . . . 218 Evidence-based policy-making . . . . . . 144 Evidence-based programming . . . . . . 147 Ex ante evaluation . . . . . . 5 Ex-ante evaluation . . . . . . 166 Experience of presence . . . . . . 100 Experiment . . . . . . 144 Ex-Post Evaluation . . . . . . 236 Ex-Post Project Appraisal . . . . . . 166 External accountability . . . . . . 177 External and internal evaluation . . . . . . 101 External Evaluation . . . . . . 60, 243 External validity . . . . . . 71

Functions of evaluation . . . . . . 163 Funding decisions . . . . . . 38 Funding of enterprises . . . . . . 230 Future of Evaluation . . . . . . 174

G
Game theory . . . . . . 9 Games industry . . . . . . 83 Gender . . . . . . 20, 104, 187, 188 Gender and development approach . . . . . . 187 Gender budgeting . . . . . . 104 Gender discrimination . . . . . . 149 Gender equality . . . . . . 149 Gender-sensitive evaluation . . . . . . 20 General equilibrium model . . . . . . 71 Ghana . . . . . . 97 Global . . . . . . 172 Global health . . . . . . 238 Global loans . . . . . . 78 Global warming . . . . . . 17 Globalisation . . . . . . 70 Glocalized projects . . . . . . 134 Goal achievement . . . . . . 161 Goal attainment . . . . . . 5 Governance . . . . . . 187 Governance and support to Democratization . . . . . . 151 Governance through networks . . . . . . 37 Government . . . . . . 62, 152, 165 Government action . . . . . . 50 Grand challenges . . . . . . 49 Green jobs . . . . . . 166 Guidelines . . . . . . 15

H
Health . . . . . . 34, 53, 243 Health evaluation . . . . . . 106 Health impact evaluation . . . . . . 144, 195 Health policy . . . . . . 39 Health sector . . . . . . 195 Health service delivery . . . . . . 239 Health Systems Strengthening (HSS) . . . . . . 147 Healthcare . . . . . . 29, 230 Healthcare innovation . . . . . . 226 Higher education . . . . . . 186 High-frequency data . . . . . . 172 HIV . . . . . . 239, 242 HIV/AIDS . . . . . . 240 Homelessness . . . . . . 89 Horizontal objectives . . . . . . 191 Hospitals humanization . . . . . . 242 Human . . . . . . 188 Human and childrens rights . . . . . . 141 Human rights . . . . . . 110 Human Rights Evaluation . . . . . . 221 Human rights-based approach . . . . . . 187 Humanitarian . . . . . . 22 Humanitarian Aid . . . . . . 22, 23 Humanitarian Response . . . . . . 106, 205

F
Feedback . . . . . . 42 Fellowships . . . . . . 48 Feminist . . . . . . 20 Feminist Evaluation . . . . . . 75 Field of Evaluation . . . . . . 174 Field practice . . . . . . 222 Final evaluation . . . . . . 231 Finance . . . . . . 225 Financial and Economic Appraisal . . . . . . 51 Financial Appraisal . . . . . . 166 Financial intermediation . . . . . . 78 Financial resources for health . . . . . . 74 Findings . . . . . . 206 Food Safety . . . . . . 116 Food Security . . . . . . 205 Foresight . . . . . . 123, 178 Formative evaluation . . . . . . 131 Foundations . . . . . . 137 Framework loan . . . . . . 225 Framework Programmes . . . . . . 135 Frameworks . . . . . . 219 France . . . . . . 216 Freedom of Information . . . . . . 127
I
ICT . . . . . . 200 ICT Penetration . . . . . . 135 Impact . . . . . . 25, 31, 73, 131, 152, 242 Impact analysis . . . . . . 120 Impact assessment . . . . . . 166, 13, 49, 84, 92, 123, 178 Impact attribution . . . . . . 179 Impact evaluation . . . . . . 9, 10, 16, 23, 37, 64, 70, 73, 105, 151, 166 Impact Evaluation RD Monetary Transfers . . . . . . 231
Impact model . . . . . . 84 Impact of ethical codes and standards . . . . . . 47 Impact of evaluation . . . . . . 117 Impunity . . . . . . 44 Inclusion . . . . . . 135 Inclusiveness . . . . . . 113 Income generating activities . . . . . . 211 Independence . . . . . . 85 Independent Evaluation Group . . . . . . 176 Index . . . . . . 13 Indicator . . . . . . 84 Indicators . . . . . . 155, 165, 189, 199 Indicators based on administrative reports . . . . . . 50 Industrial Engineering . . . . . . 237 Industry promotion . . . . . . 96 Influence area . . . . . . 239 Influencing efforts . . . . . . 98 Information dissemination . . . . . . 85 Information management . . . . . . 140 Information society . . . . . . 203 Infrastructure . . . . . . 225 Initiative civil society capacity development . . . . . . 57 Innovation . . . . . . 112, 83, 124, 137, 169, 189, 189 Innovation Policy . . . . . . 113, 120, 178 Innovation Policy Evaluation . . . . . . 135 Innovative Methods . . . . . . 106 Innovativeness . . . . . . 100 Institutional evaluation . . . . . . 20 Institutional Partnership . . . . . . 232, 235 Institutionalisation . . . . . . 31 Institutionalization . . . . . . 35, 207, 219 Instrument design . . . . . . 115 Instrumental . . . . . . 30 Insurance . . . . . . 230 Integrated Approach . . . . . . 119 Intended Users . . . . . . 236 Interaction research . . . . . . 31 Interactive Strategies . . . . . . 43 Interest groups . . . . . . 169 Intergovernmental organisations . . . . . . 102 Internal Evaluation Units . . . . . . 60 International . . . . . . 172 International commitments . . . . . . 76 International development . . . . . . 88, 98, 143, 156, 177, 182, 208, 222 International development cooperation . . . . . . 185 International development evaluation . . . . . . 130 Internet community . . . . . . 234 Intervention logic . . . . . . 73, 131 Intervention paths . . . . . . 153 Introduction . . . . . . 206 Inventory . . . . . . 56

Lag . . . . . . 183 Language . . . . . . 127 Large scale evaluation . . . . . . 80 Latin America . . . . . . 227 Leader . . . . . . 124, 183 Learning . . . . . . 65, 158, 164, 181 Learning by evaluation . . . . . . 59 Learning evaluation . . . . . . 117 Learning from evaluations . . . . . . 179 Learning outcomes . . . . . . 114, 114 Legislation . . . . . . 6 Legitimacy . . . . . . 127 Lessons . . . . . . 231 Lifetime . . . . . . 152 LinkedIn . . . . . . 175 Livelihoods . . . . . . 155, 211 Loans for SMEs . . . . . . 78 Local authorities . . . . . . 225 Local climate change adaptation evaluation . . . . . . 102 Local Development . . . . . . 62, 175 Local government . . . . . . 219 Local job markets . . . . . . 71 Logic model . . . . . . 243 Logic models . . . . . . 119 Logigram . . . . . . 240 Longitudinal Evaluation . . . . . . 39 Loose coupling . . . . . . 127 Low- and middle-income countries . . . . . . 241

M
M&E . . . . . . 166, 179 M&E capacities . . . . . . 218 M&E capacity development . . . . . . 176 M&E system . . . . . . 176 M&E systems . . . . . . 195 M&E Tools . . . . . . 42 M&E use . . . . . . 176 Managed clinical network . . . . . . 40 Management . . . . . . 194 Management of research infrastructure . . . . . . 48 Managerial practice . . . . . . 35 Managing evaluations . . . . . . 59 Managing expectations . . . . . . 31 Management of Evaluations . . . . . . 60 Marginalised children . . . . . . 228 Marginalization . . . . . . 211 Marginalized groups . . . . . . 44 Matched group design . . . . . . 9 Measurable, Reportable, and Verifiable (MRV) System . . . . . . 56 Measure . . . . . . 117 Measurement . . . . . . 14, 53, 90 Measurement construct . . . . . . 115 Megaprojects . . . . . . 181 Mechanisms . . . . . . 34 MENA region . . . . . . 70 MET Study . . . . . . 101 Meta-analysis . . . . . . 100 Metaevaluation . . . . . . 170 Meta-evaluation . . . . . . 102, 103, 179 Method development . . . . . . 146 Method triangulation . . . . . . 146 Methodologies and practices . . . . . . 236 Methodology . . . . . . 163, 90, 99, 119, 129, 206 Methods . . . . . . 89, 184, 14, 37, 100, 152, 189 Methods for formative impact evaluation . . . . . . 8 Methods for summative impact evaluation . . . . . . 8 Mexico . . . . . . 50 Mitigation . . . . . . 16 Mixed methods . . . . . . 134, 230

J
Joint Evaluation . . . . . . 80, 197, 204 Joint evaluation theory . . . . . . 204 Joint multi-donor / system wide evaluation . . . . . . 23 Joint/System wide evaluation . . . . . . 22

K
Knowledge accumulation . . . . . . 191 Knowledge change . . . . . . 242 Knowledge production . . . . . . 89 Knowledge transfer . . . . . . 158 Knowledge translation . . . . . . 126

L
Labor Policies . . . . . . 149 Labour market reintegration . . . . . . 70

Mobile Technology . . . . . . 135 Modeling . . . . . . 153 Monitoring . . . . . . 24, 35, 42, 53, 152, 164, 164, 185 Monitoring and Evaluation . . . . . . 66, 81, 128, 207 Monitoring and evaluation (M&E) . . . . . . 208 Monitoring and Evaluation (M&E) Systems . . . . . . 56 Monitoring and evaluation use and influence . . . . . . 97 Monitoring culture change . . . . . . 121 Monitoring System . . . . . . 145 Monte Carlo Simulation . . . . . . 166 Multi-actor systems . . . . . . 9 Multi-disciplinary . . . . . . 73 Multi-level . . . . . . 53, 243 Multilevel governance . . . . . . 62 Multi-level governance . . . . . . 183 Multimedia . . . . . . 64 Multi-method perspective . . . . . . 232, 235 Multi-organizational evaluation . . . . . . 204 Multiple level evaluation . . . . . . 80 Multiple methods . . . . . . 205 Multiple stakeholders . . . . . . 219 Multi-site evaluation . . . . . . 112, 197 Multistakeholder evaluation . . . . . . 131

Opportunities . . . . . . 16, 98 Organisational culture . . . . . . 121 Organisational evaluation capacity building . . . . . . 194 Organisational learning . . . . . . 158 Organizational Assessment . . . . . . 11 Organizational learning . . . . . . 19, 92 Organizations . . . . . . 126 Outcome . . . . . . 12 Outcome indicators . . . . . . 76 Outcomes and impact . . . . . . 13

P
Pakistan . . . . . . 22 Panel meeting . . . . . . 241 Paris Declaration . . . . . . 195 Participation . . . . . . 104, 219, 11, 20, 202, 233 Participatory accountability . . . . . . 114, 114, 114 Participatory assessment process . . . . . . 242 Participatory evaluation . . . . . . 176, 232, 235 Participatory methodologies . . . . . . 226 Participatory methods . . . . . . 222 Participatory monitoring and evaluation . . . . . . 92 Participatory processes . . . . . . 17 Pastoralism . . . . . . 211 Paternity leave . . . . . . 228 Patient and public involvement . . . . . . 226 Patients empowerment . . . . . . 239 Payment by results . . . . . . 58 Peace Building . . . . . . 119 Peer review . . . . . . 114, 114, 11, 85, 146, 241 Performance . . . . . . 53, 127, 199, 200, 213 Performance audit . . . . . . 32, 161 Performance contracts . . . . . . 48 Performance improvement . . . . . . 29 Performance indicators . . . . . . 77, 237 Performance management . . . . . . 19, 95, 199 Performance measurement . . . . . . 95, 77, 237 Performance monitoring . . . . . . 95, 177 Performance Results . . . . . . 237 Personality disorder . . . . . . 40 Philanthropy . . . . . . 137 Planning . . . . . . 164, 164, 237 Plant breeding . . . . . . 234 PMS DD Poverty Consumption Well-being . . . . . . 231 Poland . . . . . . 158 Policy . . . . . . 6, 170 Policy analytical work . . . . . . 31 Policy and actor integration . . . . . . 62 Policy design based on evaluations . . . . . . 74 Policy Evaluation . . . . . . 113, 139 Policy implementation . . . . . . 9, 45, 59 Policy influence . . . . . . 143 Policy information . . . . . . 165 Policy of fostering and supporting women in science, engineering and technology . . . . . . 151 Policy simulation . . . . . . 71 Policy use . . . . . . 179 Politics . . . . . . 30, 222 Portfolio-evaluation . . . . . . 230 Post-doctoral . . . . . . 48 Power of Evaluation . . . . . . 68 Practice . . . . . . 15 Practices . . . . . . 234 Prevention . . . . . . 239 Private Finance in Infrastructure . . . . . . 51, 166 Process . . . . . . 46 Process consultation model . . . . . . 30 Process tracing . . . . . . 143 Professional and political confidence . . . . . . 114, 114, 114, 114

N
National Action Plan for Reducing Greenhouse Gas Emissions (RAN-GRK) . . . . . . 56 National association . . . . . . 96 National Evaluation Capacity . . . . . . 170 National Evaluation Society . . . . . . 176 Nationally Appropriate Mitigation Actions (NAMAs) . . . . . . 56 Neo-pluralism . . . . . . 72 Nepal . . . . . . 92 Network . . . . . . 106, 124, 129, 144, 195, 209, 210, 234 Network additionality . . . . . . 145 Network analysis . . . . . . 37, 232 Network and longitudinal analysis . . . . . . 81 Network evaluation . . . . . . 162 Network governance . . . . . . 207 Network mapping . . . . . . 38 Network of evaluators . . . . . . 36 Networked management model . . . . . . 82 Networking . . . . . . 209 Networking activities . . . . . . 118 Networking and civil society . . . . . . 151 Networks . . . . . . 42, 162, 182 New methodology . . . . . . 159 New public management . . . . . . 216 New technologies in Democracy projects . . . . . . 151 New use of Evaluation . . . . . . 199 NGO . . . . . . 106 NGOs . . . . . . 217 Non-governmental organisations . . . . . . 102

O
On-going . . . . . . 189 Ongoing evaluation . . . . . . 226 Online engagement . . . . . . 176 Online learning . . . . . . 241 Online qualitative evaluation . . . . . . 202 Online training . . . . . . 156 Outcome . . . . . . 52 Open consultation . . . . . . 50 Open Data . . . . . . 202 Open government . . . . . . 72 Open governments . . . . . . 127 Open source survey tools . . . . . . 202 Operational management . . . . . . 82 Operations Research . . . . . . 147

Professional confidence . . . . . . 114 Professional designation . . . . . . 94 Professional ethics . . . . . . 140 Professional role of the evaluator . . . . . . 243 Professionalization . . . . . . 208 Professionalization of evaluation . . . . . . 218 Profound Learning Disability . . . . . . 39 Program . . . . . . 209 Program evaluation . . . . . . 83 Program logic . . . . . . 132 Program theory evaluation . . . . . . 158 Programme evaluation . . . . . . 119, 130, 225 Programme logic . . . . . . 143 Programme management . . . . . . 178 Programmes . . . . . . 170 PROGRESA/Oportunidades . . . . . . 172 Progress assessment . . . . . . 129 Project Appraisal . . . . . . 51 Protected areas management . . . . . . 211 Protection of data . . . . . . 140 Public administration . . . . . . 118 Public interest . . . . . . 33 Public management . . . . . . 6, 237 Public opinion . . . . . . 159 Public Policies . . . . . . 149 Public policy . . . . . . 89, 144, 187, 195 Public Reporting of Performance Measures . . . . . . 29 Public-Private Partnerships . . . . . . 51, 166 Pupil survey . . . . . . 223

Q
Qualitative methods . . . . . . 120 Qualitative research . . . . . . 243 Qualitative study . . . . . . 39 Quality . . . . . . 15, 139 Quality assurance . . . . . . 186 Quality control . . . . . . 85, 170 Quality of Evaluations . . . . . . 60 Quality of Life . . . . . . 39 Quality of teaching . . . . . . 114 Quasi-experimental . . . . . . 242 Quasi-experimental design . . . . . . 9

R
R&D evaluation . . . . . . 123 Radio spots . . . . . . 239 Radioactive waste disposal . . . . . . 181 Randomized controlled trial . . . . . . 16 Randomized Controlled Trials . . . . . . 172 Rapid Response Evaluation . . . . . . 43 Rational policy-making . . . . . . 169 Rationalisation . . . . . . 216 Readiness Assessment Evidence . . . . . . 150 Real time . . . . . . 22 Real time evaluation . . . . . . 22, 43, 135, 236 Realist evaluation . . . . . . 34, 89, 198 Realistic evaluation . . . . . . 217 Real-Time Evaluation . . . . . . 22, 65, 172 Reconstruction and rehabilitation . . . . . . 24 Recursive logic model . . . . . . 20 Redundancy . . . . . . 70 Reference laboratories . . . . . . 237 Reflexive . . . . . . 35 Regional development . . . . . . 108 Regional development organizations . . . . . . 236 Regional Economic Communities (Recs) . . . . . . 187 Regional governance . . . . . . 37 Regional policy . . . . . . 45, 230 Regions . . . . . . 168
Regions with specific geographical features . . . . . . 108 Regulatory impact assessment . . . . . . 72 Relevance . . . . . . 109 Replication . . . . . . 182 Reporting technology . . . . . . 96 Research . . . . . . 48, 138 Research and development . . . . . . 178 Research and Development activities . . . . . . 145 Research evaluation . . . . . . 88 Research for development . . . . . . 77 Research infrastructure . . . . . . 48 Research management . . . . . . 30 Research method . . . . . . 117 Research policy . . . . . . 48 Research Quality . . . . . . 126 Research tools . . . . . . 45 Research training . . . . . . 234 Research with surveys . . . . . . 223 Research, development and innovation technology . . . . . . 45 Researcher-public servant . . . . . . 118 Resilience . . . . . . 155 Resisting indicators . . . . . . 121 Resources . . . . . . 219 Responsiveness principle . . . . . . 186 Result . . . . . . 131 Results . . . . . . 19, 106 Results-based aid . . . . . . 58 Results-based management . . . . . . 95, 50 Results-oriented approaches . . . . . . 58 Retrospective analysis . . . . . . 224 Rights . . . . . . 188 Rigor . . . . . . 45 RIS3 . . . . . . 146 Risk Analysis . . . . . . 51, 222 Risk Assessment . . . . . . 116 Risk management . . . . . . 31 Role of the evaluator . . . . . . 29 RTD Programme Evaluation . . . . . . 145 Rural development . . . . . . 124, 183 Rural China . . . . . . 184 Rural network programmes . . . . . . 73 Rural Women . . . . . . 104 Rwanda . . . . . . 195

S
Sanitation . . . . . . 13 Scale . . . . . . 224 School development . . . . . . 223 School inspection . . . . . . 139 Science . . . . . . 48 Sector evaluation . . . . . . 185 Self-assessment . . . . . . 183 Self-evaluation . . . . . . 11 Senegal . . . . . . 102 Service voucher . . . . . . 78 Shared learning . . . . . . 131 Simulation . . . . . . 229 Sistematización . . . . . . 227 Small and medium-sized Enterprises (SMEs) . . . . . . 78 Smart learning labs . . . . . . 64 Smart Specialisation . . . . . . 146 Social . . . . . . 225 Social Accountability . . . . . . 104, 147 Social construction . . . . . . 117 Social entrepreneurship . . . . . . 175 Social Inclusiveness . . . . . . 135 Social media . . . . . . 11, 106, 175, 176 Social Network . . . . . . 232, 235 Social Network Analysis . . . . . . 36, 145, 145
Social networking . . . . . . 175 Social policies . . . . . . 81 Social policy . . . . . . 228 Social Programs . . . . . . 50 Social Sector . . . . . . 196 Social services . . . . . . 24 Social work . . . . . . 79, 117 Societal embedding . . . . . . 123 Socio-economic evaluation . . . . . . 181 Sophistication . . . . . . 45 South Africa . . . . . . 166 South Central Somalia . . . . . . 23 Speaking Truth to Power . . . . . . 68 Speed . . . . . . 135 Stakeholder involvement . . . . . . 17, 177 Stakeholders . . . . . . 17, 236 Standard . . . . . . 96 Standards . . . . . . 15 State accountability . . . . . . 44 Statistical information system . . . . . . 42 Statutory reporting . . . . . . 96 Strategic Planning . . . . . . 174 Strategy development . . . . . . 111, 159 Strategy evaluation . . . . . . 153 Strategy guide . . . . . . 92 Structural funds . . . . . . 48, 130, 131, 168, 191, 225 Structural impediments . . . . . . 191 Summative . . . . . . 158 Surveys . . . . . . 177 Sustainability . . . . . . 62, 152, 182 Sustainable Development . . . . . . 145 Sustainable transport . . . . . . 111 Sweden . . . . . . 226 Switzerland . . . . . . 185, 203 Symbolic . . . . . . 30 System assessment . . . . . . 123 System wide evaluations . . . . . . 65 Systematization approach . . . . . . 131 Systemic Approaches . . . . . . 227 Systemic thinking in evaluation . . . . . . 131 Systems . . . . . . 132, 209 Systems methods . . . . . . 205 Systems thinking . . . . . . 41, 91, 172

Transdiscipline . . . . . . 109 Translation . . . . . . 226 Transparency . . . . . . 85, 85, 140 Trident method . . . . . . 40 Turbulence . . . . . . 215 Typology . . . . . . 29

U
Uganda . . . . . . 195 Ukraine . . . . . . 239 UN Coordination . . . . . . 205 Undeclared work . . . . . . 78 United Nations . . . . . . 182 University evaluation . . . . . . 186 Urban planning . . . . . . 219 USA . . . . . . 53 Use . . . . . . 30 Use of evaluations . . . . . . 5 Use of research . . . . . . 59 User led evaluation . . . . . . 22 User needs . . . . . . 234 User oriented assessment approach . . . . . . 49 Uses of evaluation . . . . . . 114 Utilization . . . . . . 19, 43, 50 Utilization focused evaluation . . . . . . 197 Utilization-focused evaluation . . . . . . 202

V
Validity . . . . . . 109 Valuation . . . . . . 33 Value for money . . . . . . 62, 152 Value of Evaluation . . . . . . 174 Valuing . . . . . . 33 Valuing outcomes . . . . . . 237 Video . . . . . . 46, 135 Vignette analysis . . . . . . 160 Village mapping . . . . . . 129 Virtual networks . . . . . . 81 Vocational Education and Training . . . . . . 179 Vocational training . . . . . . 185 Volunteerism . . . . . . 90 Voucher and Cash transfer programming . . . . . . 231

W
Ward Health Development Committees (WHDCs) . . . . . . 147 Web 2.0 . . . . . . 233 Welfare services . . . . . . 161 Women . . . . . . 25 Women empowerment . . . . . . 104 Women's rights and citizenship . . . . . . 44 Work-life balance . . . . . . 78 World Bank . . . . . . 176 Worst-off groups . . . . . . 141

T
Tacit knowledge . . . . . . 100 Teacher . . . . . . 185 Teacher Evaluation . . . . . . 101 Technical requirements . . . . . . 233 Techniques . . . . . . 100 Technological Change . . . . . . 174 Technological Innovations M-PESA mobile technology . . . . . . 54 Technology . . . . . . 101, 106 Technology For Development . . . . . . 192 Technology transfer . . . . . . 158 Technology-Enhanced Active Learning . . . . . . 64 The European Regional Development Fund . . . . . . 154 The RATE Project . . . . . . 101 Thematic Review . . . . . . 205 Theories . . . . . . 109 Theory based evaluation . . . . . . 10, 34, 79, 130, 130 Theory of change . . . . . . 13, 73, 76, 108, 143, 162, 198 Theory of change testing . . . . . . 143 Theory-based . . . . . . 189 Theory-based evaluation . . . . . . 112, 6, 153, 215, 216 Theory-based impact evaluation . . . . . . 145, 230 Tobacco control . . . . . . 241 Tracking . . . . . . 12, 189 Training program . . . . . . 105

Y
Youth Employment . . . . . . 70

Z
Zuba . . . . . . 185

EES Secretariat & Conference department CZECH-IN s. r. o. Professional Event & Congress Organiser 5. května 65, 140 21 Prague 4, Czech Republic Tel.: +420 261 174 309, Fax: +420 261 174 307 E-mail: secretariat@europeanevaluation.org
