
ILO Evaluation

Manager Handbook
EVALUATION UNIT
This ILO evaluation management handbook is written for ILO staff selected to
manage an evaluation. Through these guidelines, ILO staff will develop key
capabilities for the delivery of organizational objectives.

ILO

International Labour Organization

Geneva, 2013

ILO Evaluation Manager Handbook


Contents

Introduction ............................................................................................................................................ 4
1. Rationale for evaluations in the ILO and UN System .......................................................................... 4
1.1 The UN System: international context for evaluation .................................................................. 4
1.2 The International Labour Organization......................................................................................... 5
1.2.1 General overview of the Agency ............................................................................ 5
2. The ILO Policy for Evaluation: building the ILO Evaluation System .................................................... 7
2.1 Evaluation in the ILO .................................................................................................................... 7
2.2 ILO Evaluation Principles ............................................................................................................... 7
2.3 The ILO Evaluations ....................................................................................................................... 8
2.4 Decentralized evaluations by degree of independence ............................................................. 13
2.5 Required evaluations according to project characteristics......................................................... 15
3. The ILO core mandate ....................................................................................................................... 16
3.1 Results-Based Management (RBM) framework in the ILO ......................................................... 16
3.2 Human Rights and Gender Equality ............................................................................................ 19
3.2.1 Human Rights Based approach in Evaluation ...................................................................... 19
3.2.2 Gender Equality and Evaluation........................................................................................... 20
4. The Evaluation Management: roles and responsibilities .................................................................. 24
4.1 Fostering participatory processes with project staff and stakeholders ..................................... 28
5. Initial management of the evaluation .............................................................................................. 29
5.1 Appointment of the evaluation manager ................................................................................... 29
5.2 Designing a Terms of Reference ................................................................................................. 29
5.3 Preparing and approving a Terms of Reference ......................................................................... 37
5.4 Additional preparations for an evaluation................................................................... 38
5.4.1 The evaluation budget ......................................................................................................... 38
5.4.2 Evaluation Schedule ............................................................................................................. 39
5.4.3 Information .......................................................................................................................... 40
5.4.4 The Evaluation Consultant ................................................................................................... 41
5.4.4.1 The specific process of selecting a consultant .............................................................. 42
5.4.4.2. ILO Contracts ................................................................................................................ 42
6. Initial briefings: management of the evaluation work plan ............................................................. 43
7. Team Management ........................................................................................................................... 44


8. Drawing conclusions: the evaluation report ..................................................................................... 46


8.1 Verbal evaluation report ............................................................................................................. 46
8.2 Written evaluation report ........................................................................................................... 47
8.3 Approving the evaluation report ................................................................................................ 49
9. Dissemination of the Evaluation Report ........................................................................................... 51
10. Enhancing the use of evaluations ................................................................................................... 52
11. Ethic and cultural sensitivity at the evaluation management level ................................................ 52
REFERENCES .......................................................................................................................................... 54
Annex A. Quantitative and Qualitative tools for evaluation................................................................. 57


Acronyms

DSA Daily Subsistence Allowance

DWCP Decent Work Country Programme

EVAL Evaluation Unit

GB Governing Body

GE Gender Equality

HQ Headquarters

HR Human Rights

ILO International Labour Organization

M&E Monitoring and Evaluation

OECD/DAC Organization for Economic Cooperation and Development/Development Assistance Committee

PARDEV ILO Partnerships and Development Cooperation Department

P&B Programme and Budget

RBM Results-Based Management

REO Regional Evaluation Officer

SEFP Sectoral Evaluation Focal Point

SPF Strategic Policy Framework

TOR Terms of Reference

UNDAF United Nations Development Assistance Framework

UNEG United Nations Evaluation Group


Introduction
This ILO evaluation management handbook is written for ILO staff selected to manage an
evaluation. Through these guidelines, ILO staff will develop key capabilities for the delivery
of organizational objectives. The content of this handbook is based on both the ILO
Evaluation Policy (ILO, 2005) and the ILO Results-Based Evaluation Strategy for 2011-15
(ILO, 2011), which together define the ILO's organizational approach and results-based
framework for evaluation. Additionally, the evaluation management strategy is framed within
the context of the Strategic Policy Framework 2010-15 (ILO, 2009) and the biennial
Programme and Budgets that operationalize the evaluation process inside the ILO.

The first three sections of the handbook provide an overview of the principles and rationale
guiding evaluations in the ILO and the UN System, and clarify the key concepts needed to
understand the ILO's core mandate. They are included as an introduction to explain the added
value of evaluation management to the organization within the framework of results-based
management (RBM). Further, they are presented in coherence with the ILO's adherence to
good practices in evaluation within the International Principles of Evaluation Practice
(UNEG, 2005). The next seven sections focus on the ILO's operational approach to
evaluation management. Duties and responsibilities are outlined for planning, managing and
conducting different types of evaluations in order to help evaluation managers develop
knowledge and skills for each stage. The last section provides information on the
interpersonal soft skills required for ILO staff to work in accordance with the highest
standards of the Ethical Code of Conduct for evaluation in the UN System (UNEG, 2008).

1. Rationale for evaluations in the ILO and UN System


1.1 The UN System: international context for evaluation
Evaluation in the UN System is integrated into the general framework established by the
purposes of the United Nations. As stated in Article 1 of the United Nations Charter (UN,
1985), these main purposes are to:

1. Maintain international peace and security;
2. Develop friendly relations among nations based on respect for the principle of
equal rights and self-determination of peoples, and to take other appropriate measures
to strengthen universal peace;
3. Achieve international co-operation in solving international problems, and in
promoting and encouraging respect for human rights and for fundamental freedoms
for all without distinction as to race, sex, language, or religion; and
4. Be a centre for harmonizing the actions of nations in the attainment of these
common ends.

These general purposes are carried out by different institutions that operate within the UN
System. Six principal organs, fifteen agencies, and several programmes and bodies contribute
to ensuring that the benefits resulting from policy implementation fulfil the obligations
assumed by all the members of the UN System in accordance with the United Nations
Charter. Additionally, specialized Agencies, related Organizations, Funds and other UN
entities work closely within the frame of the main Committees directly connected with the six
principal organs of the UN.

Box 1. UN Structure and Organization

Six principal organs form the main core of the International System:

- General Assembly may discuss questions relating to the maintenance of international peace
and security, and makes recommendations to the Members or to the Security Council, or to both.
- Security Council is formed by fifteen members of the UN, and it holds the primary
responsibility for the maintenance of international peace and security.
- Economic and Social Council may undertake studies and make recommendations with respect
to international economic, social, cultural, educational, health and related matters to the
Members of the United Nations, and to the specialized agencies concerned.
- Trusteeship Council provides information to the General Assembly on the political, social,
economic and educational advancement of the inhabitants of trust territories.
- International Court of Justice works as the main judicial organ of the UN.
- Secretariat services all meetings of the General Assembly, Security Council, Economic and
Social Council, and the Trusteeship Council.

In order to pursue each of the stated purposes of the Charter, several agencies and organs
work to support the main programmes related to the areas of Peace and Security,
Development, Human Rights, Humanitarian Affairs, and International Law. Hence,
evaluations in the UN System are grounded in ensuring that each policy, programme
and/or project is working towards the expected goals and purposes of the UN.

1.2 The International Labour Organization

1.2.1 General overview of the Agency


The International Labour Organization is responsible for drawing up and overseeing
international labour standards, for contributing to the advancement of Decent Work by
promoting social justice and internationally recognized human and labour rights, and for
pursuing its founding mission that labour peace is essential to prosperity. To achieve
its goals, the ILO is built on a tripartite structure in which the work is accomplished through
three main bodies comprising governments', employers' and workers' representatives.
This unique structure gives workers and employers an equal voice with governments in its
deliberations, showing social dialogue in action. It fosters free and open debate between
governments and social partners, and ensures that the views of the social partners are closely
reflected in ILO standards, policies and programmes.


As stated in the ILO Declaration on Social Justice for a Fair Globalization (ILO, 2008), the
Organization is required to promote its policy by enhancing its relevance to the world of
work, and to ensure the role of standards in achieving the constitutional objectives of the
Organization. Further, this Declaration emphasizes the inseparable, interrelated and mutually
supportive nature of the strategic objectives that guide the ILO's goals, which are fostered by
the Strategic Policy Framework (SPF). The SPF is the ILO's medium-term planning
document that reflects the four equally important Strategic Objectives of employment, social
protection, social dialogue and rights at work, through which the ILO's Decent Work Agenda
is expressed. The Strategic Policy Framework therefore sets out the main goals and methods
for achieving the medium-term results of this Agenda, captured in nineteen outcomes,
indicators and targets, through the definition of:

• the expected results for the biennium;
• the strategies the ILO implements to achieve them; and
• the resources available to do so.

Box 2. The ILO Strategic Objectives

- To promote and to realize standards and fundamental principles and rights at work;
- To create greater opportunities for women and men to secure decent employment and income;
- To enhance the coverage and effectiveness of social protection for all;
- To strengthen Tripartism and Social Dialogue.

The financial support for these activities is organized in the Programme and Budget
framework, which sets out the strategic objectives and expected outcomes for the
Organization's work and is approved every two years by the International Labour Conference.
Centred on the essential priorities of the Decent Work Agenda, the ILO Programme and
Budget includes the Organization's strategies to be implemented in order to achieve results
over the biennium, alongside the capacities and the resources required to deliver those results.
It therefore includes specific budget information for achieving results through the following
three main funding sources:

• Regular Budget;
• Regular Budget Supplementary Account; and
• Extra-Budgetary resources for Technical Cooperation.

The goals captured in the Strategic Policy Framework and the ILO's biennial programme of
work are delivered through Decent Work Country Programmes (DWCPs), which operate as
the main instrument for ILO cooperation in a specific country over a period of time. Such
programmes identify a limited number of priorities and set outcomes to be achieved, which
are the basis for programme activities and resources. Furthermore, Decent Work Country
Programmes are operationalized through various projects that serve as the main means to
support DWCP priorities and outcomes. All these interventions are, therefore, evaluated to assess
their relevance, effectiveness, efficiency, impact and sustainability. These evaluations also
contribute to improving organizational learning by feeding lessons learned and good practices
into the decision-making process.

2. The ILO Policy for Evaluation: building the ILO Evaluation System
The Governing Body paper entitled Results-Based Strategies 2011-2015: Evaluation
Strategy - Strengthening the Use of Evaluation, the Director-General's Announcement on
Evaluation in the ILO, and the ILO Evaluation Unit Office Directive conceive of evaluation as
the process required to support improvements in programmes and policies and to promote
accountability and learning. In terms of alignment with the international evaluation
framework, this concept is consistent with the UNEG Norms for the UN System, which
states: “The purposes of evaluation include understanding why, and the extent to which,
intended and unintended [positive and negative] results are achieved, and their impact on
stakeholders. Evaluation is an important source of evidence about the achievement of results
and institutional performance. As an important contributor to building knowledge and to
organizational learning, evaluation is an important agent of change and plays a critical and
credible role in supporting accountability.” (UNEG, 2005).

2.1 Evaluation in the ILO


As an evidence-based assessment of strategy, policy or programme and project outcomes,
evaluation determines their relevance, impact, effectiveness, efficiency and sustainability.
Hence, evaluation in the ILO can be defined as the assessment of an intervention that is
focused on what worked, what did not work, and why this was the case. The evaluation
process also examines whether the best approach was taken, and whether it was optimally executed.
Additionally, an evaluation should provide information that is credible and useful, enabling
the incorporation of lessons learned into the decision-making process of both recipients and
donors (OECD/DAC, 2002).

The aim of the evaluation in the ILO is to support improvements in programmes and policies
and to promote accountability and learning. Hence, the evaluation strategy in the ILO is
designed in the Strategic Policy Framework 2010-15 to strengthen knowledge development
and accountability concerning decent work, international labour rights and standards, and to
enhance the relevance and usefulness of the evaluation to Constituents.

2.2 ILO Evaluation Principles


Evaluation within the ILO is guided by the ILO Evaluation Policy (2005) and the ILO
Results-Based Evaluation Strategy for 2011-15, which work in accordance with the
OECD/DAC criteria and the UNEG Norms and Standards. As stated in the ILO Evaluation
Policy, the core strategies that build the evaluation function relate to the four main
pillars of (i) knowledge sharing, (ii) credible independent evaluation, (iii) reinforcing self-
evaluations, and (iv) building evaluation capacities.

Furthermore, the ILO's evaluation function is built on six guiding principles identified in the
ILO Evaluation Unit's Office Directive (IGDS No. 74, 2009). The principles described
below are designed to ensure the credibility of the function and of evaluation results through:


• Usefulness: the selection, design and follow-up of evaluations aim for usefulness,
particularly to support decision-making.
• Impartiality: evaluation processes that minimize bias and protect impartiality at all
stages of the evaluation. The evaluation process should include reports that present a
complete and balanced account of the evidence, findings, conclusions and recommendations.
• Independence: evaluators should be selected with due regard to avoiding potential
conflicts of interest.
• Quality: design, planning and implementation of evaluation processes that are
inherently quality oriented, covering appropriate methodologies for data collection,
analysis and interpretation.
• Competence: those engaged in designing, conducting and managing evaluation
activities shall have the necessary skills to conduct high quality and ethical work.
• Transparency and consultation: transparency and consultation with tripartite
constituents, partners, and stakeholders are presented at all stages of the evaluation
process.

The ILO specific principles for evaluation set up to achieve the above-mentioned guiding
principles while managing the evaluation process are:

• Limited management influence over the Terms of Reference, scope of the evaluation
and selection of evaluators;
• Involvement of constituents and others as appropriate, in the planning,
implementation and reporting process;
• Upholding the ILO mandate and mission by selecting an evaluation approach and
methods that reflect the tripartite organization and its focus on social justice and its
normative and technical mandate; and
• Adequacy of treatment of core ILO cross-cutting priorities, such as gender equality
and non-discrimination, promotion of standards, tripartite processes and constituent
capacity development.

2.3 ILO Evaluations


The ILO Evaluation Policy establishes different types of evaluation that serve the varying
needs of the organization. Evaluation managers are responsible for managing evaluations
and for implementing those that lie within the line management domain.

The ILO evaluation function has incorporated a combination of governance-level and
decentralized evaluation responsibilities since 2005. Independent strategy and Decent Work
Country Programme (DWCP) evaluations are governance-level evaluations, managed or
coordinated directly by the Evaluation Unit at the Headquarters (EVAL) and are considered
centralized. All other types of evaluations are decentralized and their direct management is
primarily the responsibility of sectors and regions. Decentralized evaluations are classified as
thematic evaluations, project evaluations, impact and joint evaluations, as well as all forms of
internal review, including self-evaluations.


Governance-level evaluations are conducted to generate insights into organizational-level
performance within the context of Results-Based Management. Strategy and DWCP
evaluations are two types of high-level evaluations managed and commissioned by EVAL.
Hence, both the ILO senior management and the Governing Body (GB) participate in the
process of identifying evaluation priorities and determining the timing and intended uses of
each high-level evaluation. To achieve its main goals, informal consultations with
governments, through regional coordinators, and with the secretariats of the Employers' and
Workers' groups are organized annually to discuss the topics for high-level strategic
evaluations and their Terms of Reference (ToRs).

Decentralized evaluations assess programmatic areas that are more directly under the control
of managers, such as technical cooperation and implementation of Country Programmes.

For quality control purposes, the independent evaluation ToRs, budgets, selection of
consultants and identification of methodologies are overseen by sector focal points or
regional evaluation officers. In this context, the role of EVAL is to focus on quality
control and technical support to sectors and regions as requested, to profile evaluation results
and to share experiences to promote organizational learning. For decentralized evaluations,
the responsibility for conducting and financing them remains with those managing the
projects or programmes.

The main purpose, designated responsibilities and timing of the different types of evaluations
in the ILO are summarized in the table below (ILO, 2012):

Table 1. Types of Evaluation

Governance level - Strategy, Policy (Independent)
- Main purpose: Review major policies or institutional issues; assess impact, effectiveness and benefits of ILO core strategies; improve strategies and policies.
- Responsibility: EVAL to plan and manage; Governing Body and senior management confirming topics; EAC reviewing follow-up.
- Timing: Two each year; additional as mandated and resourced.

Governance level - Decent Work Country Programme (Independent / external, with internal Country Programme Reviews)
- Main purpose: Assess the extent to which significant impact is being made towards decent work and the related Country Programme Outcomes set in the P&B; feed into country tripartite dialogue on the impact, effectiveness and relevance of ILO action at the country level.
- Responsibility: EVAL to plan and manage; Regional Offices responsible for financing and internal Country Programme Reviews.
- Timing: EVAL will conduct at least one each year and support regions to internally evaluate a number of DWCPs and Country Programme Reviews.

Decentralized - Thematic Evaluation (Independent / internal)
- Main purpose: Develop cross-cutting lessons, including success stories, to innovate and feed into sectoral/regional learning on specific technical interventions and strategies.
- Responsibility: Technical sectors, other technical groups and regions to plan and manage; EVAL to oversee and support as required; technical programmes and regions to resource.
- Timing: Based on work plans of thematic evaluations.

Decentralized - Impact evaluation
- Main purpose: Assess effects and impact of specific policy and programme interventions on beneficiaries.
- Responsibility: Technical sectors, other technical groups and regions to plan and manage; EVAL to oversee and support as required; technical programmes and regions to resource.
- Timing: Based on work plans of impact evaluations.

Decentralized - Joint Evaluation (External / joint)
- Main purpose: Assess, jointly with partner organizations, programmes where the ILO is one of several agencies managing and implementing joint programmes.
- Responsibility: Management of the ILO's input to the evaluation supervised by regional or sector-level evaluation officers; EVAL provides oversight on quality and compliance; cost to be covered by the joint programme.
- Timing: Subject to the planning and reporting schedule according to the project document of agreement.

Decentralized - Project Evaluation (Independent / internal or self)
- Main purpose: Assess projects for relevance, efficiency, effectiveness, sustainability and contribution to broader impact, as well as the appropriateness of design to the ILO's strategic and national decent work programme frameworks.
- Responsibility: EDs and RDs responsible for ensuring application of the ILO evaluation policy; management of the evaluation supervised by regional or sector-level evaluation officers; EVAL provides oversight; cost of evaluation to be included in the project budget.
- Timing: Mid-term or final, or as stipulated in the project evaluation plan.

ILO, 2012, ILO Policy Guidelines for results-based evaluation: principles, rationale, planning and managing for evaluations.
Evaluation Unit, Geneva.

Strategy and Policy Evaluations


Strategy and policy evaluations provide an account to the Governing Body regarding the
strategy results. They are focused on specific outcomes within the frameworks of the
Strategic Policy Framework (SPF) and Programme and Budget (P&B). These high-level
evaluations aim to assess relevance, efficiency and effectiveness (OECD/DAC criteria) and
identify potential for impact and sustainability of the SPF strategies.

The evaluation team is to be composed of one or more external consultant(s) and an ILO
independent evaluator with no prior links to the strategy or policy. These evaluations are
financed by EVAL through its regular budget and may benefit from cost-sharing with the
regions or sectors.

Strategy and policy evaluations are generally conducted over a six to nine month period,
usually following the November Governing Body approval of the selection. Furthermore,
they are finalized prior to the next November Governing Body meeting and the evaluation
summary report and subsequent status reports on implementation of the recommendations are
presented to the ILO’s Governing Body. High-level evaluations are reviewed by the
Evaluation Advisory Committee, which in turn reports to the Director-General on the
adequacy of follow-up.

Decent Work Country Programme (DWCP) evaluations


Decent Work Country Programmes are the ILO's main support to countries, contributing to
the UN Country Programmes. The ILO's independent evaluations of DWCPs seek to provide
national and international partners with an impartial and transparent assessment of the
work done in these countries. These evaluations help validate the achievement of results and
the ILO's contribution towards national development objectives, decent work and related
Country Programme Outcomes included in the Programme and Budget, with a summary of
the DWCP evaluations presented to the Governing Body. DWCP evaluations also generate
information that can feed into country tripartite dialogue on the effectiveness, relevance, and
impact of ILO interventions at the country level.

EVAL manages at least one DWCP evaluation each year for reporting at the governance
level. All evaluations with regard to DWCP are posted on EVAL’s website. In addition,
internal reviews of DWCPs, also known as Country Programme Reviews, are conducted by
the regional offices and mainly serve organizational learning needs.

Thematic evaluations
Thematic evaluations are focused on particular sectors, issues, or approaches. They are
conducted to assess specific aspects, themes and processes of ILO’s technical work, and
provide the means for ILO technical programmes to explore the effectiveness and impact of
particular approaches in depth, which implies that this type of evaluation is applied towards
the end of interventions. These evaluations draw from lessons learned at project level, both
inside and outside the ILO, and focus on themes that have significance beyond a particular
project.

ILO technical programmes conduct thematic evaluations, with support from EVAL, and are
fully responsible for resourcing. All share responsibility for dissemination and follow-up.

Impact evaluations
Impact evaluations assess the “positive and negative, primary and secondary long term
effects produced by a development intervention, directly or indirectly, intended or
unintended” (OECD/DAC 2002). This type of evaluation is conducted to respond to the
growing demand for a more credible measurement of impact. Impact evaluations are
distinctive in their focus (conceptually and methodologically) on determining the form and
level of attribution that can be given to specific factors, including policies, programmes or
interventions. To address the fundamental question of whether an effect can be inferred as a direct result of a particular
factor and what would have happened if this factor had not existed, impact evaluations
typically attempt to establish a means by which to compare these two situations either
through a counterfactual or comparison group.

For quality control purposes, the independent impact evaluation ToRs, the budgets, the
selection of consultants, determination of methodologies and finalization of the report should
be done in coordination with EVAL. Furthermore, the Sector Executive Director approves the
topics and takes responsibility for completing the evaluation according to the ILO Evaluation
Standards. The ILO technical programmes are mainly responsible for conducting and
financing these evaluations.

Joint evaluations
Joint evaluations are development evaluations conducted collaboratively by more than one
agency. The focus is not on participatory evaluation with its techniques for bringing
stakeholder communities into the process, but on evaluations undertaken jointly by more than
one development cooperation agency. Joint evaluations are conducted as an expanding
portfolio of the evaluation work being planned, managed and financed jointly by the ILO and
national and international partners, the most prevalent of which have been linked to UNDAF
and Joint Programmes of the UN at country level. According to the OECD/DAC, joint
evaluations can help overcome attribution problems in assessing the effectiveness of
programs and strategies and the complementarities of efforts supported by different partners,
and the quality of aid coordination.

There may be varying degrees of collaboration among partners, depending on the extent to
which they cooperate in the evaluation process, merge their evaluation resources, and
combine their evaluation reporting. Additionally, any evaluation can be conducted as a joint
evaluation.

Technical aspects of the management of joint evaluations include the responsibility of
regional or sector-level evaluation officers for the ILO input to joint evaluations, and the
responsibility of EVAL in providing oversight on quality and compliance. Joint evaluations
should be financed from the joint programme resources.

Project evaluations
ILO project evaluations assess the relevance of project design with regard to the ILO’s
strategic and national policy frameworks. Additionally, they also consider the efficiency,
effectiveness and sustainability of the outcomes, and test the underlying assumptions about
contributions to broader development impacts. Hence, ILO project evaluations are used to
improve project performance and contribute towards organizational learning by helping those
managing a project's resources and activities to enhance development results, from short-term
results to sustainable long-term results, along a plausibly linked chain of results. In addition, project
evaluations assess the effectiveness of planning and managing for future impacts during the
project cycle. A final function of project evaluations is to serve accountability purposes by
feeding lessons learned into the decision-making process of project stakeholders, including
donors and national partners.

Project evaluations are mainly conducted to move the decision-making processes closer to the
national partners, hence empowering local actors. In the context of project implementation,
this evaluation process provides space for reflection about how the ILO and its national
partners can better support each other to achieve the desired development results.

2.4 Decentralized evaluations by degree of independence


The ILO’s Evaluation Policy (ILO, 2005) identifies four types of evaluation that can be used
to evaluate projects. An overview of these four types and broader information about the
evaluation manager’s responsibilities are contained in the next table:


Table 2. Overview of the four types of evaluation


- Self-evaluation: managed by the ILO (including project management); evaluators from the ILO (including project management); degree of impartiality: low; cost to the ILO: low.
- Internal evaluation: managed by the ILO (including project management); evaluators from the ILO (excluding project management); degree of impartiality: medium; cost to the ILO: medium.
- Independent evaluation: managed by the ILO (excluding project management); evaluators external (leadership), possibly plus ILO staff (excluding project management); degree of impartiality: medium to high; cost to the ILO: high.
- External evaluation: managed externally; external evaluators; degree of impartiality: medium to high; cost to the ILO: low.

ILO, 2012, ILO Policy Guidelines for results-based evaluation: principles, rationale, planning and managing for evaluations.
Evaluation Unit, Geneva.

Self-evaluation

The ILO project management and technical specialists are responsible for the management
and implementation of this type of evaluation, which includes writing the Terms of Reference
(ToR), collecting and analysing data, and writing the evaluation report.

The methodology applied to answer the evaluation questions can be summarized in the
following three-step process explained below:

A. Develop the ToR.
B. Collect all the information available on the project from the project document, the
progress reports, regular project monitoring, or correspondence with external partners.
When additional information is needed, conducting focus groups or surveys can also be
considered. This information should be circulated in advance to the tripartite constituents,
partners and stakeholders who will be invited to attend the formal workshop.
C. Convene a Self-evaluation workshop inviting the tripartite constituents, partners and
stakeholders. The workshop will have three purposes:
(i) To gather additional information from the participants;
(ii) To answer the evaluation questions by reviewing and discussing the
information on the project; and
(iii) To make evidence-based recommendations on how the project should go
forward.

Internal evaluation

Internal evaluations are managed by ILO staff members, which may include project
management, technical specialists and backstoppers. They are conducted either by
independent consultants or by independent ILO officials who have not been involved in
the design, management or backstopping of the project. This category also covers
self-evaluations, which are managed and conducted solely by the ILO staff members
entrusted with the design and delivery of an intervention (including project management,
technical specialists and backstoppers).

Independent evaluation

Like internal evaluations, independent evaluations are managed by ILO staff members who
have not been involved in the design, management or backstopping of the project to be
evaluated. The differentiating feature is that independent evaluations are carried out by
external evaluators who have no previous links to the project. Other independent ILO
officials may also participate as evaluation team members.

External evaluation

This type of evaluation is managed from outside the ILO and implemented by external
evaluators who have no previous links to the project being evaluated. External evaluations are
usually initiated, led and financed by a donor agency. As with any evaluation, the ILO project
management is accountable for the follow-up.

2.5 Required evaluations according to project characteristics


The type and timing of an evaluation depend on the amount of funding and the duration of
the project under consideration. The following graphic summarizes the ILO's policy on
required project evaluations.

Graphic 1: ILO Policy for project evaluation requirements


Required evaluations by budget (projects and multi-phase projects with combined budgets):
- Under US$500,000: self-evaluation.
- Over US$500,000: internal evaluation.
- Over US$1 million: independent evaluation.
- Over US$5 million: independent mid-term and final evaluations, an M&E appraisal, and an
evaluability review within one year of start-up.

Required evaluations by duration:
- Under 18 months: final evaluation (internal or independent, depending on budget size).
- Under 30 months: annual review and independent final evaluation.
- Over 30 months: annual reviews, mid-term evaluation and final evaluation (at least one of
which should be independent).

While managing an evaluation, the responsible ILO staff must be aware that all reports
mentioned above must be sent to EVAL for storage, including internal evaluations of projects
with budgets above US$500,000.
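For evaluation managers who want to sanity-check these requirements, the rules in Graphic 1
can be read as a simple decision procedure. The sketch below is illustrative only: the function
name and structure are hypothetical, the thresholds are those shown in Graphic 1, and it does
not replace consultation with the regional evaluation officer or EVAL.

```python
def required_evaluations(budget_usd: float, duration_months: int) -> list[str]:
    """Illustrative reading of Graphic 1: evaluations required for a project.

    A sketch only, assuming the budget and duration thresholds shown above;
    it is not an official ILO tool.
    """
    required = []

    # Requirements driven by the (combined) project budget.
    if budget_usd > 5_000_000:
        required += [
            "independent mid-term and final evaluations",
            "M&E appraisal",
            "evaluability review within 1 year of start-up",
        ]
    elif budget_usd > 1_000_000:
        required.append("independent evaluation")
    elif budget_usd > 500_000:
        required.append("internal evaluation")
    else:
        required.append("self-evaluation")

    # Requirements driven by the project duration.
    if duration_months < 18:
        required.append("final evaluation (internal or independent, by budget size)")
    elif duration_months < 30:
        required += ["annual review", "independent final evaluation"]
    else:
        required += [
            "annual reviews",
            "mid-term and final evaluations (at least one independent)",
        ]

    return required


# Example: a 36-month project with a US$1.2 million budget.
print(required_evaluations(1_200_000, 36))
```

Whatever the outcome of such a check, the resulting reports must still be sent to EVAL for
storage, as noted above.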

3. The ILO core mandate


3.1 The Results-Based Management (RBM) framework in the ILO
Results-Based Management (RBM) applies to all stages of the ILO’s programming cycle,
including programme planning, implementation, reporting and evaluation. The ILO defines
the Results-Based Management Approach as the management perspective that drives
organizational processes, resources, products and services towards the achievement of
measurable outcomes. Additionally, the RBM perspective can also be defined as a coherent
framework for strategic planning and management based on learning and accountability as it
emphasizes the intended outcomes and the strategies needed to achieve them within the
programming cycle and helps to redefine strategies based on the new information.

RBM steers all stages of Decent Work Country Programmes (DWCPs), from formulation to
evaluation. It is used by ILO Country Office Directors to determine the optimal contribution
of the Office to Country Programmes strategies concerning Decent Work and to identify
resource gaps that merit attention from ILO funds or through donors. It drives the
identification of country priorities and country outcomes that have the widest possible
support among constituents and partners, as well as the strategies that all parties agree to help
implement. The key stages of this framework imply three main phases in terms of
programmes’ management: strategic planning, performance measurement and performance
management, connected with the monitoring and evaluation principles. As part of the
programming cycle, all ILO strategies, policies and programmes, including Country
Programmes, and technical cooperation programmes and projects are subject to evaluation in
line with the Result-Based Management principles and the ILO evaluation policy.

Within the logical framework of the RBM, the aim of evaluation is to support improvements
in programmes and to promote accountability and learning. Promoting the UNEG Norms for
the UN System (UNEG, 2005), the purposes of evaluation address the understanding of why,
and the extent to which the intended and unintended results are achieved, and their impact on
the stakeholders. Hence, the evaluation process provides “a distinct, essential and
complementary function to performance measurement and RBM.” The evaluation function
aims to complement the regular monitoring of milestones with in-depth consideration of
attribution, relevance, effectiveness and sustainability criteria. Monitoring systems help
ILO managers and constituents to check progress towards the planned outcomes.
The evaluation also brings elements of independent judgment to the performance system and
provides recommendations for appropriate management action. For these reasons, the
evaluation process can be considered as an essential component of the Results-Based
Management Framework.


Graphic 2: Measuring a project intervention

ILO, 2011, Applying Results-Based Management in the International Labour Organization: A Guidebook, Geneva.

The Results-Based Management approach is based on five main pillars:

1. Definition of strategic goals included in the National Development Framework, the
International Development Framework and the ILO Programme and Budget;

2. Specification of the expected results of the strategic goals and of the aligned interventions,
processes and resources behind them. For each priority in the DWCP a maximum of
three outcomes is normally specified. All ILO projects should support one or more of the
outcomes.

In order for the strategies to support and achieve the outcomes, a logical results chain is
designed. It provides a specific statement of how the strategy will be achieved, and of how
the activities and outputs will contribute to the achievement of the outcome. The inputs
needed to carry out the activities and outputs defined by the strategy should be clearly
identified upfront. The results chain, a logical linear framework that helps ILO staff
understand accountabilities for causalities and contributions, is presented in the graphic below:


3. Monitoring and assessment of performance through the establishment of outcome
indicators able to state the criteria and data that will be used to verify or measure the
achievement of the outcomes.

As an on-going process to track the progress of a project, monitoring enables management
to assess the progress of project implementation, to detect problem areas in achieving the
primary objective, and to reorient the areas of action. A maximum of three indicators is
identified for each outcome, and these indicators are monitored by the Country Office. The
first time that data is collected on the indicators, a baseline is established from which the
monitoring process can start.

Indicators should provide relevant and robust measures of progress towards the targets,
goals and objectives of the programmes evaluated. Additionally, the indicators have to be
clear and straightforward to interpret and provide a basis for comparison. They should be
constructed from well-established data sources, be quantifiable, and be consistent enough
to enable measurement over time.

The monitoring processes contribute towards the evaluation goal of systematically and
objectively assessing the on-going or completed project, determining its relevance and level
of achievement, and feeding lessons learned into the decision-making process. Additional
information about indicators can be found in Box 3.

4. Improved accountability, based on continuous feedback to improve performance. This
feedback can be obtained through baselines and targets identified to logically achieve the
expected goals. Targets are the baseline measurement plus the amount of improvement one
hopes to realize, and are divided into time-bound (e.g., quarterly) increments called
milestones. Feedback on milestones is continuously reported to stakeholders (a worked
sketch of this arithmetic follows the list below).

In terms of the RBM approach, the inclusion of continuous feedback enables managers to
deal with change relative to the expected progress and to consider other potential options
and alternatives for better reaching a specific outcome.

5. Integration of lessons learned into future planning. These are lessons learned during the
life cycle of a project that, when applied, can shape operations and guide practice in some
concrete way.
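As a worked illustration of the target and milestone arithmetic described in point 4 above, the
short sketch below derives a target from a baseline plus a hoped-for improvement and splits it
into quarterly milestones. The figures and function name are invented for illustration; real
targets and milestones are negotiated with constituents and need not be linear.

```python
def quarterly_milestones(baseline: float, improvement: float, quarters: int = 4) -> list[float]:
    """Split a target (baseline + improvement) into equal, time-bound milestones."""
    step = improvement / quarters
    return [round(baseline + step * q, 1) for q in range(1, quarters + 1)]


# Hypothetical indicator: baseline coverage of 40%, hoping for a 20-point improvement.
baseline, improvement = 40.0, 20.0
target = baseline + improvement                     # 60.0
milestones = quarterly_milestones(baseline, improvement)
print(target, milestones)                           # 60.0 [45.0, 50.0, 55.0, 60.0]
```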


Box 3. Keys to designing accurate indicators

In the ILO, indicators should generally be constructed on the basis of the SMART
principles. SMART is the acronym for indicators that are:
- Specific: the indicator is precise enough, and related closely enough to the conditions of the
project, to measure progress towards the results;
- Measurable: the indicator is a reliable and clear measure of results, either numerically or in
terms of ranking preferences;
- Attainable: the results towards which the indicator charts progress are realistic;
- Realistic and relevant: the indicators are objective and concerned with the intended outputs;
- Time-bound: the indicators include timed milestones to show progress during the course of
implementation and should be collected and reported on at the right time to influence
management decisions.

3.2 Human Rights and Gender Equality


The evaluation procedures are developed in accordance with the International Principles on
Human Rights, with special attention to gender equality in the monitoring and evaluation of
projects.

As stated in the UNEG Guidance Document on Integrating Human Rights and Gender
Equality in Evaluation (UNEG, 2011), “the UNEG Norms and Standards highlight the need
for people-centred evaluation and for evaluators to consider human rights and gender equality
in their work”. Therefore, those managing evaluations must ensure that evaluations give due
attention to inclusion of the principles of Human Rights and Gender Equality and non-
discrimination in the world of work.

3.2.1 Human Rights Based approach in Evaluation


The promotion and protection of Human Rights (HR), as well as gender equality, are
principles central to the mandate of the UN and all UN agencies, and they are included in the
evaluation process so as to uphold their values at all steps of the Results-Based Management
cycle.

Human rights principles can be classified as equality, non-discrimination, inclusion and
participation, accountability and the rule of law. Setting up an evaluation procedure in due
accordance with HR implies making these principles an integral part of the design,
implementation, monitoring and evaluation of all policies and programmes, and fostering
efforts to mainstream them within the organization, moving from viewing them as
cross-cutting activities of general relevance towards treating them as a common principle of
action.

Including the HR perspective in evaluation means (i) centring the process on people; (ii)
setting up tools and approaches appropriate for collecting data from them; (iii) establishing
processes for broader involvement of stakeholders; and (iv) enhancing the access of all
stakeholders to the evaluation results.


3.2.2 Gender Equality and Evaluation


As stated by the ILO (2012), gender is conceived as a “socio-cultural variable that refers to
the comparative, relational and differential roles, responsibilities and activities of females and
males”. With the goal of guaranteeing an equal balance between women and men, gender
equality is applied to foster the enjoyment of equal rights, treatment and opportunities in all
spheres of women's and men's lives.

ILO has been promoting the equal sharing of power through the ILO Constitution, the ILO
Declaration on Fundamental Principles and Rights at Work, and the ILO Declaration on
Social Justice for a Fair Globalization, which reinforce the obligation to take action towards
mainstreaming gender and eliminating discrimination in employment and occupation.
Gender mainstreaming, conceived as a strategy to achieve the aim of gender equality, should
be used throughout the project's lifecycle, as required by the Governing Body discussion
concerning gender issues and technical cooperation (ILO, 2005).

In terms of evaluation, this implies (i) applying gender analysis by involving both men and
women in the consultations and in the evaluation's analysis; (ii) including data disaggregated
by sex and gender in the analysis and justification of project documents; (iii) formulating
gender-sensitive strategies and objectives and gender-specific indicators; (iv) including
qualitative methods and using a mix of methodologies; (v) forming a gender-balanced team;
and (vi) assessing outcomes in terms of improving the lives of women and men. All this
information should be accurately included in the ToR as well as in the inception report and
the final evaluation report.

In order to fulfil an equity-focused evaluation, M&E supports the ILO's two gender
mainstreaming components by systematically analysing the effects an intervention has on
women and men, on their relations, and on the goal of creating greater gender equality, and
by recommending actions to improve the effectiveness of an intervention in addressing the
different needs of women and men so as to contribute to greater gender equality.
Additionally, the ILO has defined a Policy on Gender Equality and Mainstreaming that
places the responsibility for implementation on all ILO staff at all levels, while accountability
rests with senior managers, regional directors and programme managers, as in the case of
evaluation managers.

All specific information with regard to the ILO Policy can be found at:
- Circular No. 564, ILO policy on gender equality and mainstreaming (ILO, 1999);
- ILO, Guidance Note 4, Integrating Gender Equality in Monitoring and Evaluation of
Projects (ILO, 2012);
- ILO Plan for Gender Equality 2010-2015 (ILO, 2012);
- UNEG, 2012, Integrating Human Rights and Gender Equality in Evaluation -
Towards UNEG Guidance (UNEG, 2012).

Table 3. Integrating Human Rights and Gender Equality principles in evaluation based on the UNEG
Each evaluation dimension below is paired with the appropriate evaluation approach to Human Rights (HR) and Gender Equality (GE).

Project Background
- The intervention theory identifies the problems and the needs of the particular groups.
- Records of implementation and activity reports contain information on how HR and GE were addressed.
- Both women and men stakeholders have participated in the activities on the intervention.
- Monitoring systems have captured HR and GE information (e.g. data disaggregated by gender, race, ethnicity, etc.)
reflecting the diversity of the stakeholders.
- The evaluation ToR includes the HR and GE information produced by the intervention.
Criteria and Questions
Relevance: assessing how the intervention's design and implementation are aligned with and contribute to HR and GE.
Some examples are:
• Extent to which the intervention theory is aligned with the international/national/regional standards and
principles on HR and GE and how it contributes to their implementation.
• Extent to which the intervention theory is informed by tailored human rights and gender analyses to identify
underlying causes to HR and GE.
• Extent to which the intervention theory is informed by needs and interests of diverse groups of stakeholders
through consultations.
Effectiveness: the assessment of the objectives’ achievement takes into account the way in which the processes that lead
the implementation are aligned with HR and GE and how the results were defined, monitored and achieved (or not) on
HR and GE. Some aspects to consider are:
• Extent to which the project’s logical framework integrates HR and GE.
• Presence of specific key results on HR and GE.
Efficiency: the evaluation assesses the benefits and costs of integrating HR and GE in the intervention in terms of the
sustainability of the project. Some aspects to consider are:
• Analyses of the adequate resources investments in short-term, medium-term and long-term.
• Cost of not providing resources for integrating HR and GE.
• Extent to which the allocation of resources to targeted groups takes into account the need to prioritize the right-
holders.
Sustainability: the evaluation addresses the extent to which an intervention has advanced the key factors that need to be in
place for the long-term realization of HR and GE. Key aspects to include are:
• The intervention has developed an enabling environment for real change on HR and GE.


• Permanent and real attitudinal and behavioural change conducive to HR and GE.
• Capacity development of targeted right-holders and duty-bearers.
Impact: the evaluation assesses the long-term realization of HR and GE. The evaluation addresses:
• Whether the right-holders have been able to enjoy their rights and duty-bearers have the ability to comply with their
obligations.
• Whether or not there is an empowerment of targeted groups and influence outside of the intervention’s targeted
group.
Indicators
The evaluation should address the evaluation questions by applying indicators built on:
• Identified suitable indicators to specifically provide detailed, accurate and comprehensive picture of the progress
in terms of HR and GE.
• SMART principles that include HR and GE in a very clear manner.
• Indicators that clearly distinguish between the different beneficiaries among various variables (gender,
race/ethnic group, age, area of residence, disabilities, education level, etc.).
• Qualitative and quantitative measurements.
• Consultative process with the stakeholders.
Methodology
The evaluation should apply the appropriate (ad hoc) methodology to address Human Rights and Gender Equality. The
key elements that need to be considered are:
• Stakeholders participation in the evaluation to avoid biases, such as gender biases, distance biases (favoring the
more accessible), power bias or class bias, with inclusion of the most vulnerable.
• Adequate sample (in case of larger groups) addressing the inclusion of women and men of the diverse
stakeholders groups.
• Mixed-methods: the evaluation should apply both quantitative and qualitative methodology to gather and to
analyse data and to offer different perspectives to the evaluation.
• Data disaggregation by the Human Rights applied criteria and GE approach.
• Triangulation: data from different sources are compared to confirm the inputs.
• Validation of the findings by enhancing workshops with different groups to increase the accuracy and reliability
of the findings.
Data Gathering Techniques
The evaluation should apply different techniques to address human rights and gender equality. Some examples are:
• Desk reviews that look for specific information on HR and GE, such as:
i. Evidence of HR and GE analysis at the design stage, including detailed and inclusive stakeholders
analysis.


ii. Analysis of the quality of engagement and participation of the stakeholders in the various steps of the
implementation.
iii. Evidence on how HR and GE were addressed by the intervention.
• Focus groups disaggregated by gender, age, ethnicity, and/or other required variables, built on:
i. The representation of the most vulnerable groups.
ii. Inclusion of questions directly addressing the different views on HR and GE, etc.
• Interviews that reflect the diversity of the stakeholders of the intervention and that guarantee:
i. The representation of the worst-off groups.
ii. The understanding of how each interviewee is affected by HR and GE issues.
iii. Respect of confidentiality.
iv. Adequate understanding of the context.
• Surveys that include a sample that reflects the diversity of the stakeholders in the intervention and ensure their
inclusion in the evaluation through:
i. Paying particular attention to the format and language of the survey.
ii. Creating different questionnaires for different stakeholder groups.
iii. Making sure that the survey includes specific HR and GE questions.
iv. Making sure that the survey contributes towards the understanding of how the respondents are affected by
HR and GE issues.

Reporting and Dissemination
An evaluation report and the dissemination of the evaluation results that include an HR and GE perspective should be
based on the following key elements:
The evaluation report
• Acknowledgment of how inclusive the stakeholders’ participation was during the evaluation process.
• Full coverage of HR and GE information based on the evaluation findings and validated conclusions.
• Recommendations on HR and GE based on the evaluation conclusions.
• Challenges to be addressed in future interventions concerning HR and GE.
• Lessons learned on HR and GE with regard to the intervention itself and the evaluation process.
The dissemination process
• Barrier-free provision of the evaluation products.
• Identification of the direct and indirect users of the evaluation and apply specific techniques to disseminate the
results.


4. The Evaluation Management: roles and responsibilities


Managing an evaluation involves a number of key steps that follow the evaluation process. The general principles to be upheld while managing an evaluation are that the process should be consistent with the participatory requirements analysed in the three previous chapters, with the principles of transparency and independence that safeguard stakeholders' specific interests, and with the other general evaluation norms and standards applied across the UN system (UNEG, 2005).

The key steps in planning and managing evaluations are shown in the graphic below:
KEY STEPS IN PLANNING AND MANAGING EVALUATIONS

Each of these stages is carried out by different ILO staff. A clear division of labour in conducting an evaluation helps to ensure that the ILO's Guiding Principles for evaluation are implemented. The following sections examine the duties and responsibilities of the evaluation manager and of the other staff involved in the evaluation process, and explain the different roles and responsibilities of each in the frame of decentralized evaluations.


The Evaluation Manager

The evaluation manager is the ILO staff member responsible for conducting and developing evaluation processes in accordance with the ILO Policy for Evaluation and the UNEG Norms and Standards mentioned in previous sections of this Handbook.

The evaluation manager in the ILO is a regular or project staff member who volunteers to perform this task in the interest of the organization and who should have no links to the project's decision-making. He or she is identified by a Regional Evaluation Officer (REO) or by a Sectoral Evaluation Focal Point (SEFP), who will act as the focal point for the evaluation manager and provide guidance on policies, ethics and procedures. The sector or region decides on the organization of the evaluation management functions, and there can be more than one evaluation manager per sector or region.

The specific role of the evaluation manager is to ensure that the evaluation process takes
place in a timely manner. In preparing for an independent evaluation, the evaluation manager
is required to:

• Carry out the initial consultation with tripartite constituents, partners and stakeholders
in order to solicit their inputs regarding the:
o purpose and scope of the evaluation;
o criteria (e.g., relevance, efficiency, effectiveness, impact, and sustainability) on
which the evaluation should focus;
o specific questions regarding the criteria that the tripartite constituents, partners
and stakeholders would like the evaluation to answer;
• Prepare the draft Terms of Reference (ToR) and circulate them to tripartite
constituents, partners and stakeholders for feedback.
• Revise the draft ToR, obtain final approval from the Evaluation Focal Point and send
a copy of the approved ToR to EVAL for information.
• Search for prospective Evaluation Consultant(s), obtain approval for their recruitment
from the Evaluation Focal Point, and request a contract based on the ToR.
• Brief the Evaluation Consultant(s) on ILO policy to ensure a smooth evaluation
process.
• Involve tripartite constituents, partners and stakeholders in the entire process as
appropriate and ensure that gender issues are considered.
• Review the inception report.
• Manage the process of preparing the evaluation report (including circulating the draft
report and collecting comments) and review the quality of the draft version of the
evaluation report.
• Submit the final evaluation report to the Regional evaluation officers or sector
Evaluation Focal Point for final review (EVAL provides final approval).
• Send the final reviewed and approved report to PARDEV for submission to the donor
and send copies to all other relevant evaluation stakeholders, including the key
national partners.


• Ensure proper follow-up to the recommendations and dissemination of lessons learned within the ILO.

The Evaluation Focal Point in regional offices and technical sectors

The responsible evaluation focal point approves the final version of the ToR for proposed independent evaluations and the choice of the external evaluation consultant. He or she may also provide methodological input to the evaluation process, support evaluability studies or scoping missions, and assist in planning evaluations for the region or sector.

The evaluation focal point works with the evaluation manager to facilitate access to regional
or sectoral consultant profiles for selection. When appropriate, he or she is responsible for the
approval of the consultant on behalf of EVAL.

At the end of an independent evaluation, the Evaluation Focal Point reviews and forwards the
final evaluation report to EVAL for approval, and uploads the evaluation process documents
into i-Track.

Evaluation oversight of administratively decentralized projects resides with the Evaluation Focal Point in the regional office and, for centralized projects, with the Evaluation Focal Point in the respective technical sector.

The Project Manager

The Project Manager and the project staff facilitate and support the implementation of the
evaluation by:

• Collecting information;
• Providing information and comments;
• Providing input to ToRs;
• Coordinating logistics of the evaluation team with the partners during the evaluation;
• Assisting in data collection;
• Participating in evaluation workshops;
• Providing input to the evaluation manager on the draft report; and
• Supporting the evaluators, administratively and logistically, as they conduct the
evaluation.

After the mid-term evaluation, the project manager is responsible for preparing a plan for
follow-up, taking appropriate action, and disseminating the evaluation outcomes together
with the ILO responsible official.

The ILO responsible official

The ILO responsible official is primarily responsible for ensuring that sufficient funds are secured for evaluations and that an appropriate project monitoring and evaluation system is established. The ILO responsible official and other project backstoppers are responsible for providing appropriate technical and administrative support while the evaluation process is conducted.

Once the evaluation is concluded, the ILO responsible official facilitates the follow-up to the
evaluation and makes sure that lessons learned inform the design of new projects. The ILO
responsible official also shares the responsibility with the technical backstoppers for wider
dissemination and knowledge sharing of the evaluation outcomes.

The ILO Evaluation Unit

The ILO Evaluation Unit (EVAL) is responsible for ensuring the quality and integrity of the evaluation function in the ILO in accordance with international standards for evaluation. EVAL approves the final independent evaluation report after it has been reviewed by the evaluation focal point.

EVAL also collects and stores the ToRs and evaluation reports of all independent project evaluations, and posts abstracts of the reports on its website. Further, EVAL is responsible for initiating the recommendation follow-up procedure for independent evaluations.

Finally, EVAL provides guidance on procedures for good practice in evaluation planning and
conduct. In cases of disagreement, EVAL advises or mediates on issues related to evaluation
in collaboration with the evaluation focal point.

The Evaluation Consultant

In conformance with the ToR, the evaluator is responsible for collecting and analysing the information needed to prepare an evaluation report that includes recommendations based on findings, lessons learned and good practices. The team leader of an independent evaluation is always the external evaluation consultant. The evaluator should:

• adhere to internationally accepted good practices and solid ethical principles;


• be skilled in implementing diverse evaluation methodologies;
• ensure the evaluation is an inclusive and participatory learning exercise; and
• be culturally and gender-sensitive.

As will be further explained in chapter eight, the evaluator reports to the evaluation manager and submits the draft and final evaluation reports to the evaluation staff. In finalizing the report, the evaluator should be receptive to comments from any of the stakeholders concerning factual inaccuracies in the report, while guarding his or her independence. The evaluator has responsibility for the final content of the report and its recommendations.


4.1 Fostering participatory processes with project staff and stakeholders


As participation is one of the guiding principles stated in the UNEG Norms and Standards for Evaluation (2005), key stakeholders should participate as early as possible in the planning stage to create a common understanding of the purpose and use of the evaluation and the approach to be taken (all included in the ToR). Maximizing participation at this stage helps to ensure that the focus and methodology are appropriate, and increases the chances that stakeholders will use the evaluation findings by raising their awareness of and commitment to the evaluation process. Additionally, maximizing participation in the data collection phase should ensure that the evaluation team registers all points of view. To this end, the start of an evaluation should be closely guided by the principle of stakeholder participation, in accordance with UNEG standard 3.11: "Stakeholders should be consulted in the planning, design, conduct and follow-up of evaluations" (UNEG, 2005). Through participation, therefore, ILO constituents gain hands-on experience with evaluation and improve their know-how regarding its use.

According to the OECD (OECD, 2002), stakeholders are the agencies, organisations, groups
or individuals who have a direct or indirect interest in the development intervention or its
evaluation. In the ILO, the most important stakeholders are the constituents. Other ILO key
actors can be divided into two groups that are included in the table below:

Table 4. Key Stakeholders of the evaluation

Primary Stakeholders:
• Representatives of governments (e.g. ministries of labour)
• Representatives of employers' organizations
• Representatives of workers' organizations

Other Stakeholders:
• ILO HQ staff of cooperating departments
• ILO field staff
• UN agencies in country
• NGOs
• Other partners in country (e.g. donor agencies)

The reasons for engaging stakeholders in the evaluation include reducing distrust of and fear about evaluation, as well as increasing stakeholders' awareness of and commitment to the evaluation process. The engagement and participation of stakeholders in the evaluation process also seek to increase their support for evaluation efforts, programme advocacy and adherence to subsequent recommendations, as well as the use and credibility of the evaluation findings, by avoiding the risk that the findings are ignored or resisted.


In accordance with the ILO Policy for Evaluation, stakeholders must be identified and consulted at the early stage of the evaluation (when planning the key issues, method, timing and responsibilities) and should be kept informed throughout the evaluation process. Further, the primary stakeholders should be specified in the ToR, and it is the responsibility of the evaluation manager to ensure that consultations with stakeholders take place. If the key stakeholders are involved in obtaining answers to the questions they are interested in, they are more likely to implement the recommendations. Furthermore, the evaluation approach must provide learning and participation opportunities (e.g. workshops, learning groups, debriefings, participation in field visits) to ensure that key stakeholders are fully integrated into the evaluation learning process.

When feasible, the evaluation can include a core learning group composed of representatives of the different stakeholders. This group has the role of facilitating and reviewing the work of the evaluation and can be tasked with facilitating the dissemination and application of the results and other follow-up actions. Other forms of participation can be used during the evaluation process, such as initial workshops with stakeholders or their core members, individual or group interviews, and questionnaires.

Further information about the process of identifying stakeholders can be found in Guidance Note 7: Stakeholder participation in the ILO (ILO, 2012).

5. Initial management of the evaluation


5.1 Appointment of the evaluation manager
Once the evaluation manager has been selected, the first activity is to review the project's documents in order to become familiar with the scope and activities, the logframe of the project, and the significant cross-cutting issues of the implementation process. This review should help the evaluation manager become aware of the project's background, planned objectives, outcomes, outputs and activities, as well as outcome indicators and assumptions, in order to prepare the evaluation ToR. At this stage, the evaluation manager should also foster the project's evaluability assessment. This evaluability analysis would normally be undertaken in coordination with the Regional Evaluation Officer or an evaluation expert on technical assessment for a comprehensive evaluation.

5.2 Designing a Terms of Reference


The results of the document review and of consultations with staff and stakeholders help the evaluation manager obtain the relevant information to be included in the evaluation Terms of Reference (ToR). Fostering a participatory process, the evaluation manager writes the ToR in consultation with the project manager, the ILO Office Director and the line manager of the technical unit backstopping the project.

The ToR is a critical document, as it forms a substantive part of the contractual basis for undertaking an evaluation. It is the basic document used to define the purpose and scope of the evaluation; the methods to be used; the terms against which the evaluator's performance is assessed; how the analyses will be conducted; the resources and time to be allocated; and the general reporting requirements the ILO evaluation staff might set. This implies specifically including the reasons for conducting the evaluation and a summary of the different stakeholders' expectations of it. Budgetary issues should be identified and must be included at the end of the ToR. The draft ToR should also include a work plan for fulfilling the evaluation objectives. Further, the human rights and gender approach must be appropriately integrated into the evaluation ToR (see section 3.2).

The ToR should be written with enough clarity and detail both for the ILO to design a well-focused evaluation process capable of meeting its organizational learning and accountability goals, and for the consultant to understand what he or she is expected to deliver. For this reason, well-considered and well-written ToRs are the foundation of a good evaluation. The content of the ToR should follow the outline indicated below.

1. Introduction and rationale for evaluation

The introduction and rationale for the evaluation explains why the evaluation is being
conducted and what the expected outcomes should be. It should also indicate the type of
evaluation (i.e., self-evaluation, internal, independent or external). This section should
include reference to the country programme of which the project is a part and the specific
country programme outcomes to which the project contributes.

2. Brief background on project and context

Evaluations assess the relevance of the intervention's objectives and approach, establishing how far the intervention has achieved its planned outcomes and objectives, the extent to which its strategy has proven efficient and effective, and whether the project is likely to have a sustainable impact.

As stated in the previous sections, the evaluation manager is responsible for leading the process of understanding the background of the project through the initial document review planned at the early stage of the evaluation. This review helps build coherent information on the project's background, which must be included in the ToR with regard to:

• Description of the history and current status of the intervention (e.g. duration,
location, budget, partners, donors and implementation phase);
• Summary of the intervention's rationale, internal logic and strategic approach. This implies a theory of change expressed in a logic model of planned objectives, outcomes, outputs and activities, corresponding outcome indicators and assumptions. The document review and the consultations with stakeholders are key sources of information concerning the intervention's rationale;
• Brief description of how the intervention fits into the strategic frameworks and how it
links to the work of other partners at the country/regional level;
• Brief account of the intervention’s management set-up;


• Brief outline of economic, political, social, cultural and/or historical context of the
country/region, and how this might have influenced the intervention;
• Brief overview of the political, economic and social environment within which the
evaluation will be taking place; and
• Reference to any previous evaluations and reviews.

While defining the background of the project, the evaluation manager should foster the analysis of the project's context in order to be aware of elements that might have a (positive or negative) influence on the object being evaluated. Hence, the initial consultation with project staff should be spent reviewing the evaluation context and going over questions or clarifications about the project design and objectives. Some sample questions for reviewing the project context are given below:

Table 5: Guiding questions for reviewing the context of the evaluation


Step: EVALUATION PLANNING

Area: PHENOMENON
- What is the problem the programme is addressing?
- How did it emerge?
- For how long has it existed?
- What groups prompted concern about it?
- What tools exist for measuring it?

Area: INTERVENTION
- What are the different components of the project and how do they fit into the broader environment?
- Who does the project serve?

Area: ENVIRONMENT AROUND THE INTERVENTION
- What are the different elements of the environment that affect and can be affected by the intervention?
- What aspects of these different climates are affecting the design and operation of the programme?
- Are there political or social views that influence the programme?

Area: DECISION-MAKING ARENA
- Who are the main decision-makers/users of the evaluation?
- What are their views and values about the programme and about evaluation?
- What are the expectations of their organization?
- What are the expectations of the evaluation's audience?
- What are the political expectations for the evaluation?

Source: Conner, R.F., Fitzpatrick, J.L. and Rog, D.J. (2012). "A First Step Forward: Context Assessment", New Directions for Evaluation, 135, 89-105.

3. Purpose, scope and clients of evaluation

Along with the definition of the background of the intervention, the evaluation ToR should
include the specific information about the audience, purpose and scope of the evaluation.

The purpose of an evaluation explains what the evaluation findings will be used for, such as accountability, ongoing improvement or organizational learning. Evaluations in the ILO usually have multiple purposes.

The scope sets boundaries around the object of the evaluation. It determines what is included
in the study, and what is excluded. Boundaries can be delimited by time, geography,
structure, or sequence.

When appropriate, consultation with the ILO's primary stakeholders to determine the scope of the evaluation is a good way to identify some of its key parameters and raise interest in its findings. The scope can be defined in terms of time and space (e.g. project start/end and geographic areas of implementation) or by project phase or elements of a project.
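For illustration only, a hypothetical scope statement might read: "The evaluation will cover all project activities implemented between January 2010 and June 2013 in the three pilot provinces; the regional policy component, launched in 2013, is excluded."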

When determining the purpose and scope of the evaluation, the manager should keep checking that the evaluation itself will be effective in delivering its purpose and efficient in its use of time and financial resources.

The audience can be divided into primary and secondary clients. Primary clients include the standard tripartite constituents, partners and stakeholders – namely the project manager, main national project partners, the ILO field office director, technical backstoppers at headquarters, field technical specialists, the responsible evaluation focal points, and the donor. Secondary clients, such as the ILO's Governing Body and other units within the ILO that indirectly benefit from the knowledge generated by evaluations, can typically also be part of the audience of the evaluation.

The general information to be included in the ToR can be framed as follows:
• Why the evaluation is being conducted and justification of its timing;
• Identification of the expected outcomes of the evaluation;
• Identification of the primary and secondary users of the evaluation;
• Brief statement of how the evaluation will be used;
• The scope of the evaluation (timeframe, geographical and thematic coverage, and target groups to be considered);
• Aspects of the intervention that will not be covered by the evaluation;
• Integration of gender equality throughout its methodology and deliverables; and
• Particular issues on which the evaluation should focus.

4. Key evaluation questions

Each evaluation conducted by the ILO is expected to assess the key evaluation criteria
defined by OECD/DAC that are directly in line with the international standards of good
practices. These criteria are: relevance, effectiveness, efficiency, impact and sustainability.

The definition of the key evaluation criteria are listed in the table below:

Table 6: The evaluation criteria


Relevance and strategic fit of the project: The extent to which the objectives of a development intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' and donors' policies. Strategic fit refers to the extent to which the approach is in line with the national development frameworks, UNDAF, DWCP, SPF and P&B.

Validity of intervention design: The extent to which the project design is logical and coherent.

Project progress and effectiveness: The extent to which the project's immediate objectives were achieved, or are expected to be achieved, taking into account their relative importance. This involves measuring change in the observed output or outcome; attributing the observed change to the project when possible; and assessing the value of the change, whether positive or negative.

Efficiency of resource use: A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted to results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted. The ILO uses the efficiency criterion to determine how economically resources or inputs (such as funds, expertise and time) are converted to results.

Effectiveness of management arrangements: The extent to which the management capacities and arrangements put in place support the achievement of results.

Impact orientation: The strategic orientation of the project towards making a significant contribution to broader, long-term, sustainable development changes.

Sustainability of the project: The extent to which the project has produced durable interventions that can be maintained, or even scaled up and replicated, within the local development context or, in the case of a global project, sustained as a global approach or policy.

When developing the analytical framework, the evaluation manager should consider the priorities of the main stakeholders (preferably through a consultative process) and address specific issues that contribute to the utility and feasibility of the intervention. Close attention should be paid to the ILO's mainstreamed principles reviewed in the second section of this Handbook, especially the main contents of the ILO Declaration on Social Justice for a Fair Globalization, poverty and rights, as well as the ILO's core mandates of tripartism, human rights and gender.

In terms of the evaluation questions, the evaluation manager should include two or three specific questions for each criterion to guide the evaluation process on important aspects and issues to be considered. While the evaluation criteria are fairly standard, the evaluation questions should be tailored to the specifics of the project in order to guarantee the validity of the evaluation, with the answers leading to recommendations that guide key decisions on further steps. Formulating the "right" questions is one of the most important parts of the project evaluation process, because the questions asked will determine the answers received.

In terms of designing the evaluation questions, there are various types of questions that can
be considered in order to measure the various aspects of the intervention:

Types of questions:
- Descriptive: What happened? (e.g. what, how, how much)
- Normative: Was what happened good? Was it supposed to happen? Has the project achieved its targets, goals, etc.?
- Causality: What evidence is there that what happened is a result of the project intervention? (why)
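
For illustration only, in a hypothetical skills-training project these types could translate into: descriptive – "How many women and men completed the training in each province?"; normative – "Did completion rates reach the targets set in the project logframe?"; and causality – "What evidence is there that participants' improved employment status results from the training rather than from other factors?"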

The evaluation questions are designed to answer several different information needs. Hence, they can be directed towards (i) the relevance and strategic fit of the programme, (ii) the validity of its design, (iii) the intervention's progress and effectiveness, (iv) the efficiency of resource use, (v) the effectiveness of management arrangements, (vi) the impact orientation, and (vii) its sustainability.

In summary, the basic content of the ToR should cover:


• Reference to the evaluation criteria against which the intervention will be assessed
(e.g. relevance, effectiveness, efficiency, impact and/or sustainability);


• Reference to any additional criteria related to the particular type of evaluation being
undertaken, or specific to ILO’s mandate (e.g. cross-cutting issues of poverty, labour
standards, and social dialogue);
• Specific reference to gender issues;
• Listing of main evaluation questions related to the objectives of the evaluation and the
evaluation criteria; and
• Suggested analytical framework with sub-questions, adding further detail to the
objectives.

5. Methodology to be followed

Methodology refers to the types of activities used to collect information to answer the evaluation questions. The evaluation manager is responsible for contributing to the definition of the most efficient and effective methodology to address the purpose of the evaluation. Common types of methods used in ILO project evaluations are included in the table below. Further information on data collection approaches, tools and evaluation methods can be found in Annex A.

QUANTITATIVE: Surveys, testing, experiments
QUALITATIVE: Document analysis, interviews, direct observation

The quantitative approach is mainly used to respond to questions of quantity or frequency, as quantitative methods tend to simplify reality in the effort to provide objective, numeric data, whereas qualitative methods are used to respond to questions designed for an in-depth understanding of the situation, as they are able to capture differences and provide a holistic view of reality. Quantitative methods are therefore applied when a description is required; qualitative methods are used when an interpretation is sought.

General aspects of both methodologies are summarized in the following table:

Quantitative Approach:
- Better structured
- More precise responses
- Provision of numeric data
- Based on standardised tools
- Based on statistical methods
- The inclusion of a sample guarantees the comparability of results

Qualitative Approach:
- Less structured
- Able to reflect diversity
- Provision of subjective data
- Based on ad hoc tools to analyse complex and complicated situations
- Designed to understand, not to prove cause and effect
- No generalisation can be made
- More expensive


The choice of methods depends on many factors, including the purpose of the evaluation, the information needs, the sources of information, the complexity of the data collection process and/or its frequency, the time available for the evaluation, and the budget. In order to guarantee methodological rigour and the validity and reliability of the evaluation findings, the evaluation manager should promote an evaluation design (ToR) that includes:

I. Multiple and appropriate methods to generate useful findings that answer the evaluation questions. The general methodologies applied in ILO evaluations are:
• Quantitative: applied when evidence of attribution is sought. Experimental and quasi-experimental methods are mostly used in this case.
• Qualitative: applied when contribution findings are expected to answer some of the evaluation questions. Questions about what, why or how are the most common in this situation.

II. Data collected from multiple perspectives (several data sources) and disaggregated by sex. Standard information sources in the ILO are:
• Primary data: information observed and collected directly from stakeholders (e.g. surveys, focus group discussions, etc.)
• Secondary data: data collected for purposes other than those of the evaluation (e.g. reports, previous reviews, other evaluations, etc.)

III. Data triangulation: the OECD/DAC (OECD, 2002) defines triangulation as the use of three or more theories, sources of information, or types of analysis to verify and substantiate an assessment. This process involves comparing, verifying and cross-validating data to guarantee that inferences and conclusions are reasonable and justifiable, through a consistent measurement process that uses different methodologies and/or different sources. The technique of triangulation allows evaluators to overcome the bias that comes from single information sources, single methods or single observations. Triangulation therefore strengthens the accuracy, robustness and reliability of evaluation results.
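As a purely illustrative example, a finding that a training component improved participants' employability would only be confirmed after self-reported improvements from focus groups had been cross-checked against project monitoring records and against interviews with employers' and workers' representatives.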

Broader information about the validity and reliability requirements of a ToR can be found in Guidance Note 8: Ratings in Evaluations (ILO, 2013).

Planning the methodology to be used during the evaluation and including it in the ToR
ensures transparency. The evaluator may adapt the methodology, but any changes to it should
be agreed between the evaluation manager and the evaluator.

The main contents of the ToR with regard to the methodology are related to:

• Identification of information needs and possible sources of information, based on an assessment of evaluability;
• Description of the suggested methodological approach and design for the evaluation;
• Description of multiple methods, disaggregated data and triangulation to address
pertinent questions;
• Evaluation methodology and subsequent analysis explicitly addressed to assess
gender issues;
• Clear statement of the boundaries of the chosen evaluation methods;
• Identification of conditions and capacities needed to support data collection, analysis
and communication;


• Plan for data analysis;


• Description of the involvement of the key stakeholders in the implementation of the
evaluation.

6. Main outputs: inception report, draft and final evaluation report

Main outputs refer to the deliverables that the evaluator is obliged to submit. Outputs can include verbal and written reports, executive summaries, press releases, photographic compilations, and audio or video documentation.

The choice of outputs depends on many factors, including the target audience and the budget. If a certain format or layout is required for the evaluation report or other outputs, this should also be specified here. This section should identify:
• the main outputs of the evaluation;
• how and when they will be delivered.

The evaluation manager should ensure that the ToR includes:

• Specific information about the main outputs of the evaluation;
• A statement that the quality of the report will be assessed against the EVAL checklist; and
• The language, format, length and structure of the deliverables.

7. Management arrangements, work plan, formatting requirements and time frame

In addition to the evaluation manager's contribution towards the achievement of a well-focused ToR, she or he should also determine:

• the support needed from the ILO at Headquarters, regional, sub-regional and country
levels for implementing the evaluation;
• the composition of and reporting lines within the team, where there is more than one
evaluator;
• a detailed work plan, stipulating each partner’s contribution to the evaluation;
• the process of circulation of the draft report;
• the time frame, with deadlines for each major step in the process; and
• the budget for the evaluation.

All the above mentioned information might be included in the ToR as follows:

A. Description of the key stages of the evaluation process and an indicative time frame,
including milestones and deadlines;
B. Specific reporting lines;
C. Specific desired competencies of the evaluator and the scope of work (e.g. international/local, gender balance);
D. Description of the roles and responsibilities for the evaluation team, stakeholders and
partners; and
E. The support needed from the ILO at Headquarters, regional, sub-regional and country
levels for implementing the evaluation.


Further information and more detailed guidance for the content of the ToR are also provided
in the links below:

Checklist No. 1 Writing the terms of reference

Checklist No. 2 Rating terms of reference quality

Checklist No. 4 Validating methodologies

Guidance Note 4: Integrating gender equality in monitoring and evaluation of projects

5.3 Preparing and approving a Terms of Reference


The preparation of the ToR starts with an initial consultation carried out by the evaluation
manager with the tripartite constituents, partners and the main stakeholders of the evaluation
such as the project manager, main national project partners, ILO field office director,
technical backstopper at headquarters, field technical specialists, responsible evaluation focal
points, and the donor, if required. This consultation fosters the stakeholders’ provision of
inputs regarding the evaluation’s purpose, criteria, and questions. Based upon these inputs,
the evaluation manager prepares the draft ToR, which will be circulated for comments to the
same tripartite constituents, partners and stakeholders that provided inputs in the early
consultation stage. Hence, the evaluation manager consults with, and receives input from the
following key stakeholders who provide comments within a specified time span:

• Project or programme manager and key staff;


• Global, regional and/or national constituents, as appropriate;
• Main global and national partners;
• ILO Field Office Director;
• Technical backstopper at Headquarters;
• Field technical specialist;
• Responsible evaluation focal point; and
• Donor, if required (not for RBSA, RBTC, DWCP or thematic evaluations).

Once the evaluation manager receives the comments from the stakeholders, s/he is
responsible for their integration into the draft ToR, as appropriate, and passes the ToR to the
responsible evaluation focal point for approval. Copies of the final ToR are sent to the same
group of stakeholders who provided comments on the draft. The final ToR is uploaded into
the i-Track evaluation planning record and copies are then sent to the tripartite constituents,
partners and stakeholders.


5.4 Additional preparations for an evaluation


Additional preparations for conducting an evaluation are related to the four main aspects of
funds, time/schedule, information and people.

As stated in the early chapters of this Handbook, the Results-Based Management approach applied in the ILO helps to ensure an evaluation plan in which each activity is based on the resources, duties and schedule required for its effective achievement. Hence, one of the main roles of the evaluation manager is to ensure an equitable provision of resources (e.g. time, money, information and the work team) across the evaluation team, to be agreed upon between the project manager and the evaluation manager. These elements are generally organized using standard tables of resources, responsibilities and outcomes. A table commonly used to plan the team's work in pursuing the goals of an evaluation process is presented below. Additional information about each of the resources is provided in the following sub-sections.

Table 7: Planning for an evaluation

Activity Responsible Time Main Resources Outcome

5.4.1 The evaluation budget


Evaluations at the ILO are financed from the budgets of the programmes or projects. ILO
Evaluation Policy establishes that a minimum of two per cent of the total project funds should
be reserved for independent evaluations, which should be assigned to budget line 16.50. In
contrast to other budget lines, use of the resources under budget line 16.50 requires the
approval of the evaluation manager and an ILO evaluation official, but not the project
manager.

Project budget line 16 should reserve adequate resources to cover monitoring and evaluation activities for all phases of the project or programme, including the gathering of baseline data and the development of a monitoring and evaluation plan, as well as end-of-phase evaluations and the end-of-programme evaluation to assess results and impact. Even though internal evaluations may not require extra staff costs, they should be scheduled and budgeted for, as they may involve additional travel costs or workshop costs for consulting partners.

A minimum of 2 per cent of total project resources should be reserved for independent evaluations, plus a recommended 3 per cent (5 per cent altogether) for monitoring, review and internal evaluation.
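
By way of a purely illustrative calculation, for a hypothetical project with a total budget of USD 1,000,000, at least USD 20,000 (2 per cent) would be set aside under budget line 16.50 for the independent evaluation, and a further USD 30,000 (3 per cent) would be reserved for monitoring, review and internal evaluation, giving a combined provision of USD 50,000 (5 per cent).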

The ILO responsible official is in charge of ensuring that an adequate budget exists to
implement the evaluation plan which should be indicated in the original project proposal.
Table 8 contains a template for computing the evaluation budget.

Table 8. Template for computing the evaluation budget

Columns: Task | Responsible | Dates | Fee (days, rate, $) | DSA (days, rate, $) | Int'l travel | Local travel | Other | Source
Rows (tasks): Scoping mission | Desk review | Field mission | Report | Translation | Workshop | Total
Note: The scoping mission is duly computed in the case of high-level evaluations, such as cluster evaluations.

Further, six steps should be followed to calculate the evaluation budget:

• Calculation of the number of the consultant's working days.
• Determination of the consultant's level according to expertise and experience.
• Calculation of travel costs, including travel days and vehicle use for field trips.
• Calculation of the data collection days, whether for primary or secondary data.
• Calculation of accommodation and DSA costs: in addition to fees, the ILO pays for travel and a Daily Subsistence Allowance (DSA). DSA is based on rates published by the International Civil Service Commission. Travel and DSA are often paid in one lump sum to the consultant, who is then responsible for booking tickets. The ILO does not pay business class travel for consultants.
• Calculation of any additional costs (interpretation services, workshop facilities for focus group and stakeholder meetings, etc.).
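
As a purely illustrative application of these steps, using hypothetical figures: a consultant engaged for 25 working days at a daily fee of USD 400 (USD 10,000), with 10 days of in-country DSA at USD 150 per day (USD 1,500), one international round trip at USD 1,200, local travel of USD 300 and interpretation costs of USD 500, would result in an estimated evaluation budget of approximately USD 13,500, to be checked against the funds available under budget line 16.50.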

5.4.2 Evaluation Schedule


The evaluation schedule should contain:

• A Work Breakdown Structure (WBS) to determine the level of detail required to develop the evaluation schedule. The WBS sets out the activities, sub-activities and tasks needed to fulfil the outputs of a project or programme;
• The sequence in which the tasks should be performed, with the rows rearranged to reflect that sequence; and
• An estimate of how much time is likely to elapse in the performance of each activity.

An evaluation schedule based on a WBS, organizing the achievement of each of the outputs, is presented below:

WBS for Output 1, with schedule (Jan–Oct):
Activity 1.1: Tasks 1.1.1, 1.1.2, 1.1.3
Activity 1.2: Tasks 1.2.1, 1.2.2
Activity 1.3: Task 1.3.1
(Each task is plotted against the months, January to October, in which it will be carried out.)
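
For illustration only, with hypothetical content: Output 1 could be the final evaluation report, with Activity 1.1 (desk review) broken into tasks such as collecting project documents, reviewing the logframe and monitoring reports, and preparing the inception report; Activity 1.2 (field mission) covering stakeholder interviews and focus groups; and Activity 1.3 (reporting) covering drafting, circulation of comments and finalization. Each task would then be plotted against the months in which it is expected to take place.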

Additional information about implementation planning based on the Work Breakdown Structure, as well as the strategy for its design, can be found in the ILO-PARDEV Technical Cooperation Manual (ILO, 2010).

5.4.3 Information
As stated in the initial chapters of this Handbook, a desk review of all the key documents of the evaluated project should be conducted at the early stage of the evaluation process. The results of the desk review will inform the methodological approach of the evaluation and will ensure the use of appropriate evaluation techniques both to evaluate and to report the evaluation findings. The important documents to include in the evaluation process are:

• National development frameworks


• UN Development Assistance Framework (UNDAF)
• Poverty Reduction Strategy Papers
• Decent Work Country Programme Documents
• Strategic Programme Framework and Programme & Budget
• Baseline reports and information


• Monitoring reports
• Progress reports and status reports
• Previous evaluation reports
• Other studies and research undertaken
• Technical and financial report of partner agencies
• National workshop proceedings or summaries
• Beneficiary records.

5.4.4 The Evaluation Consultant


While the draft ToR is being circulated, the evaluation manager initiates the search for a suitable consultant to conduct the evaluation. The evaluation manager, together with project staff and stakeholders, should identify the evaluation skills required of the consultant. An open selection process should be conducted to select a suitable consultant. To this end, the evaluation manager should pay attention to various factors when selecting a consultant: budget; conflict of interest (no previous or current links to the object being evaluated and no personal relationship with the people who manage the project/programme); background and training; evaluation experience; professional and methodological orientation; and personal code of conduct are all relevant issues to consider in order to select the most suitable evaluation consultant or team in light of the evaluation goals.

One of the most effective, efficient and transparent ways of identifying an evaluator is to place a public advertisement or call for expressions of interest on the various evaluation electronic mailing lists and networks. There are also global and regional online evaluation networks where a call for expressions of interest can be posted to solicit CVs from qualified experts.

If stakeholders want to make a general call for CVs in order to develop a pool of evaluation consultants with specific technical expertise, a dedicated email address for that purpose can be advisable. A call for consultants normally includes: (1) the organizational unit responsible for hiring; (2) key details of the assignment; (3) the planned starting date and duration of the evaluation; (4) the core requirements; and (5) the language of the report and a contact email address.
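Purely as an illustration (all details hypothetical), such a call might read: "The ILO Country Office for [country] seeks an experienced independent evaluator for the final independent evaluation of a skills development project; the assignment is approximately 25 working days starting in October; requirements include proven evaluation experience, knowledge of the ILO's tripartite context and fluency in the working language; the report will be written in English; expressions of interest and CVs should be sent to the contact email address given in the announcement."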

The applications received in response to a call for expressions of interest usually include qualified and experienced evaluators as well as others with less experience. In order to assist evaluation managers in selecting an appropriate consultant, EVAL has developed a checklist for appraising potential evaluators based on ILO and UNEG criteria.

Once a consultant has been identified, the evaluation manager is strongly urged to undertake due diligence by checking the consultant's references and any record of previous work for the ILO. If a consultant has conducted an evaluation for the ILO, the record will cite a project code that can be searched on the EVAL public website. Summaries of the work are listed by year, and full reports are available upon request.


5.4.4.1 The specific process of selecting a consultant


The evaluation manager is the ILO staff member responsible for proposing the evaluation consultant, with input from the project manager, although suggestions from the various stakeholders can also be considered. The consultant's name, CV and credentials are circulated to the main stakeholders of the intervention, namely the project manager, main national project partners, ILO Field Office Director, technical backstopper at HQ, field technical specialist, responsible evaluation focal point, and possibly the donor. Once the evaluation manager receives their confirmation of the short-listed consultant, he or she is responsible for justifying the selection. The responsible evaluation focal point is then in charge of approving the selection of the chosen consultant, taking into account any serious reservations expressed by stakeholders. Once the consultant has been approved, the evaluation manager hires the consultant.

5.4.4.2. ILO Contracts


The evaluation manager may (through the project manager or concerned field office or
department) issue either a Service Contract for consulting companies or an External
Collaborator (Ex-Col) Contract for individuals.

Service Contracts are used for high-budget evaluations and are awarded on the basis of effective competition. In the case of requests for quotations and invitations to bid, contracts are awarded to the best qualified vendor submitting the most technically acceptable and lowest quotation or bid. In the case of requests for proposals, contracts are awarded to the qualified vendor whose proposal is considered to be the best value (technical and financial) and the most responsive to the needs of the ILO.

According to ILO Circular No. 630 on the Inappropriate Use of Employment Contracts in the
Office, the rules regarding competition do not apply to external collaboration contracts or
implementation agreements for the delivery of technical assistance.

An External Collaboration Contract (Ex-Col) is task-based and is used for smaller budget
evaluations. Such a contract recognizes the external collaborator as an individual working in
the ILO but not a staff member or ILO official. This type of contract may be used only where
there is a specific well-defined task to be performed and the output can be considered as a
specific end product (e.g. a research study, report, translation, or typed document) or where
the task assigned is one that is advisory in nature (e.g. engaging an academic or other
specialist to present a paper and be a discussant at a workshop). The conditions under which
Ex-Col contracts are allowed are as follows:

• The work to be carried out is not an ongoing activity but is time-bound to the ToR;
• The work performed must meet a specified deadline, at working times determined by the consultant within the overall work plan set by the ToR; and
• Office space, facilities or services are normally not provided, and full payment is made only when the work has been judged satisfactory (by EVAL at Headquarters).


6. Initial briefings: management of the evaluation work plan


The first duty of the evaluation manager after hiring the consultant is to brief him or her. The starting point is the first meeting with the consultant, at which the evaluation manager verifies that all documents have been provided to the consultant. The external evaluator may already have received substantial documentation at the initial briefing, and should also have been briefed on how to select and contact participants in the evaluation (including having letters of introduction). Nevertheless, the evaluation manager should provide sufficient and timely access to the appropriate documentation and people. Furthermore, the ILO project staff should assist the evaluation manager in arranging adequate access to interviews and facilities to support the consultant's work.

Specific documents that need to be delivered at the first meeting with the consultant are:

• Evaluation Contract;
• Project documents (e.g. baseline report, monitoring reports, previous evaluation
reports, etc.);
• ILO or national documentation;
• List of the individual contact details pertinent to the evaluation;
• Terms of Reference, Work Plan and Schedule for deliverables;
• EVAL Guidance documents, checklists and templates for the evaluator; and
• Code of Conduct for Evaluation in the UN System.

All key documents related to the evaluation are listed in ILO Checklist 10: Documents for the Evaluator. The items supplied to the consultant should be checked off on this form, and any additional documents supplied should be noted on it. The consultant should initial and date the form to confirm receipt of the documents. The completed form should be forwarded to the evaluation focal point, who uploads it, together with the other process documents, into the ILO i-Track database to ensure that there is a central repository for process documents.

After the initial briefing, the main duty of the evaluation manager is to ensure that the consultant stays on schedule with the first deliverable: the inception report. Approval of the inception report by the evaluation manager constitutes acceptance by the ILO of the results to be generated through the proposed methodology. It is therefore important for the evaluation manager to review the inception report carefully, checking the interview lists and guides, questionnaires and sampling, among other things, to avoid any possible biases and distortion of the results. Those reviewing should also check that the methods draw on both subjective and objective sources of data to provide a balanced and insightful report.

The inception report should be shared with key stakeholders for their information and
comment. Further information about a well-focused inception report that meets ILO quality
standards can be found in Checklist 3. Writing the inception report (ILO, 2013).

In addition to the above, it is the evaluation manager's responsibility to see that the work plan keeps to schedule, that access to information is ongoing, and that the draft and final reports are provided in conformity with the evaluation ToR. In doing so, the evaluation manager should also check that the ILO principles for evaluation are integrated into the process of data gathering and analysis.

Finally, the evaluation manager should bring to the attention of the regional or sectoral evaluation focal point any substantial problems that arise while managing the consultant's work. Ethical issues may endanger the independence of the evaluation, and timing is usually a critical issue for these short contracts, especially when a project is coming to an end. Hence, the evaluation manager must ensure throughout the evaluation process that any ethical or other issues posing a problem for the completion of the evaluation are brought to the attention of the REO or SEFP.

7. Team Management
While managing the development of an evaluation process, the evaluation manager has the main responsibility for overseeing and securing an effective work environment. All evaluations in the ILO are conducted by technical experts and evaluators who should adhere to and respect technical and ethical work standards, as well as the core criteria of professionalism, impartiality and credibility.

Once the evaluation process starts, the evaluation manager is responsible for encouraging
internal communication among the evaluation team. The evaluation manager is responsible
for promoting participatory processes that lead to the achievement of the common-value
goals. These processes are carried out through collaborative negotiation procedures that
contribute to achieve win-win outcomes.

General principles achieved through collaborative negotiation procedures are:

1. Separate people from the problem


2. Focus on interests, not positions
3. Create WIN-WIN options
4. Frame agreements using objective criteria

Further, common tools to achieve agreements through a participatory process are:

• Tools for generating opinion and agreements: Structured Rounds, T-Charts, etc.
• Tools for evaluating: Multi-Voting, Force-Field Analysis, etc.

The evaluation manager should foster critical, reflexive processes throughout the whole evaluation. This communication can be conducted in different ways, such as through meetings or workshops that foster the sharing of ideas, improvements to the current evaluation process, the building of team capacities and organizational learning. While using critical reflection strategies, the evaluation manager should encourage participatory and inclusive processes, paying close attention to the following principles:


• Creation and supervision of the quality and functionality of the regular information flow within the team;
• Guarantee of the quality of the internal mechanisms for feedback;
• Provision of the technical information required for the evaluation to be developed; and
• Guarantee of empathetic and non-judgemental communication processes, conducted through effective listening (seeking both to understand and to be understood).

Closely related to the ideas previously drawn, the evaluation manager is responsible for
fostering motivation among the individual members and the evaluation team. There are two
types of basic motivation:

• Extrinsic motivation: money


• Intrinsic motivation:
- Through developing a sense of responsibility: fostering both a sense of
autonomy and a sense of team accountability.
- Through developing knowledge of the results: awareness of the relevance of
the on-going feedback.
- Through a sense of meaningfulness: awareness of the work contribution to
society.

Closely related to the motivational processes, the evaluation manager should work on developing coaching processes that benefit the evaluation team. These can be pursued through the four stages of the GROW method, applied to the individual work process:

1. Identification and clear definition of the inner goals:

• What would you/we like to achieve during your work?
• What would be a useful way of spending this hour?

2. Identification and definition of the current achieved level:

• What have you tried so far in order to achieve the goals?
• Where could you go to get more information?

3. Definition of the potential options to be achieved:

• What else could you/we do?


• If you were to ask a colleague, what might he/she suggest?

4. Clear and realistic identification of the new work options:

• What are you/we going to do?


• What support do you/we need?

In addition to the above-mentioned tasks, the evaluation manager has the lead responsibility for overseeing the quality of the work done while the evaluation process is being developed. This implies a continual review of the work in terms of its coherence with what was initially agreed by the evaluation team (including the consultant); whether the work is proceeding on schedule; whether the available resources are sufficient; and whether the internal procedures, processes and communication are working effectively.

A final responsibility of the evaluation manager in ensuring a good team working
environment is the internal process of problem solving. The evaluation manager should act as
a facilitator and, in doing so, manage intergroup conflicts at different levels by:

i. Identifying the conflicting groups and encouraging face-to-face meetings with their
members, raising the issue of the conflict and fostering dialogue to discover mutual
interests; and
ii. Generating options and developing agreements by encouraging the search for
collaborative and integrative solutions that emphasize common interests and
de-emphasize differences.

8. Drawing conclusions: the evaluation report


8.1 Verbal evaluation report
The evaluation consultant should verbally communicate the main findings and
recommendations to the main stakeholders at the end of the evaluation process. To this end, a
workshop or meeting is arranged by the evaluation manager. As appropriate, the evaluation
manager is responsible for ensuring that the draft report is seen by all key stakeholders of the
evaluation, such as the tripartite constituents, partners, project management, main national
project partners, the ILO field office director, the technical backstopper at Headquarters, field
technical specialists, the responsible evaluation focal points, and the donor. This is the
occasion for the consultant to present the draft report and facilitate a further discussion of the
evaluation's main findings and conclusions. After the meeting, the evaluation manager solicits
and consolidates the comments received from these stakeholders and forwards them to the
evaluation consultant.

The workshop or meeting is jointly arranged by the evaluation manager, project management
and the evaluation consultant. It provides them with a preliminary view of the consultant's
findings and recommendations, on which they can comment before the report is finalized.
The main goal is to show stakeholders that their input was taken into account in the
evaluation process, as well as to increase the likelihood that they will support the evaluation
results, advocate for the programme and act on the recommendations.

The evaluator should also give the project management a debriefing focusing on the
methodological framework that has been applied to conduct the evaluation, as well as the
evaluation processes carried out, including suggestions on how the evaluation support can be
improved. Such a debriefing can be arranged through a meeting, by telephone or by email.

When appropriate, similar feedback should also be given to the ILO responsible official as
this person has line management responsibility for the project implementation. It can also be
beneficial for the evaluation team to present the main evaluation findings to the technical
department and other ILO officials, focusing on lessons learned and good practices.

8.2 Written evaluation report


The evaluation consultant is expected to complete the draft and the final evaluation report
according to the ToR. For the draft report, the evaluation consultant should submit a complete
and readable version to the evaluation manager. The evaluation manager is then responsible
for checking the quality of the draft report in terms of adequacy and readability.

The quality review of the draft evaluation report should focus on two different aspects:
adherence to the prescribed structure of the evaluation report and the quality of its content.
Further information on these two areas is presented in the tables below.

Table A. Main structure of the evaluation report:

• Cover Page with key intervention and evaluation data
• Executive Summary
• Brief background
• Purpose, Scope and Clients of evaluation
• Methodology
• Review of implementation
• Presentation of findings
• Conclusions
• Recommendations
• Lessons Learned and Good Practices
• Annexes: ToR, questionnaires, list of informants, etc.

Further information: Checklist 5. Preparing the evaluation report (ILO, 2013)

Table B. Quality and stringency of the contents of the draft report:

A. Rigor in the method

• All the evaluation criteria are addressed, and their connections to the evaluation
questions, the indicators applied and the methodologies used are specified;
• The validity and relevance of the methodology and instruments are justified in the
evaluation report;
• Triangulation is applied to verify accuracy, and mixed methods are used in the
evaluation;
• The information has been validated through stakeholder participation;
• Bias in the methods is recognized and managed in the evaluation report in order to
limit its effects on the conclusions;
• Threats to validity have been considered during the evaluation and a clear
statement of this process is included in the report;
• The limitations of the evaluation for the overall judgment of the programme have
been estimated and reported.


B. Quality of the data

• Data collection instruments have been described and documented;
• The sources of information used have been described and justified in terms of their
ability to answer the evaluation questions;
• Information has been obtained from a variety of sources;
• Any biasing feature in the information obtained has been identified and addressed
to avoid distorting the evaluation findings;
• Multiple analytic procedures have been applied to check the consistency and
replicability of findings from quantitative data:
- identification and analysis of statistical interactions,
- assessment of statistical significance and practical significance (a brief
illustrative sketch appears at the end of this section);
• Content analysis procedures and methods of summarizing qualitative data have
been applied:
- derivation of conclusions and recommendations and demonstration of their meaningfulness.

C. Credible Evaluation Findings

• The main evaluation questions have been identified and linked to the methodology
chosen;
• All evaluation questions have been addressed, or an explanation has been given
where questions could not be answered;
• Findings are relevant to the purpose and scope of the evaluation;
• Findings are supported by the evidence presented and are consistent with the
methods and data;
• A thorough account of the evaluation process has been included in the report;
• Plausible alternative explanations of the findings have been reported;
• Unintended outcomes have been discussed in relation to the evaluation findings.

D. Valid Evaluation Conclusions

• The conclusions focus directly on the evaluation questions and respond to the
established criteria of merit;
• Conclusions are limited to the applicable time periods, contexts, purposes and
activities, and to the standards against which the project's performance is judged;
• Conclusions follow from a synthesis and integration of the data into a judgment of
merit and worth, measuring performance and comparing it with the standards;
• The information that supports each conclusion has been cited;
• Judgments are fair, impartial, and consistent with the findings;
• When interpreting findings, the different stakeholders' intended uses of the
evaluation have been taken into account;
• Inferences, singly and in combination, have been justified;
• The programme's side effects have been identified and considered to define the
evaluation conclusions.

Once the evaluation draft report has been reviewed, the evaluation manager circulates the
report to the stakeholders for feedback in preparation of the final evaluation report.


Stakeholders are encouraged to make written comments but not to edit the document directly.
Comments may be sent individually to the evaluation manager on a confidential basis, and/or
collectively from tripartite constituents, partners and stakeholders. The evaluation manager
compiles the comments received and forwards them in a single communication to the
evaluator. The evaluator incorporates them as appropriate and submits the final report to the
evaluation manager.
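
To illustrate the data-quality criterion above on assessing both statistical significance and practical significance, the following minimal Python sketch uses hypothetical scores from two groups of respondents: it runs a two-sample t-test and also computes an effect size (Cohen's d), since a result can be statistically significant while remaining too small to matter in practice. This is an illustration only, not a prescribed ILO procedure, and all data are invented.

import math
from scipy import stats

# Hypothetical scores from two groups of survey respondents.
group_a = [3.1, 3.4, 2.9, 3.8, 3.5, 3.2, 3.6, 3.0]
group_b = [2.6, 2.9, 3.1, 2.7, 2.8, 3.0, 2.5, 2.9]

# Statistical significance: two-sample t-test (Welch's, unequal variances).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# Practical significance: Cohen's d (difference in means over pooled SD).
mean_a, mean_b = sum(group_a) / len(group_a), sum(group_b) / len(group_b)
var_a = sum((x - mean_a) ** 2 for x in group_a) / (len(group_a) - 1)
var_b = sum((x - mean_b) ** 2 for x in group_b) / (len(group_b) - 1)
pooled_sd = math.sqrt(((len(group_a) - 1) * var_a + (len(group_b) - 1) * var_b)
                      / (len(group_a) + len(group_b) - 2))
cohens_d = (mean_a - mean_b) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")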

8.3 Approving the evaluation report


The evaluation manager reviews the report for adherence to the ToR and for correct
formatting and presentation of recommendations and lessons learned. Once the report is
deemed complete, the evaluation manager completes and adds the official EVAL title page
(ILO, 2013) to the report. It is also the responsibility of the evaluation manager to complete
the template for the evaluation summary (ILO, 2013), as this is the key document that is
widely circulated on the web and should be completed according to the guidance provided.
The evaluation report is then forwarded to the REO or SEFP for approval. If the Regional
M&E Officer does not approve a report, it is returned to the evaluation manager and
consultant for correction. If all quality criteria are met, the Regional M&E Officer approves
the report and forwards it to EVAL.

The full set of documents submitted to EVAL for approval should include:

- the evaluation report with an EVAL title page, with the templates for lessons learned
and emerging good practices attached (Word version);
- any supplemental appendices to the report, which can be put into a zip file and need
not be in Word format;
- the evaluation summary, using the EVAL template, in the language of the report
(Word version);
- any other language versions of the report, also submitted in Word;
- the CV of the consultant (if not already submitted); and
- the EVAL approval form (to be filled in by the REO or the DEFP).

The EVAL approval form is filled in by the REO or the designated DEFP for several specific
reasons. It acts as a final quality control before the report reaches the Evaluation Unit,
ensuring that the report has been approved for submission to EVAL HQ and that all critical
documents for the evaluation are included in the submission. The form:

- presents updated details on partners in joint project evaluations, if applicable;


- indicates when an evaluation is externally managed, if applicable;
- requires that evaluation recommendations which are targeted at constituents are
highlighted and rated (annual data collection for this is presented in the Annual
Evaluation Report by EVAL staff); and
- identifies evaluations which have RBSA components (tracked and reported on by
EVAL HQ).


The Evaluation Unit then reviews the quality of the report. If the report needs adjustments, it
is sent back – through the REO or DEFP – to the evaluation manager and consultant for
changes.

If the evaluation report is approved by EVAL, the evaluation manager submits it to PARDEV,
the Country Directors, the DWT Directors, the Regional Officer, the Evaluation Coordinator
and the relevant stakeholders. Finally, the evaluation manager may authorize payment to the
consultant once he or she receives confirmation that EVAL has approved the evaluation
report.

Approval Workflow
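
The approval workflow diagram from the original Handbook is not reproduced here. As an informal illustration only, the short Python sketch below restates the approval sequence described in this section; the step descriptions are paraphrased from the prose above and carry no official status.

# Informal restatement of the approval sequence described in section 8.3.
APPROVAL_WORKFLOW = [
    "Evaluation manager reviews the report against the ToR and adds the EVAL title page",
    "Evaluation manager completes the evaluation summary template",
    "Report forwarded to the REO or SEFP for approval",
    "Regional M&E Officer approves, or returns the report for correction",
    "Full documentation package submitted to EVAL with the approval form",
    "EVAL reviews quality, or sends the report back through the REO/DEFP",
    "Evaluation manager distributes the approved report and authorizes payment",
]

for step_number, step in enumerate(APPROVAL_WORKFLOW, start=1):
    print(f"Step {step_number}: {step}")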


9. Dissemination of the Evaluation Report


One of the goals of the evaluation process is to translate the main findings and
recommendations into action. Within the framework of the Results-Based Management
approach, the outcome of the evaluation process should enable project managers and partners
to take informed decisions to improve the project. Hence, the recommendations, lessons
learned and good practices should be made accessible to interested parties to facilitate
organizational learning and to improve project design and implementation. This dissemination
should be done in close accordance with the ILO policy on public information disclosure
(ILO, 2012).

The written evaluation report should be disseminated to tripartite constituents, partners and
stakeholders. The evaluation manager will then formally submit the report to the same
stakeholders who provided comments on the draft report as well as additional stakeholders
identified throughout the process or the stakeholders suggested by project management.

Furthermore, evaluation reports should be stored in a systematic manner and the knowledge
they generate should be systematically fed into the design of new projects or the next phase
of a project. EVAL has developed an electronic tool of great value for storing information
from project evaluations.

• i-Track: a web-based, multilingual information management system that facilitates
online file storage. Files and other information can be easily uploaded by any user,
providing direct access over the Internet to ILO staff anywhere in the world. i-Track
consists of the two following modules: DocuTrack and EvalTrack.

• DocuTrack: the file management module of i-Track. It stores all types of internal
documentation using metadata that allow ILO staff to upload and access files from
the web through an easy-to-use, web-based user interface. Information can be easily
exchanged between staff members, and the module can be extended to all regions for
internal input and external access.

• EvalTrack: a module designed specifically for evaluation management by the ILO's
Evaluation Unit. It supports the tracking of the evaluation workflow and the
management of all documents pertaining to the technical work of the ILO, including
technical cooperation project documents, DWCP documents and various types of
studies (for example, mid-term evaluations, final evaluations, external reviews and
internal reviews) of both projects and programmes.
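
No public API for i-Track is documented in this Handbook, so the Python fragment below is purely hypothetical: it sketches the kind of metadata record that a module such as EvalTrack might associate with an uploaded evaluation report, based only on the document types mentioned above. All field names and values are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationRecord:
    # Hypothetical metadata fields for an uploaded evaluation report.
    project_code: str
    title: str
    evaluation_type: str          # e.g. "mid-term evaluation", "final evaluation"
    country: str
    language: str
    attachments: List[str] = field(default_factory=list)

# Example record (all values invented for illustration).
record = EvaluationRecord(
    project_code="XYZ/12/34/ABC",
    title="Final evaluation of a hypothetical decent work project",
    evaluation_type="final evaluation",
    country="(country name)",
    language="en",
    attachments=["evaluation_report.docx", "evaluation_summary.docx"],
)
print(record.title, "-", record.evaluation_type)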

All dissemination strategies should be formal outreach processes that are comprehensive and
systematic, in order to increase the likelihood that potential users will absorb emerging good
practices and lessons learned. This audience may include constituents, UN partners,
national partners, international partners, beneficiaries of development support, and the wider
global community. In any case, the report should be published in the local language if
possible and depending on budget.

In addition, the project manager, the ILO responsible official, the evaluation manager and the
evaluation focal point are encouraged to disseminate the executive summary of the evaluation
report to other interested individuals inside and outside the Office. The relevant technical
specialists in Headquarters and the field should also make an effort to disseminate relevant
lessons learned to interested officials in the Office. This can be done through diverse means
such as formal or informal meetings; discussion groups; newsletters; information briefs on
websites; annual evaluation report compilations; and/or intranet, weblogs, and communities
of practice.

10. Enhancing the use of evaluations


The final action in an evaluation process is the initiation of the management response and
follow-up to recommendations. As evaluations only lead to organizational improvements if
recommendations are given systematic follow-up by line management, this stage helps to
strengthen the use of evaluation findings, promote organizational learning and ensure
accountability for evaluation results. The process thereby contributes to improving the design
and delivery of programmes and projects, in line with the goals of the Results-Based
Management approach.

Further information is provided in Guidance Note 15. Management follow-up for
independent project evaluation (ILO, 2013).

11. Ethics and cultural sensitivity at the evaluation management level

The evaluation manager should ensure that the evaluation process is conducted with special
consideration for ethics, respect for human rights and cultural sensitivity. In doing so, the
evaluation manager is responsible for conducting an evaluation process that includes a broad
range of stakeholders with an equal opportunity to participate.

As stated in the third chapter of this Handbook, human rights values should be fostered inside
and outside the evaluation by (i) conducting a responsive evaluation process, (ii) encouraging
equality and non-discriminatory practices, (iii) being sensitive, (iv) addressing specific issues
and different concerns, (v) protecting confidentiality, (vi) obtaining informed consent from
informants, and (vii) fulfilling obligations under the law. Further, evaluation management
should be carried out in accordance with the Ethical Guidelines for Evaluation (UNEG,
2008).

In addition, the evaluation manager should foster an evaluation process that pays close
attention to:

• Legal codes;
• Local sensitivity;


• Previous experiences and lessons learned;
• Design;
• Culturally and locally sensitive questions;
• Pilot testing of questions and instruments;
• Observation;
• Translators and interpreters;
• Inclusive views: stakeholders' collaboration; and
• Provision of time and information.

Furthermore, an evaluation that integrates ethics should be built on four main principles:

A. Development of honest, objective and fair work through:

• Staff's declaration of any conflict of interest to clients;
• Design, conduct and reporting of the evaluation with respect for rights, privacy and dignity;
• Avoidance of evaluating individuals rather than projects or programmes;
• Careful management of pressure by the staff and especially the evaluation manager;
• Inclusion of stakeholders;
• Reporting to the investigative body when the evaluation uncovers evidence of
wrongdoing;
• Avoidance or reduction of any further harm to victims of the wrongdoing; and
• Full reflection of the main findings and conclusions highlighted by the evaluator.

B. Management of the effects of political activity during the evaluation through:

• Paying close attention to potential conflicts before they become a political problem;
and
• Avoiding bribes and subtle forms of influence.

C. A fiscally responsible evaluation process, through:

• Appropriate, prudent and well-documented expenditure; and
• Minimization of non-trivial costs.

D. Application of the ILO anti-fraud policy (ILO, 2009), which includes:

• The obligation of the evaluation manager and the rest of the staff to report any case of
fraud; and
• The general obligation to undertake disciplinary or similar action to address a case of
fraud.


REFERENCES

ILO References

- ILO, 1999, Circular No. 564, ILO policy on gender equality and mainstreaming,
Geneva.
http://www.ilo.org/wcmsp5/groups/public/@dgreports/@gender/documents/policy/wc
ms_114182.pdf
- ILO, 2005, A new policy and strategic framework for evaluation at the ILO,
GB.294/PFA/8/4, Geneva.
http://www.ilo.org/public/libdoc/ilo/GB/294/GB.294_PFA_8_4_engl.pdf
- ILO, 2005, Thematic Evaluation Report: gender issues in technical cooperation. GB
292/TC/1 292nd Session, Geneva.
http://www.ilo.org/public/english/standards/relm/gb/docs/gb292/pdf/tc-1.pdf
- ILO, 2008, ILO Declaration of Social Justice for a fair Globalization, Geneva.
http://www.ilo.org/wcmsp5/groups/public/---dgreports/---
cabinet/documents/genericdocument/wcms_099766.pdf
- ILO, 2008, ILO Policy on public information disclosure, IGDS N. 8 Geneva,
http://www.ilo.org/public/english/edmas/transparency/download/circular_1-igds8-
v1.pdf
- ILO, 2009, Strategic Policy Framework 2010-2015. Making Decent Work Happen,
GB. 304/PFA/2(Rev.), Geneva.
http://www.ilo.org/wcmsp5/groups/public/---ed_norm/---
relconf/documents/meetingdocument/wcms_102572.pdf
- ILO, 2010, ILO Technical Cooperation Manual- Version 1, Development
Cooperation, Geneva.
- ILO, 2011, Results-based strategies 2011–15: Evaluation strategy – Strengthening the
use of evaluations, GB.310/PFA/4/1(Rev.), Geneva.
http://www.ilo.org/wcmsp5/groups/public/---ed_norm/---
relconf/documents/meetingdocument/wcms_152025.pdf
- ILO, 2011, Applying Results-Based Management in the International Labour
Organization: A Guidebook, Geneva.
- ILO, 2012, ILO Policy Guidelines for results-based evaluation: principles, rationale,
planning and managing for evaluations. Evaluation Unit, Geneva.

ILO Checklists, Guidance and Templates

- ILO, 2012, Guidance Note 4, Integrating Gender Equality in Monitoring and


Evaluation of Projects, Geneva.
http://www.ilo.org/wcmsp5/groups/public/@ed_mas/@eval/documents/publication/w
cms_165986.pdf
- ILO, 2012, Guidance Note 7, Stakeholder participation, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165982/lang--en/index.htm
- ILO, 2012, Plan for Gender Equality 2010-2015, Geneva.
http://www.ilo.org/wcmsp5/groups/public/---ed_norm/---
relconf/documents/meetingdocument/wcms_174834.pdf
- ILO, 2013, Checklist 1. Writing the Terms of Reference, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165971/lang--en/index.htm
- ILO, 2013, Checklist 2. Rating the Quality of Terms of Reference, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165969/lang--en/index.htm
- ILO, 2013, Checklist 3. Writing the Inception Report, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165972/lang--en/index.htm
- ILO, 2013, Checklist 4. Validating methodologies, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_166364/lang--en/index.htm
- ILO, 2013, Checklist 5. Preparing the evaluation report, Geneva
http://www.ilo.org/eval/Evaluationguidance/WCMS_165967/lang--en/index.htm
- ILO, 2013, Checklist 8. Writing the Evaluation Summary for Projects, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_166361/lang--en/index.htm
- ILO, 2013, Checklist 10. Documents for the Evaluator, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_208284/lang--en/index.htm
- ILO, 2013, Evaluation Title page, Geneva.
http://www.ilo.org/wcmsp5/groups/public/---ed_mas/---
eval/documents/publication/wcms_166357.pdf
- ILO, 2013, Guidance Note 8, Validity and Reliability, ILO, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165978/lang--en/index.htm
- ILO, 2013, Guidance Note 15. Management follow-up for independent project
evaluation, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_165977/lang--en/index.htm
- ILO, 2013, Template: Submission of Evaluation for Approval, Geneva.
http://www.ilo.org/eval/Evaluationguidance/WCMS_206157/lang--en/index.htm

References from other Agencies

- OECD/DAC, 2002. Glossary of Key Terms in Evaluation and Results Based


Management, Paris.
http://www.oecd.org/development/peerreviewsofdacmembers/2754804.pdf
- OECD/DAC, 2010, Evaluating Development Co-operation - Summary of Key Norms
and Standards, 2nd edition, Paris.
http://www.oecd.org/dataoecd/12/56/41612905.pdf
- UNDP, 2009, Handbook on Planning, Monitoring and Evaluating for Development
Results, USA
http://web.undp.org/evaluation/handbook/documents/english/pme-handbook.pdf
- UNEG, 2005, Norms for Evaluation in the UN System.
http://www.uneval.org/papersandpubs/documentdetail.jsp?doc_id=21
- UNEG, 2005, Standards for Evaluation in the UN System.
http://www.uneval.org/papersandpubs/documentdetail.jsp?doc_id=22


- UNEG, 2008, Code of conduct for Evaluation in the UN System.


http://www.unevaluation.org/unegcodeofconduct
- UNEG, 2008, Ethical Guidelines for Evaluation, UNEG.
http://www.unevaluation.org/ethicalguidelines
- UNEG, 2011, Integrating Human Rights and Gender Equality in Evaluation towards
UNEG Guidance, UNEG.
http://www.uneval.org/papersandpubs/documentdetail.jsp?doc_id=980
- UNEG, 2012, Integrating Human Rights and Gender Equality in Evaluation -
Towards UNEG Guidance
http://www.unevaluation.org/HRGE_Guidance


Annex A. Quantitative and Qualitative tools for evaluation

Source: ILO, 2012, ILO Policy Guidelines for results-based evaluation: principles, rationale, planning and managing for evaluations. Evaluation Unit, Geneva.
