Paper No. 10
FOREWORD
Occasional Papers: Issues in Training is a series of papers for circulation among IRRI scientists interested in and working with the Training Center. The paper series is a fast and flexible means of presenting issues; presenting plans for training; providing information on the results and impact of training efforts; and providing a forum for discussion of the methods, approaches, and dynamics of training events and materials.
These papers have not been edited and are works in progress. It is intended that interested readers will respond directly to the authors with comments, suggestions, and professional critique.
The Standing Panel on Impact Assessment Consultative Group on International Agricultural Research (CGIAR) and The International Maize and Wheat Improvement Center (CIMMYT)
International Conference on Impacts of Agricultural Research and Development: Why Has Impact Assessment Research Not Made More of a Difference?
San José, Costa Rica, 4-7 February 2002
Director of the Training Center, Training Evaluation Specialist, and Director of the International Programs Management Office, respectively, at the International Rice Research Institute, Los Baños, Philippines
Table of Contents
Abstract
2. Theory of Evaluation
Generalized Model
Level 1: Provision of Training: Event Analysis
Level 2: Completion of Training: Personal Acquisition
Level 3: Resumption of Job-Related Activities: Follow-Up
Level 4: Changed Organizational Performance: Impact
5. Conclusions
6. Bibliography
Abstract
Paul Marcotte, R. Bakker-Dhaliwal, and M.A. Bell

Since 1962, over 13,000 rice professionals from all over Asia have benefited from in-house training offered at IRRI, while several thousand more have been trained by IRRI staff in-country. Although training events have been regularly evaluated, such evaluation has been inconsistent and without a conceptual model for long-term impact assessment. Currently there is no complete institutional memory of training, and the baseline data available for comparison are incomplete, making it difficult to assess long-term impact. The IRRI Training Center's (ITC) new approach to evaluation, instituted in 2000, establishes evaluation as a proactive process rather than a reactive one at the end of training. The objectives of the new evaluation structure are to: a) standardize and systematize the monitoring of IRRI training; b) generate consistent baseline evaluation data for all training; c) use evaluation results, both during and after an event, to modify courseware to meet client needs for improved delivery; d) use the baseline data for follow-up evaluations; and e) ultimately progress sequentially to higher levels of evaluation to assess impact at the individual, institutional, and national levels.

ITC's new model for evaluation comprises four levels, based on a process proposed by Donald L. Kirkpatrick 40 years ago. Although the approach is not "new," its systematic implementation at ITC is. At Level 1, training events are evaluated to gauge participant satisfaction with course content, presentations, and usefulness to the participants' work, both during (for courses longer than 2 weeks) and at the completion of each event. Level 2 focuses on personal acquisition of knowledge, skills, and attitudes, using pre- and post-tests. Level 3 requires follow-up 3 months to a year after an event to assess the extent to which skills and knowledge have been applied to job-related activities. At Level 4, evaluations are conducted 1 to 3 years after a training event to assess organizational change resulting from skill/knowledge transfer, using a baseline comparison built from the previous three levels.

Although the ITC has a new and comprehensive framework for impact evaluation, it is recognized that institutional, human-resource, and practical limitations have prevented its full implementation. Adding to this challenge, the ITC is expanding its training opportunities by adding numerous multimedia technologies to its courseware, which demand a modified and in some cases a completely different approach to evaluation. This paper focuses on some of the difficulties, the successes, and the challenges to be addressed in fully implementing a systematic and comprehensive evaluation program.

Keywords: Impact Evaluation, Training Evaluation, IRRI, Evaluation Methodology
Since its inception, IRRI has always considered training to be one of the key mechanisms in disseminating its accumulated knowledge and technology to a broad audience. Through its degree and non-degree programs, it has developed human resource capacity in the rice research sector. Its distance learning program and materials are models for adult learning. Its on-campus and regional training have literally changed the approach and method of rice production in the world. Training has played a major part in the process historically and remains an essential technique for multiplying IRRI's impact. IRRI has defined training as the process that enables individuals to acquire knowledge, skills/tools and abilities that will allow them to fulfill the requirements of their job, achieve their career aspirations and attain the goals of the organization. Thus, training is multifunctional as it plays a vital role in individual staff development,
provides essential scientific information to organizations, and ultimately enhances productivity, income, and livelihood (Marcotte, 2000).

The Strategic Plan for Delivery for Impact

In order to ensure the success of this process, in 2000 a small group of IRRI trainers and scientists began to review the delivery programs, needs assessments, and materials for training. What became immediately clear was that planning of delivery had taken a back seat to the actual delivery. Courses had been delivered successfully for years, but materials had not been updated; new scientific findings had not been incorporated; choices for the range of courses delivered were ad hoc, dependent on the scientists' time and interest rather than on the needs of the clients; and the evaluative process was vague and piecemeal at best.

As a result of this review, a strategic plan was written (Marcotte et al., 2001). This plan incorporated all aspects of strategic planning: Phase 1, a diagnostic phase, in which a vision was created, a stakeholder analysis was performed, and an assets analysis (SWOT) was conducted; Phase 2, in which priorities were set; Phase 3, in which an implementing process was established; and Phase 4, in which a maintenance and evaluative process was begun.

By the fall of 2000, an Evaluation and Monitoring Unit was established and operational within the Training Center. This newly established unit was charged with identifying and assessing the impact of the IRRI training program. The unit had two primary activities: 1) to evaluate training events, and 2) to assess the impact of country projects. The remainder of this paper analyzes the first of these activities, the training event evaluation process.
2. Theory of Evaluation
Evaluation is a process that is neither simple nor easy. The process must start with the establishment of a baseline and an assessment of skills, knowledge, and delivery, followed by follow-up and then the determination of impact. Thus a simple question such as "Why has impact assessment research not made more of a difference?" is not easy to answer, and in many cases may be impossible to answer. Based on the Strategic Plan "Delivery for Impact: The Dissemination of Rice-Related Knowledge and Technology" (Marcotte et al., 2001), the IRRI Training Center established a process so that, ultimately, impact could be assessed. Without the entirety of this process, the essential question of the impact of research and dissemination cannot be answered. While the approach itself may not be novel (in accordance with the guidelines of this conference), the accomplishment of the entirety of the process necessary to assess impact appears to be novel in the history of the CG.
Generalized Model
IRRI has adopted a generalized model for training event design and delivery. This model is composed of 8 essential elements, beginning with needs assessment and progressing through the setting of an agenda, construction of modules, testing and validation, redesign, materials production, and delivery. The process is interactive, with evaluation validating individual elements and circumscribing the entire process.

Figure 1: The Design and Delivery of a Training Event
[Figure 1 is a cycle diagram of the development sequence (module construction, material production, delivery, redesign/modification), circumscribed by evaluation and validation/evaluation feedback.]
Evaluation is the final step in the process and confirms (or refutes) the approach. Unfortunately, all too often, the following quote, while relatively old, is still accurate: "Millions for training, but not one cent for evaluation. By design or happenstance, this is an all-too-common occurrence. In many instances it is assumed that training programs have been effective because participants enjoyed the presentations" (Cascio and Awad, 1981: 307).
Given this understanding and approach, the IRRI Training Center established a continuous evaluation of its program. The conceptual model and an explanation of the various levels follow.

Figure 2: Evaluation Type/Sequence
Level 1: Training Event Evaluation. Assesses satisfaction of trainees. Instrument: event assessment questionnaire. Timing: completion of training.

Level 2: Skills/Knowledge Acquisition. Assesses change in knowledge, skills, and attitudes. Instrument: pre-test/post-test. Timing: completion of training.

Level 3: Skills/Knowledge Transfer. Assesses extent of application of skills/knowledge to job-related activities. Instrument: survey (interview and/or questionnaire). Timing: 3 to 6 months after training.

Level 4: Organizational Performance Change. Assesses organizational change as a result of skills/knowledge transfer and incorporation. Instrument: baseline comparison. Timing: 1 to 3 years after training.
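As an illustration of the Level 2 measure, a pre-/post-test comparison can be summarized as the average per-trainee score gain. The following is a minimal sketch only; the function name and the sample scores are hypothetical, not part of the ITC instruments or data.

```python
def knowledge_gain(pre_scores, post_scores):
    """Average per-trainee gain between a pre-test and a post-test.

    pre_scores and post_scores are parallel lists of test scores
    (one entry per trainee, in the same order in both lists).
    """
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre- and post-test lists must be the same length")
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# e.g. five trainees tested before and after a course (scores in percent)
pre = [45, 60, 50, 70, 55]
post = [70, 80, 65, 85, 75]
print(knowledge_gain(pre, post))  # average gain in percentage points
```

A positive average gain on its own does not attribute learning to the course, which is why the model pairs Level 2 with the later follow-up levels.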
Level 1: Provision of Training: Event Analysis

The Level 1 evaluation focuses on the training event itself, assessing the satisfaction of trainees with the trainers, the facilities, and the content of the program. These are 'reaction' criteria that measure the trainees' impressions and/or feelings about the program. This is commonly accomplished with an evaluation form administered at the end of the event, or at properly sequenced intervals during an extended training event.
While IRRI has used a variety of formats for this level in the past, the Training Center now has a standard data collection format so that the information collected on events is comprehensive and comparable across events. This data collection is a necessary step in understanding the learning and success of an event and, more importantly, is essential to the training program evaluation. The recommended reporting format includes the following four parts:

Part 1: Report of Participants' Event Evaluation Form
a. Review of the workshop on general reactions such as overall rating, meeting of objectives, and strengths and weaknesses
b. Assessment of topics' content, presentations, usefulness, and time allotment
c. Features
d. Additional topics
e. General comments
Part 2: Summary of Participants' Evaluations
Part 3: Conclusions/Recommendations
Part 4: Annexes
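A questionnaire item on the 5-point scale can be reduced to a single score by weighting each rating by the number of respondents who chose it. The sketch below assumes this is the computation behind the "weighted average" figures reported for courses; the function name and the sample response counts are illustrative only.

```python
def weighted_average(counts):
    """Weighted average of a 5-point rating item.

    counts maps each rating (1=Poor ... 5=Excellent) to the number of
    respondents who chose that rating.
    """
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no responses recorded for this item")
    return sum(rating * n for rating, n in counts.items()) / total

# e.g. 20 respondents rating one topic on the 5-point scale
responses = {5: 8, 4: 9, 3: 2, 2: 1, 1: 0}
print(round(weighted_average(responses), 2))
```

Keeping the full response distribution, rather than only the average, also allows the standard deviations reported alongside the averages to be computed.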
[Table 1, listing the training events with their durations, participant numbers, and average ratings, is only partially recoverable here. The surviving fragment lists events 10-13: the Cross-cultural Awareness and Sensitivity Workshops; Valuing and Promoting Cagayan Indigenous Rice Varieties in Cagayan Valley, for extension agents and for farmers; and the farmer-led Iloilo Farm Walk demonstration program. Rating scale: 5 = Excellent, 4 = Good, 3 = Average, 2 = Fair, 1 = Poor.]
However, evidence of participant satisfaction alone does not demonstrate the efficacy or value of a training event, nor does it help improve the courseware for better delivery and increased impact in the future. In an effort to increase the usefulness of the data collected, the TC L1 evaluations also collect data on the process (e.g., how good were the presentations, were the objectives met, how well was the course organized and managed) and on the specific content and its usefulness to work, topic by topic. For courses of 2-4 weeks, this content-specific evaluation is administered weekly, and for courses of 4-8 weeks it is administered bi-weekly. This is also partly a function of where the course is delivered, i.e., at the IRRI Training Center or regionally. The results of this more rigorous approach are summarized in Table 2 and Figure 4.
Table 2: Course ratings* for content, usefulness to work, and presentations, for courses delivered in 2001. Figures are weighted averages, with average standard deviations in parentheses.

Course                    | Content     | Usefulness  | Presentations | Overall average
INM                       | 4.24 (0.61) | 4.02 (0.77) | 4.19 (0.66)   | 4.15
RP I                      | 4.28 (1.04) | 3.94 (1.01) | 4.16 (1.03)   | 4.13
Hybrid Rice               | 4.16 (0.70) | 3.93 (0.74) | 3.98 (0.68)   | 4.02
RP II                     | 4.18 (0.69) | 3.70 (0.90) | 3.97 (0.74)   | 3.95
Seed Health Methodology   | 4.12 (0.41) | 3.59 (0.40) | 4.04 (0.34)   | 3.92
MAS for NRM               | 4.20 (0.60) | 4.16 (0.60) | 4.10 (0.56)   | 4.15
TOT Support               | 4.40 (0.64) | 4.16 (0.80) | 4.35 (0.63)   | 4.30
PowerPoint I              | 4.33 (0.67) | 4.10 (0.78) | 4.31 (0.64)   | 4.25
PowerPoint II             | 4.27 (0.72) | 3.99 (0.82) | 4.16 (0.73)   | 4.14
Cross-cultural Workshops  | 3.53 (0.73) | 3.44 (0.68) | 3.39 (0.72)   | 3.45
Cagayan (Extension)       | 4.22 (0.68) | 3.80 (0.80) | 3.96 (0.75)   | 4.00
Iloilo farm walk          | 3.89 (0.79) | 3.93 (0.50) | 3.89 (0.79)   | 3.90
Cagayan (Farmers)         | 3.94 (0.84) | 3.38 (0.75) | 3.87 (0.87)   | 3.73
Totals and averages       | 4.16 (0.68) | 3.86 (0.74) | 4.06 (0.68)   | 4.03

*Rating scale: 5 = Excellent, 4 = Good, 3 = Average, 2 = Fair, 1 = Poor.
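The overall average in Table 2 appears to be the simple mean of the three dimension scores (content, usefulness, presentations), which matches the published figures row by row. A quick sketch, assuming that interpretation (the function name is illustrative):

```python
def overall_rating(content, usefulness, presentations):
    """Overall course rating as the mean of the three dimension scores,
    rounded to two decimals as in Table 2."""
    return round((content + usefulness + presentations) / 3, 2)

# INM row of Table 2
print(overall_rating(4.24, 4.02, 4.19))  # 4.15
```

Applying the same function to any other row of the table reproduces its overall-average column.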
The summaries illustrated in Table 2 and Figure 4 indicate that although the participants are generally quite satisfied with, and interested in, the topics' content and presentations, they are more likely to question the usefulness (to their work) of the information delivered. The lower scores for usefulness prompt the TC to reflect on questions such as: do we, the deliverers of information, truly understand the jobs and needs of the participants? Are we failing to emphasize clearly the usefulness of the information we deliver? How can we re-sequence the course topics so that the usefulness is more apparent? Are our training needs assessments really identifying participant needs? Is IRRI or are our NARES partners targeting and sending the wrong individuals for a particular course? Although these are broad questions that will need to be addressed at an institutional level, practical results that aid in restructuring a course are also acquired during the weekly evaluations. Uncertain results are often clarified or reconfirmed in the open-ended sections of the evaluation form or in focused discussions at the end of a training event.
Figure 4: Course ratings for content, usefulness to work, and presentations, for courses delivered in 2001.
Table 3: Year 2000 courses that underwent Level 3 (follow-up) assessments and their respective number of Thai participants
Course Name                        | # Thai participants
Genetic Evaluation and Utilization | 1
Multi-Agent System for NRM         | 4
Rice Seed Health                   | 1
Use of IT in Reaching Farmers      | 1
Genetic and Environment            | 1
Total                              | 8
In response to a general question regarding overall course usefulness in retrospect, all the interviewees rated the courses as good (4.0) on a scale of 1 to 5, where 1 is poor, 2 is fair, 3 is average, 4 is good, and 5 is excellent. However, when asked more specific questions related to content/topic usefulness and the design/delivery of the course, the results were more variable, as presented in Figure 5. Once presented with the course syllabus, the interviewees were able to be fairly specific about which topics were useful in their current work and which were not. Of the 8 interviewees, 5 indicated that they had not yet used the knowledge and skills gained from their training in their current work, although all expected to use them in the future. Reasons given by the interviewees for the lack of application of the technology to their current work included: 1) it was still too soon after the training, 2) the person was still a new researcher, and 3) other job responsibilities interfered with the application of the new technology. Of the 3 interviewees who were already utilizing the new technologies, all had either academic or job-related responsibilities that could easily and directly incorporate the newly acquired information.
In terms of qualitative data, some interesting comments related to issues that the Training Center frequently encounters and struggles with, such as a) gender imbalance within a training event, b) domination of one national group within a regional course (i.e., when 50% or more of the participants represent a single country), and c) English language capabilities. Many suggested that the IRRI Training Center change its admission policy regarding trainees. One interviewee summed up the general opinion as follows: "Many people (especially field technicians) in my workplace would benefit greatly from IRRI's training but are ineligible to apply because of their formal education level. Since field technicians tend to do the actual work in the field/lab/etc., many of IRRI's trainings would be of the most practical use to them. The training will also provide them with an opportunity to move up professionally; field technician position is usually very stagnant."

The Thai follow-up experience was useful for planning a framework for future Level 3 assessments. Although in-person follow-up is more expensive than other methods such as mail, phone, or email, it may be more appropriate to the diverse Asian environment within which the Training Center functions. During the interviews, the response and accuracy rates were higher because questions could be easily clarified, and many of the interviewees found it difficult to express their thoughts in writing. In addition, the Thai translator observed that interviewees would give more elaborate answers when the questionnaire items, phrased in English, were asked verbally, whereas they would leave many of the open-ended questions unanswered on the form even when encouraged to write in Thai.

Delivery and Follow-Up: Nepal Case

In addition to the implementation of the evaluative process for training courses, the same process is being extended to technology transfer activities and workshops in the field.
In the spring of 2001, a small team of IRRI scientists, composed of JK Ladha (Crops, Soil and Water Systems Division), V Balasubramanian (CREMNET), and Paul Marcotte (Head, Training Center), conducted a workshop for researchers and extension agents of the Nepal Agricultural Research Council (NARC). The content of the workshop was nutrient management, specifically the Leaf Color Chart (LCC). The LCC is a simple tool for determining the nutrient needs of rice plants. While simple in application, the LCC is based on sophisticated and complex science, and it has been field-tested extensively in Indonesia, Vietnam, and the Philippines. Basically, the concept is that the color of the leaves of the rice plant indicates the nutritional inputs required for a healthy stand and maximum yield. Research in these countries has indicated that inputs and their costs are minimized, and yields maximized, because nutrient uptake is most efficient when inputs are applied at the time of need rather than at prescribed intervals. In other words, the inputs are based on readings of need in the actual field rather than on laboratory or generalized recommendations.
Twelve researchers and extension agents attended the mini-workshop held at the NARC facilities on the outskirts of Kathmandu. The workshop took half a day and included lectures, discussions, and hands-on practice. The researchers were taught to use the LCC and to design a field-based research experiment. Each was supplied with a package of materials, including the LCC, instructions for use, and a guide for the design and collection of baseline information. Several months later, the team re-visited Nepal. The purpose of the follow-up visit was to review the sites selected for field experiments. The first of these was a site designed with local farmers by Raj Schresta, a NARC researcher, in the foothills to the north of Kathmandu near Nagarkot. Experiments were arranged on the terraces to compare the traditional farmer practice of nutrient inputs with standard recommendations, with the LCC approach, and with a control field. Once the experiments were designed, the analysis and input decisions were up to the farmers, who had been trained in the LCC approach. Preliminary visual inspection of the fields indicated that the LCC fields were outperforming all others: the plants were healthier, the canopy was fuller, and the prediction was that the LCC field would out-yield the others by 20%. The farmers instructed Mrs. Kamba, Chair of the IRRI Board of Trustees, in the LCC methodology (see left photo below). The team then proceeded to one of the participating farmers' homes for a discussion about the experiments.
A second trip within Nepal was made to Parwanipur, in the Terai along the Indian border. Local researcher and IRRI PhD scholar Regmi had arranged for several experiments to be conducted in the area. The results were the same as above: all the LCC experiments outperformed traditional practice, both at the experiment station and in the farmers' fields. As above, the team also met with local farmers to discuss the outcomes of the trials and experiments. This LCC technology transfer workshop and its follow-up show direct potential impact on farmers' livelihoods. However, experiences of this type need to be more systematically recorded for future Level 4 assessments.
5. Conclusions
There are a number of conclusions that can be made about the evaluative process that has been established. Some are specific to the process, and some are details on the lessons that have been learned about specific training interventions.

The Process: Levels 1 and 2 are time consuming, and the staff must be dedicated and vigilant. Trainers must be briefed on the need for, and heuristic value of, the documents, and the summaries must be prepared in a short time frame for immediate use. When designed and administered appropriately, Level 1 evaluations are the best of the four levels with respect to cost and efficiency. Because L1 evaluations depend on participant responses, data collection is quick (efficient) and inexpensive. In addition, the information collected is useful, and the immediate return on investment (ROI) can be significant.

With respect to Level 2, it must be understood that pre- and post-tests are not appropriate for all courses. When appropriate, they are powerful measures of skills and information acquisition directly related to training.

Level 3 is a very difficult and costly proposition that requires a great deal of cooperation. This is especially true with the interview format, when the monitoring and evaluation staff must visit multiple trainees in a number of countries. However, Level 3 provides essential information on the use value of the learning received in the training events. Incorporating on-site supervisors in the process streamlines the information flow and creates vested interest in the training received. Follow-up (Level 3) interviews are useful instruments for gauging the application of a new technology to an individual's work, and therefore for impact assessment at a personal and organizational level. But because of their associated expense, they need to be carefully targeted to specific countries or groups of people as a function of IRRI's relationship priorities with the NARES.

With respect to Level 4, there is a need for clearly expected results so that there can be unequivocal findings and attribution. Unfortunately, it is often not possible to show that personal or organizational improvement can be attributed to the training. However, if the skills learned are not being used, then the training may be considered a failure. This may be the ultimate danger of conducting impact evaluations: it may be found that, despite the positive results and apparent success of the training, the skills and information are not in use or effective.
Specific Training Events: Follow-up in the Nepal case indicated that extension agents understood the workshop content, were able to replicate the workshop, and that local farmers were able to understand and use the technology successfully. A simple tool like the LCC, based on correct science and field-tested for performance, was adopted by farmers and became part of their analytic, systems approach to production. In this case, farmers saved money on inputs and produced higher yields for consumption and the market; both income and food supply increased.

With respect to the Thailand follow-up, there were indications that the selection and targeting of participants may not have been the most appropriate for the courses. This is a constant concern. Often, participants are selected on the basis of reward rather than need. One effort to remedy this is the incorporation of In-Country Liaison Scientists' recommendations on participants.
Although we have yet to complete the whole evaluation process, from Level 1 through Level 4, we have started to discriminate between what is feasible and what is useful, and to learn how to enable the process successfully while obtaining the necessary baseline data for long-term impact analyses.
6. Bibliography
Cascio, W.F. and E.M. Awad. 1981. Human Resource Management. Reston, Virginia: Reston Publishing Company, Inc.

Kirkpatrick, D.L. 1998. Evaluating Training Programs: The Four Levels. 2nd ed. San Francisco, California: Berrett-Koehler Publishers.

Marcotte, P.L. 2000. Guidelines for the Dissemination and Impact of IRRI Information (GNI-05). IRRI, Los Baños, Philippines.

Marcotte, P.L., M. Bell, M. Quiamco, G. Castillo, and S. Morin. 2001. Delivery for Impact: The Dissemination of Rice-Related Knowledge and Technology. IRRI, Los Baños, Philippines.
OCCASIONAL PAPERS: ISSUES IN TRAINING

Paper No. 1: Report on the Think Tank Meeting on the Use of ICT to Support IRRI's Training Program. R.T. Raab (April 1999)
Paper No. 2: Web-based Technology: Creating Access to Rice Science or Widening the Digital Divide? P.L. Marcotte, M.B. Quiamco, and L. Norman (October 2000)
Paper No. 3: Research for Development: IRRI's Strategy for Enhancing Research Relevance and Application. S. Morin, P.L. Marcotte, M.A. Bell, V. Balasubramanian, and F. Palis (October 2000)
Paper No. 4: Report of IT Sessions at the Expert Consultation on Training. Compiled by D. Shires (February 2001)
Paper No. 5: The Training Center Contribution to the Strategic Marketing of IRRI. D. Shires (March 2001)
Paper No. 6: A Strategy and Implementation Plan for the Use of New Approaches and Technologies in Training. D. Shires (April 2001)
Paper No. 7: IT Applications in Training and Delivery: The IRRI Experience. M.B. Quiamco (May 2001)
Paper No. 8: IRRI's Computer Based Information Delivery System in Training Agricultural Researchers. R. Bakker-Dhaliwal, P.L. Marcotte, S. Morin, M. Bell, and P. Comia (July 2001)
Paper No. 9: The Missing Last Mile in the Delivery of Knowledge to the Agricultural Sector. T. George, S. Morin, and J. Quiton (December 2001)
Paper No. 10: Assessing Training Impact: IRRI's New Evaluation Approach. P. Marcotte, R. Bakker-Dhaliwal, and M. Bell (January 2002)