
Coursework Header Sheet

158942-19

Course INDU1116: Dissertation (MAIHRM) Course School/Level BU/PG


Coursework MAIHRM Dissertation - January Submission Assessment Weight 100.00%
Tutor DA Hughes Submission Deadline 29/01/2010

Submission date for MAIHRM students who were passed to the dissertation stage in SEPT 2009

Coursework is receipted on the understanding that it is the student's own work and that it has not,
in whole or part, been presented elsewhere for assessment. Where material has been used from
other sources it has been properly acknowledged in accordance with the University's Regulations
regarding Cheating and Plagiarism.

000529696
Tutor's comments

For Office Use Only
Final Grade Awarded: _________
Moderation required: yes/no    Date: _________
Tutor: _________
Title Page

A study on measurement methods of training program evaluation in the Indian BPO industry: A case study of IBM-Daksh

('Evaluation of Training')

SUBMITTED TO: NIELS WERGIN

THE UNIVERSITY OF GREENWICH

SUBMITTED BY: YIRMEICHON KEISHING

MA IHRM

29 JANUARY 2010

ACKNOWLEDGEMENT

I would like to express my thanks to:

My parents, for their patience and support in every way.

My supervisor, for his guidance and support.

Last but not least, the Almighty God.

TABLE OF CONTENTS
COURSEWORK HEADER SHEET

TITLE

ACKNOWLEDGEMENT

ABSTRACT

INTRODUCTION

LITERATURE REVIEW

METHODOLOGY

FINDINGS

ABSTRACT

The aim of this research was, firstly, to examine the existing theories of training program evaluation as a whole and, secondly, to explore a case study (IBM-Daksh) on the relevance of a widely established academic model (the Kirkpatrick Model) for evaluating training programs in the Indian BPO industry. The research achieved the following objectives: to assess the need for training program evaluation; to identify and evaluate various measurement methods of training program evaluation; and to assess the effectiveness of the Kirkpatrick Model for methodical evaluation of training. The research addressed the following questions: why is training program evaluation needed; what are the various measurement methods of training program evaluation; and how effective is the Kirkpatrick Model for methodical evaluation of training?

Training evaluation is needed at IBM-Daksh principally for the purposes of controlling training results and implementation. Training evaluation at IBM-Daksh should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer, and should be measured against criteria of qualitative rather than quantitative performance. Lack of accountability is the major challenge of training evaluation for IBM-Daksh. Training evaluation at IBM-Daksh should be a methodological approach to measuring learning outcomes, and changes in learners and organizational payoff, as dimensions of the training evaluation target area, need to be the central focus. The Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh; of its dimensions, learning and results require the most focus in order to obtain the desired results from training evaluation. The Kirkpatrick Model is also highly effective in evaluating e-learning at IBM-Daksh, and is cost-effective and efficient in controlling staff turnover there.

Chapter 1
INTRODUCTION

1.1 BACKGROUND AND JUSTIFICATION

Training is the most essential and inseparable part of human resource development in any organization in India. Over the last ten years there has been a continuous need for change in the techniques, methods and strategies of a growing business world (Lynton and Pareek, 2000). With the advancement of technology there has been a rapid increase in worldwide competition in several areas, and the traditional approach to organizational training has changed dynamically towards a more liberal one. In the life sciences, for example, these drivers are determined by rapid technological growth in the field of information technology (Priest, 2001). Emerging states are taking an extensive view of the openings in new areas and gaining hold over the application of new technology used to acquire skills and concepts. Human resource development therefore remains the most vital concern of any organization (Phillips, 2003). Training is also important for checking whether the duties of employees are properly discharged, so that organizational goals are fulfilled within the stipulated period of time, and whether each training program is monitored properly for timely evaluation of work performance.

Evaluation of training depends on whether the training is effective with the application of new technology. Evaluation helps candidates to find the problematic areas in their learning process. Training has certain objectives that help in evaluating and determining the techniques used for quality-oriented growth in an organization; it helps to strengthen relations among employees and to evaluate the performance of interns undergoing the training process for the company. Many companies do not apply a pattern of training at work, particularly where the trainers and the personnel department do not have sufficient time or resources to implement one. Training techniques should be improved and provided for, so that available resources can be evaluated and market rivals matched. Lack of assessment decreases the efficiency of an organization and also hampers the quality of production. Evaluation of training depends on a number of issues; however, targets need to be realistic. Appraisal requires more elaborate information, and therefore considerable investment. Management training in general should be made clear to all, so that everyone's expectations are fulfilled. Furthermore, the training process helps in planning to review employees' potential at a certain level of work. Extensive management training should be made friendlier, to ensure the requirements of employees and trainers during actual work within the organization, while the company maintains a regular check on the performance of interns at work.

To make the training process more effective, the following considerations must be kept in mind for the achievement of organizational goals. The training process must be completed within a certain time frame, with clear objectives, to fill the vacant post. The trainer must review the period after which participants who have completed the training are expected to return to work, so that when they come back to the actual work field there is no confusion about the execution of their work. The nature and scope of training depend on the effective return on the company's investment with regard to the satisfactory achievement of organizational goals.

The evaluation of performance is a regular phenomenon in almost every organization. Chen and Rossi (2005) explain that the evaluation literature depends more on practice: for example, most training evaluation is taken from the Kirkpatrick Model, but at present it is driven by market demand. The information for evaluation at this level is usually collected through questionnaires at the end of the training program.
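As an illustration only, end-of-program questionnaire data of the kind described above might be aggregated as in the following sketch. The question names and the 1-5 rating scale are assumptions for the example, not instruments taken from the case study:

```python
# Illustrative sketch: aggregating end-of-training questionnaire
# responses (reaction-level data). Question names and the 1-5 scale
# are assumptions, not drawn from the IBM-Daksh case.
from statistics import mean

def summarise_reactions(responses):
    """responses: list of dicts mapping question -> score (1-5).
    Returns (mean score per question, overall mean)."""
    questions = responses[0].keys()
    per_question = {q: mean(r[q] for r in responses) for q in questions}
    overall = mean(per_question.values())
    return per_question, overall

# Example: three trainees rate a session on three assumed questions.
responses = [
    {"relevance": 4, "trainer_clarity": 5, "materials": 3},
    {"relevance": 5, "trainer_clarity": 4, "materials": 4},
    {"relevance": 3, "trainer_clarity": 4, "materials": 4},
]
per_question, overall = summarise_reactions(responses)
print(per_question)
print(round(overall, 2))
```

A summary of this kind only captures reactions; as the later discussion of the Kirkpatrick Model makes clear, it says nothing about learning, behavior or results.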

The effectiveness of a training program depends heavily on the positive effect it has on the given task. Training sessions help both staff and group members to raise their working capability, but the demands placed on them are increasing sharply in the current scenario. One objective of a training program is to establish a hierarchy among the various ranks in the organization for the execution of work. The evaluation of a training program must show an improvement in the working efficiency of workers at various levels and, alongside this, an increase in the financial outcomes of the company. The learning process is not traceable over a short time, but top managers must be efficient in serving the organizational goal, as accountability demands have grown with increasing market competition in recent years. Financial difficulties are big hurdles for emerging companies in the operation of their work. Vocational training is considered an important aspect of the development process, and evaluation of training is the most essential part of calculating employee performance. There is a vast gap between the actual performance desired and that achieved; this disparity is attributed to irregularity in the methods used by training institutes for the evaluation of employees.

The function of this research is to identify areas for improvement within organizations in the Indian BPO industry, through the case study of a leading BPO organization in India, IBM-Daksh. Evaluation helps in calculating the degree to which the methods and agendas of the company are achieved. Evaluation is said to be a course of action through which individuals at work learn and gain experience (Torres, Preskill and Piontek, 1996). In this study, however, evaluation is used to find out cause and consequence in how work is carried out. One key model used in this study is Kirkpatrick's. This model was first elaborated in 1952 and remains unique to date (Stufflebeam, 2001). The model rests on four bases for judging training: reaction, learning, behavior and results (Kirkpatrick, 1996).

Kirkpatrick brought out the first model for evaluating the performance of employees in training. The model sets ideal parameters against which to measure and check the capability of employees, and even today it remains the most popular in organizations (Anita et al., 2006). Today we use modified versions of the Kirkpatrick framework, which comes in four levels (Stoel, 2004). There are, however, several weaknesses associated with the Kirkpatrick Model, including loose connections between results, the actions of participants, trainee responses and the performance of employees at work across the various levels.

1.2 AIMS AND OBJECTIVES

The aim of this research is, firstly, to examine the existing theories of training program evaluation as a whole and, secondly, to explore a case study (IBM-Daksh) on the relevance of a widely established academic model (the Kirkpatrick Model) for evaluating training programs in the Indian BPO industry. The research attempts to realize the following objectives:

• To assess the need for training program evaluation

• To identify and evaluate various measurement methods of training program evaluation

• To assess the effectiveness of the Kirkpatrick Model for methodical evaluation of training

1.3 RESEARCH QUESTION

• Why is a measurement method of training program evaluation imperative, and how effective is the Kirkpatrick Model for methodical evaluation of training?

Chapter 2
LITERATURE REVIEW

2.1 INTRODUCTION

Training is the most vital function of an organization. It helps workers to become familiar with the tools and technology used in the production process; it also helps the organization to set a hierarchy of standards among its employees, to set targets for market growth, and to evaluate the performance of employees within the organization. Evaluation is not just about measuring the reactions of participants who respond positively to new skills, technology and knowledge; its aim must be to measure the change in employees' knowledge and working capability at the workplace. A number of models of training evaluation are used in the literature, but the most famous is that of Professor Donald L. Kirkpatrick (1975). This section of the study details the theoretical framework, drawing on already published materials such as books, journals and online sources. Keeping in mind the above aims and objectives, the following points are covered: (i) Training Evaluation: Definition and Conceptualisation; (ii) Training Evaluation Models and Effectiveness; (iii) Kirkpatrick Model of Training Evaluation.

2.2 TRAINING EVALUATION: DEFINITION AND CONCEPTUALISATION

Training evaluation is today a much-debated topic in the literature; Burrow and Berardinelli (2003) state that few aspects of training receive as much attention as evaluation. Organisations accept training as an important investment of time and money for the success of any organization. This logic is also accepted by Lingham et al. (2006), who believe that training helps in simplifying relations at work. Training validation is different from training evaluation: validation may be defined as a process of checking people at work for the completion of specific work, while evaluation is the much broader sense defined in 1994 in the Industrial Society Report.

Evaluation may be defined as the analysis of the outcomes of operations in an organization. Rae (1999) explains it clearly as a process of finding information regarding the effectiveness of the organization. Evaluation is further divided into two parts, macro and micro, and helps the organization in achieving its objectives and targets. Macro evaluation concerns the standards to be met within the desired time at low cost; micro evaluation is a more complex subject, concerned with bringing skill changes and improvements within the organization. The literature explains the norms on which guidelines for training are set, clarifying doubts about any training evaluation; this view is also supported by Marchington and Wilkinson (2000). The literature raises a number of questions about how to conduct the evaluation of training. Easterby-Smith and Mackness (1992) stressed several purposes of evaluation: (1) the organization must have certain obligations to perform in the course of certain outcomes and consequences; (2) the organisation must have certain standards by which the quality of work may be improved using data gathered from the evaluation; (3) the purpose of training helps participants to recognize the true value of the task they are assigned. Easterby-Smith and Mackness also stressed the purposes and importance of training cycles for different stakeholders.

The function of evaluation is to determine the change in knowledge brought about by the training provided at work (Mann and Robertson, 1996). Evaluation is not only the process of measuring reactions to training; it is the method that helps participants to give better results with new skills or knowledge, so the purpose of training evaluation is to calculate the rate of change produced by the use of knowledge and learning at the workplace. Numerous models of training evaluation are given in the literature, Abernathy (1999) states, but the most famous is that of the American Professor Donald L. Kirkpatrick (1975), initially created in 1959. This is confirmed by Nickols (2005), who said that the "current method of evaluating training is derived from the Kirkpatrick Model". It is further affirmed by Canning (1996, p. 5), who said this model of training is "rather like a reptile, it brings incremental changes in the process of training".

In addition, Bramley (1999), a noted writer on evaluation theory, explains how the Kirkpatrick model focuses on four levels of training to measure the real growth of the organization, such as the learning of principles, facts, skills, attitudes and behavior involved in the training process. Bramley states that most organizations carry out evaluation at the initial level, measuring the capability of the workforce in gaining technical skills, and argues that few organizations measure changes in the behavior of employees. After initial training, organisations find it more convenient to evaluate the performance of employees at the end of the day or activity, rather than through regular follow-up action.

Although the Kirkpatrick model first evolved in 1959, a number of practices have since been developed in the form of functional training evaluation by various academics and practitioners. Many of the training evaluation models in use since Kirkpatrick have evolved new ways of thinking, bringing variations to the original model, yet the Kirkpatrick model is regarded as the base of training evaluation in the literature. This is supported by Wang and Wang (2005, p. 22), who said: "No matter how controversial it may appear, the four-level evaluation proposed by Kirkpatrick began a chapter of measurement and evaluation in the field of human resource development." The main elements of an effective training model are highlighted by Tamkin (2005), who explains that a number of things should be taken into account when scrutinizing training evaluation models. First, the other relevant factors must be identified and properly utilized in the training process. We must then determine whether the model adds value to the production process, so that workers understand the contribution needed from various aspects of human resource practice; this also helps the organization to understand the value of its employees and to sort out differences arising in the course of work. It could also be argued that the literature must examine all the relevant information gathered to understand the organizational set-up in terms of production at lower cost, and must work to bring out hidden aspects in order to solve challenges occurring at the workplace. Management must be persuaded to invest money and staffing, with regard to the time of operation, to increase business value. If there is nobody within the organization who can conduct evaluations, it is very difficult for the organization to set targets for its operations and remain in business. Lack of a senior manager is a further drawback, especially if there is no training supervisor in the decision-making process; the absence of strategic planning and effective direction for increasing performance might hamper working at every level.

Lack of accountability is treated as a big barrier to effective evaluation. Rae (1999) emphasized that effective evaluation needs a "training quintet" to make senior management aware of the new techniques used for the progress of a company. Rae also stressed that senior management should be authorized to take an active part in improving results, so as to create a congenial atmosphere in the organization for proper functioning. If the organization does not have a good culture that encourages its employees, evaluation will struggle to uncover information about drawbacks in the system. Culture may thus act as an obstacle to increasing the performance of employees (Holton, 1996; Holton et al., 2000). The State government of Louisiana found that culture had an adverse impact on performance-based training. Reinhardt (2001) focused, in her research, on identifying barriers to measuring employee performance in learning at work, and also highlighted loopholes in the organizational set-up as an important aspect of measuring the impact on performance. The issue of culture is thus termed an imperative barrier.

2.3 TRAINING EVALUATION MODELS AND EFFECTIVENESS

Among the many recent contributions to the training literature, training effectiveness and evaluation have received significant attention (Holton, 2003; Holton and Baldwin, 2000; Kraiger, 2002). The development of updates to Kirkpatrick's four-level assessment technique (Holton, 1996; Kraiger, 2002) and of broad theoretical models of training effectiveness (Holton, 1996; Tannenbaum et al., 1993) has come forward as vital work. Although some of these evaluation methods were developed ten years ago, post-training behaviour has not been incorporated into them, and the variables believed to play a part in training effectiveness have not been updated for many years. The objective here is therefore to analyse a decade of research on training effectiveness and evaluation and to summarize the findings as an integrated model of training evaluation and effectiveness (IMTEE). Training effectiveness and training evaluation are sometimes used interchangeably, but they are two different terms, and a real illustration may help to explain the differences. In recent times, a government employment agency was instructed by a court to revamp and oversee selection assessments for about thirty jobs. The project was time-bound, and staff needed to work for several months, including many hours of overtime in a row. As a result, a year into the project many employees had left the agency and many were falling sick time after time. To stop further employee turnover, and to support staff in completing the remaining project on time, the agency initiated a training program for employees on dealing with exhaustion. All employees, including supervisors and their subordinates, attended the training. The multi-purpose training program was specially prepared and included humor, lectures, and real-time practice of many stress-reducing techniques. It also encouraged trainees to improve worker-supervisor relations even after training; at the end, supervisors were pushed to share their self-developed stress-reduction techniques and methods with their subordinates.

Measurement of learning outcomes through a practical method is called training evaluation; a theoretical method of measuring training outcomes is called training effectiveness. Training evaluation provides a micro-view of a training program, as it focuses only on learning outcomes (Torres and Preskill, 2001). Training effectiveness, by contrast, highlights the whole learning system and provides a macro-view of training outcomes. Training evaluation tries to find the benefits of a training program to individuals, in terms of learning and increased job performance; training effectiveness tries to find the benefits for the organization, by determining why people learn or do not learn.

Evaluation of a training program is the assessment of its success or failure in terms of its design and content, organizational changes, and learners' productivity. The training evaluation methods used depend on the evaluation model, and there are four models of evaluation. The first, Kirkpatrick's reactions, learning, behavior, and results typology, is the easiest and most frequently used technique for reviewing and understanding training evaluation. In this four-dimensional method, learning outcomes are measured at the time of training, meaning behavioral, attitudinal, and cognitive learning; behavioral learning measures on-the-job performance after the training. In the second model, Tannenbaum et al. (1993) expanded on Kirkpatrick's four-dimensional typology, adding post-training attitudes and dividing behavior into two training outcomes for evaluation: transfer performance and training performance. In this extended model, training reactions and post-training attitudes are not associated with any other evaluation object, but learning is associated with training performance, training performance with transfer performance, and transfer performance with training outcomes.

In the third evaluation strategy, Holton (1996) included three evaluation objects: learning, transfer, and results. Holton does not consider reactions, because reactions are not a primary effect of a training program; to a certain extent, reactions are a moderating or mediating variable between trainees' actual learning and motivation for learning. This model relates learning to transfer and transfer to results. Additionally, Holton holds a different opinion on the combination of effectiveness and evaluation: in his model, specific effectiveness variables are highlighted as important aspects to assess at the time of evaluating training outcomes. The fourth and last evaluation method was developed by Kraiger (2002). This model stresses three multidimensional areas for evaluation: changes in trainees (i.e., cognitive, behavioral, and affective), training design and system (i.e., design, validity, and delivery of training), and organizational outcomes (i.e., results, job performance, and transfer climate). Feedback from trainees is considered an assessment technique for measuring how effective the design and system of a training program were for the learner. Kraiger stated that feedback measures are not associated with changes in trainees or with organizational outcomes, but that those changes or learnings in employees are associated with organizational outcomes.
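The four evaluation models reviewed above differ mainly in which outcome dimensions they measure. Purely as a summary device, the groupings can be sketched as follows; this is this review's reading of the models, not a table taken from the cited authors:

```python
# Summary device only: the outcome dimensions measured by each of the
# four evaluation models discussed in the literature review above.
MODEL_DIMENSIONS = {
    "Kirkpatrick (1959)": ["reactions", "learning", "behavior", "results"],
    "Tannenbaum et al. (1993)": ["reactions", "learning",
                                 "training performance",
                                 "transfer performance",
                                 "post-training attitudes"],
    "Holton (1996)": ["learning", "transfer", "results"],
    "Kraiger (2002)": ["changes in trainees", "training design and system",
                       "organizational outcomes"],
}

def models_measuring(dimension):
    """Return the models that include a given outcome dimension."""
    return [m for m, dims in MODEL_DIMENSIONS.items() if dimension in dims]

print(models_measuring("learning"))
```

The comparison makes visible, for instance, that Holton drops reactions entirely, which is exactly the point of difference discussed above.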

Training effectiveness is the study of the training, organizational, and individual characteristics that affect the learning process before, during, and after training. Needs analysis for training is recognized as an important input to training effectiveness (Salas and Cannon-Bowers, 2001). Although a full explanation is beyond the scope of this study, a detailed training needs analysis considers the personal differences of trainees, the organizational objectives and culture, and the various features of the task(s). Conclusions from this analysis are used to decide both the training content and the method; training will therefore not be effective unless it fulfils the organizational, individual, and task needs identified through the needs analysis. The relationships among these also show up as changes (increases or decreases) in transfer and learning performance. Holton and Baldwin (2000) extended this model; their training effectiveness model clearly identifies specific characteristics affecting transfer and learning outcomes. These characteristics consist of motivation, ability, individual differences, prior experience with the transfer system, learner and organizational involvement (e.g., support, preparation), and training content and design.

Holton's (1996) training effectiveness model also includes particular training, organizational, and trainee characteristics as primary or secondary variables that affect the training outcomes. Holton's model proposes that all these characteristics are related to transfer and learning performance; indirect relationships also exist because of interactions between the characteristics. For example, Holton suggested that motivation interacts with organizational and training characteristics, in this way influencing the training outcomes. Although Holton has given valuable inputs for assessing training effectiveness, only a few studies (Holton, 2003; Holton, Bates, and Ruona, 2000) have measured the various outcomes recommended by the author. These authors developed a Learning Transfer System with effectiveness variables summarized in a model, and found support for the model's construction.

Tannenbaum et al. (1993) suggested four types of motivation measure in their training effectiveness model, Holton (1996) gave two, and the review of the literature revealed seven techniques for assessing motivation level, each involving a different aspect of motivation. In addition, some studies have given combined measures of motivation that include all the motivational scales. As a result, all studies of motivation were combined into one category to determine eligibility, because it was very difficult to distinguish the effects of the different scales; the simplicity of the model thus recognizes the complexity among these variables.

This study found few changes in motivation as a training outcome (Cole and Latham, 1997; Frayne and Geringer, 2000). These researchers used expectation as a measure of motivation and noted significant improvement in post-training motivation. Two motivational aspects have been placed in training effectiveness models as important effectiveness variables: motivation to transfer and motivation to learn (Baldwin and Ford, 1988; Holton, 1996; Holton and Baldwin, 2000; Tannenbaum et al., 1993). It is therefore difficult to measure how training outcomes are affected by changes in motivation. In addition, this approach cannot help training experts to study the different aspects of training content, design and organizational culture that may affect motivation to transfer or to learn. As a result, a method is required for assessing changes in motivation.

Active learning means conscious knowledge gain, which can be measured through a test of what was taught during the training process. Tannenbaum et al. (1993) explained further that conscious knowledge gain can include an increase in knowledge, a change in the composition of knowledge, or both.

The cognitive outcomes expanded by Kraiger (2002) further emphasize self-knowledge, structural knowledge, executive control, and problem solving. In observing cognitive learning alongside the other objectives of evaluation discussed earlier, an inverse relationship has been found between cognitive learning and post-training self-efficacy. Training outcome means the ability to use the skills learnt during the training, and can be measured by observing whether a trainee can perform the skills gained in the training. Kraiger (2002) gives two forms of trained-skill performance: first, the capability to imitate the structured behavior learnt in training; second, enhanced performance, with few errors, after practice. Tannenbaum et al. (1993) note that trainees may be able to perform during training but may not be able to transfer these skills to the job; in this way, performance during training may be better than on-the-job performance (e.g., Salas, Ricci, and Cannon-Bowers, 1996). In relation to the evaluation objectives mentioned earlier, training outcome is thus affected by cognitive learning and post-training behavior.

Results are the last and final dimension of training evaluation, referring to trainees' quantifiable behavioral changes (Kraiger, 2002). For example, organizational outcomes from the training's transfer performance may include enhanced safety measures, morale, efficiency, and the quality and quantity of output.

2.4 KIRKPATRICK MODEL OF TRAINING EVALUATION

In 1952, Donald Kirkpatrick (1996) conducted research to evaluate the performance of a training program. The main aim of Kirkpatrick's method was to measure participants' reactions during the execution of a program and the amount of learning that took place, in the form of changed behavior at the workplace. Kirkpatrick's measurement concept is divided into four levels. While documenting information on training in 1959, Kirkpatrick (1996) arrived at these four measurement levels of training evaluation. It is still unknown how these four steps became known as the Kirkpatrick Model, which is recognized as a most vital instrument for all organizations (Kirkpatrick, 1998). It is one of the most frequently used frameworks in technical as well as educational training. The first level of Kirkpatrick's measurement, reaction, may be defined as how the trainees respond to the method used for organizational growth. The second level, learning, is the refined tool for determining how far knowledge, attitudes, and skills have helped the trainees at work; training is also seen as a propeller boosting a congenial atmosphere and employee behavior. The third level, behavior, defines how the relationships that help in the learning process are articulated at the workplace; Kirkpatrick believes there is a big gap between technological knowledge and its implementation on the job. The fourth measurement level, results, concerns how to reduce cost and grievances so that the organization's profit can be increased. Kirkpatrick's first level is the least difficult to measure, but no study has proved that one method is suitable for all applications of knowledge evaluation.

After forty years of regular use of the classic Kirkpatrick Model, several authors have suggested that it is suitable for all types of organization. Warr, Allan and Birdi (1999) evaluated a two-day technical training course for 123 motor-vehicle technicians over a seven-month period, to test a longitudinal variant of the Kirkpatrick Model. The main aim of the study was to demonstrate how employee performance improves after training. Warr et al. (1999) found that the levels in the Kirkpatrick Model are correlated with each other. They considered six trainee features and one organizational characteristic that might predict performance and outcomes at every measurement level. The trainee features covered learning at the workstation, confidence in the work, motivation, learning strategies, and technical skill at various levels. The most important feature for evaluating performance was transfer to the workplace, in view of the strategic changes demanded by the organization on the job.

Warr et al. (1999) studied the links between the measurement levels of their modified Kirkpatrick framework in relation to behaviour and results on the job. The three levels studied were reactions, learning, and job behaviour. Trainees were given the appropriate knowledge about the work, and a questionnaire was mailed one month later to review performance on the basis of the information collected at each level. The questionnaire data were then transformed into measures for each measurement level. Reaction data were gathered after the training, to capture trainees' perceptions of the usefulness of the training and to identify problems with it. Learning was measured by all three questionnaires, since the main objective of the training was to improve and motivate employees towards the goals of the organization regarding the use of the latest technology. Because capability grows with time spent on a particular task, the researchers measured the amount of capability gained during the course of learning, comparing change in scores before and after training. Warr et al. accept that there is a correlation between the six individual trainee features and motivational factors, and that these correlations help predict change in training performance, job behaviour, and the desired measurement-level outcomes during and after training. Multiple regression was used to analyse the scores gained at the different levels during the training process.
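The pre/post change-score comparison and the correlational analysis described above can be sketched in a few lines of Python. The figures below are invented purely for illustration (they are not data from Warr et al., 1999), and the function is a plain Pearson correlation, the basic building block behind such correlation and regression analyses:

```python
from statistics import mean

# Hypothetical pre- and post-training test scores and self-rated
# motivation (1-5) for five trainees; illustrative numbers only.
pre = [52, 60, 45, 70, 58]
post = [68, 75, 50, 88, 70]
motivation = [4, 5, 2, 5, 3]

# Learning gain: the change-score comparison between before and after training.
gain = [b - a for a, b in zip(pre, post)]

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(motivation, gain)
print(gain)         # per-trainee learning gain
print(round(r, 2))  # strength of the motivation-gain link
```

In a real evaluation, each score would come from the pre- and post-training questionnaires administered at the learning level, and the single correlation would be replaced by a multiple regression across all predictor features.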

Warr et al. (1999) explained the relationships between the six individual trainee features and the organizational predictor at every evaluation level. At the first level, participants' reactions were measured after training, before they returned to the actual work, so that their capacity could be assessed. At the second level, factors such as motivation, confidence, and learning strategy worked to bring change within the learning system; changes at the learning level were strongly predicted by these factors. The research suggests a possible link between reactions and learning, which might be identified by using more differentiated reaction measures. At the third level, the training built confidence in trainees, and transfer support helped to predict job behaviour. Transfer support, measured as part of the organizational climate, is the amount of support given by trainers and colleagues to trainees for improving the quality of work in the organization. Warr et al. suggested that an analysis of pretest scores might explain reasons for the observed behaviour and help in the improvement of organizational behaviour.

Belfield, Hywell, Bullock, Eynon, and Wall (2001) focus on methods for evaluating medical educational interventions, checking their efficiency in healthcare through an adaptation of the Kirkpatrick Model with five levels: reaction, learning, behaviour, participation, and results. Although the Kirkpatrick Model has been applied for years to problems arising in technical training, it has recently been applied to non-traditional electronic learning systems. Horton (2001) published Evaluating E-Learning, in which he explains how to use the Kirkpatrick Model to evaluate e-learning. Kirkpatrick (1998) suggests that the four levels can serve most possibilities of training evaluation. In order to make full use of organizational resources, there is a need for effective training and capable manpower to execute work in accordance with the desired objectives of the organization.

Trainers must establish a disciplined pattern for all employees or trainees in order to evaluate performance against the results desired by the organization. Training evaluation is a diverse field, and the process of evaluation may be formative or summative (Eseryel, 2002), bringing change to the evaluation of the programme. Kirkpatrick's (1994) often-cited 'Four Level Model of Training Evaluation' prescribes an evaluation design based on four key levels, and advocates measuring participants' learning and the corresponding behavioural change in the organization. The model is criticized for oversimplifying the learning process and for its assumption of hierarchical relationships between the stages (Holton, 1996; Kraiger, 2002). To reduce the complexity of evaluating learners' behaviour at all four levels, this study uses a 'mid-range theory approach' to focus on only one part of Kirkpatrick's four-stage framework.

The Kirkpatrick model is the most frequently used (Kraiger, 2002) and perhaps the most influential in the field of evaluating employee performance at work (Eseryel, 2002). The model is based on a four-dimensional typology that provides a strategic framework for the effective evaluation of employee performance at work. According to the model, there are four basic levels of evaluation, i.e. reaction, learning, behaviour, and, most importantly, results, with hierarchical relationships between the levels. Thus, a positive reaction helps to increase understanding of the objectives set by the organization. The model also helps practitioners to manage performance for the timely completion of work, in harmony with the workforce, through the evaluation of training (Eseryel, 2002); however, the assumed follow-on logic between the stages has been questioned (Tannenbaum et al., 1993; Holton, 1996).

Tannenbaum et al. (1993) questioned the link between reactions and the remaining three dimensions of Kirkpatrick's model. Certainly, on this view, reactions to training and post-training attitudes are related to the other means of assessment (Alvarez et al., 2004). The notion of 'post-training' attitudes was also supported on the basis of the nature of the work and the demands of the market when evaluating performance after training. Kirkpatrick's 'behaviour' label has been further studied and specified as 'training performance' and 'transfer performance', distinguishing learning demonstrated during training from learning applied at work to raise the efficiency of employees.

Holton (1996) argued that reactions should not be considered a primary outcome of the evaluation procedure; rather, reactions are a benchmark for the suitability of a training programme. According to Holton (1996), the applicable evaluation objectives are learning, which should lead to transfer, which in turn should lead to results. The evaluation of a training programme is always associated with its effectiveness (Alvarez et al., 2004): evaluation checks whether it works, while effectiveness checks why it works (Ford, 1997). Kraiger (2002) gives three multidimensional targets for training evaluation: the training design and delivery system, changes in trainees, and organizational outcomes. Changes in trainees may be cognitive, behavioural, or emotional; organizational outcomes include the transfer culture, results, and job performance.

Kirkpatrick's model is the starting point for this study. However, because of the complexity of evaluating trainees on all four levels, from reaction to results, a mid-range approach has been used (Pinder and Moore, 1979), and the focus is on only one part of Kirkpatrick's four-layered evaluation structure: learning. The mid-range approach concentrates on one part of the structure and allows for descriptive investigation. Management training is valuable only if it brings positive change and improvement in individuals; so the result of the training that is measured is the positive change in the individuals.

A training effectiveness and evaluation model was developed by Alvarez et al. (2004) after a thorough review of the literature from 1992 to 2002. Alvarez et al. broadened Kirkpatrick's influential model of learning evaluation. Their model links training design and content, changes in trainees, and organizational outcomes. It develops Kirkpatrick's 'learning' level further, identifying it as 'changes in learners'. Following the mid-range theory approach, this study draws on the learning aspects of that model, which measures what changes in trainees as they participate in a training programme. Alvarez et al.'s (2004) model divides these changes into three parts: training performance, post-training self-efficacy, and cognitive learning. Alvarez et al. define post-training self-efficacy as a post-training attitude: an individual's belief in his or her ability to perform a particular task, and the confidence shown while performing it.

This behavioural model has been transferred to the enterprise-related literature (Krueger and Carsrud, 1993) and, at the same time, is used in the training evaluation model for enterprise training (Fayolle et al., 2006). In the behavioural model (as adapted by Fayolle et al., 2006), enterprise-related intentions are influenced by three factors: subjective norms, attitudes towards the behaviour, and perceived behavioural control. It is becoming a trend among companies to focus on entrepreneurship skills: companies today need sustainable growth in competitiveness and performance, and need to organize themselves for innovation and entrepreneurial behaviour. Such organizations are aggressive in exploring new opportunities, bring new products to market, and often force competitors to respond to their actions. A strong training evaluation is therefore required by these enterprise-oriented organizations.

2.5 SUMMARY

Training evaluation is an important part of any training programme, as it helps to assess the real outcome of the programme. Training evaluation considers the changes in trainees' on-the-job performance after the training programme. However, performance is not related solely to the training programme, and it is not the only parameter by which to judge the success or failure of training. To judge the outcome of a training programme, training effectiveness and training evaluation models can play a significant role. Training effectiveness and training evaluation are two different techniques for assessing a training programme's outcome: training evaluation is associated with the training content and training plan, while training effectiveness is associated with the whole learning system. In brief, training evaluation provides a micro-view of the training outcomes, while training effectiveness provides a macro-view. Donald Kirkpatrick suggested a model for training evaluation, presenting a four-step measurement process: reaction, learning, behaviour, and results. It is a widely used tool, especially for measuring technical training outcomes. The model has been reviewed by many researchers, some of whom found it the most suitable.

Chapter 3

M E T H O D O L O G Y

3.1 RESEARCH PHILOSOPHY

There are plentiful reasons why an understanding of philosophical issues is imperative when carrying out research. This rests on the argument that it is the nature of philosophical questions that best makes obvious the importance of acknowledging philosophy. Recognizing the philosophy of the research enables a researcher to proceed beyond a naive mode of questioning, since the questions of the research might otherwise create disorder and flux in research statements and ideas about the state of affairs, which makes the choice of philosophy of exceptional assistance (Smith, 2001). The circuitous nature of philosophical questioning is itself useful, as it commonly prompts comprehensive thinking and produces additional questions about the subject under consideration. Elucidating statements linked to personal values is also perceived as useful while planning research (Hughes, 2000).

Moreover, Easterby-Smith et al. (2003) identify three reasons why attention to philosophy is noteworthy with particular reference to research methodology. The first is that philosophy can help the researcher to refine and spell out the research methods to be used, that is, to spell out the overall research strategy. The second is that comprehension of research philosophy enables the researcher to assess diverse methodologies and methods, and to steer clear of unsuitable use and needless work by identifying the limitations of particular approaches at an early stage. The third is that philosophy may help the researcher to be creative and innovative in the choice or adaptation of methods that were previously outside his or her experience. Putting these three reasons at the centre of the research strategy, the researcher first decided on the research philosophy to be applied, so that nothing inappropriate was taken into consideration.

Easterby-Smith et al. (2003) classify research philosophy as positivism and interpretivism. Positivism is a philosophical model that limits real facts to the bounds of science, on the basis of formal logic or mathematics. It rests on the principle that there is an objective reality and that facts exist as something that can be observed and measured. A positivist model normally entails quantitative research methods for collecting and analysing data (Hughes, 2000). Interpretivism is a philosophy that supports the view that people and their institutions are essentially different from the subject matter of the natural sciences. The examination of the social world consequently necessitates a different approach, one that attempts to understand human behaviour through an empathic acknowledgement of human action. There is a view that all research is interpretive: research is directed by the researcher's set of beliefs and attitudes concerning the world and how it ought to be understood and researched. An interpretivist model normally entails qualitative research methods for collecting and analysing data (Smith, 2001). Interpretive research methods are open to criticism since they hold a different ontology, of multiple, individually constructed though socially and culturally conditioned realities. If reality is constructed, someone is actively involved in that process (Easterby-Smith et al., 2003). This is contrary to positivist approaches, in which the researcher is independent of reality; in an interpretivist philosophy, the researcher is always part of the reality he or she is attempting to understand.

So when deciding on the methodology of this research, the first step was to decide on the research philosophy, because the researcher had to choose a suitable path and approach for the research. The choice was between positivism and interpretivism, and it turned on the nature of the research problem: whether the research was scientific or social, experimental or exploratory. Certainly this research was exploratory, not experimental. Notably, the questions of the research were to examine the existing theories of the evaluation of training programmes as a whole, and to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) to the evaluation of training programmes in the Indian BPO Industry. Therefore, the philosophy of this research was decided as interpretivism, under which the facts regarding the above issues were explored and interpreted in accordance with the research objectives and research questions.

3.2 RESEARCH METHOD

Research methods are quantitative and qualitative. In quantitative research, the researcher is ideally an objective observer who neither participates in nor influences what is being researched. In qualitative research, on the other hand, it is contended that the researcher can learn the most about a state of affairs by taking part in and/or being immersed in it (Creswell, 2002). These fundamental assumptions of the two methodologies shape the kinds of data collection methods put into application (Patton, 2002). Typically, qualitative data entails words and quantitative data entails numbers, and there are researchers who feel that one is superior to, or more methodical than, the other (Punch, 2003). A further significant difference between the two is that qualitative research is inductive and quantitative research is deductive (Patton, 2002). One of the most distinguishing points is that in qualitative research a hypothesis is not required to start the research, whereas all quantitative research necessitates a hypothesis before the research can start.

Even though there are apparent distinctions between qualitative and quantitative methods, a few researchers maintain that the preference between applying qualitative or quantitative methods in fact has less to do with methodology than with positioning oneself in a specific research tradition. The difficulty of selecting a method is compounded by the reality that research is generally allied with universities and other institutions, and the results of research generally steer vital decisions regarding specific practices and policies (Patton, 2002). The preference for one method may echo the interests of those carrying out or benefiting from the research and the objectives to which the results will be put. The choice of research method may also be based on the researcher's own understanding and preference, the people being approached, the projected audience for the results, and the time, money, and other resources available (Creswell, 2002).

A few researchers suppose that qualitative and quantitative methodologies cannot be combined, since the beliefs underlying each tradition are so very different. Other researchers believe they can be applied in combination, by alternating between methods: qualitative research is fitting for certain kinds of questions in certain circumstances, and quantitative for others. Moreover, some researchers suppose that both qualitative and quantitative methods can be applied concurrently to answer a research question (Punch, 2003).

Having decided on the choice of philosophy and discussed the available research methods, the researcher was very clear about choosing the appropriate research method. Initially there was the option of choosing only the quantitative method, only the qualitative method, or both; in the end there was only one choice, and it was the qualitative method. As stated above, when research is carried out under an interpretivist philosophy, the methods of data collection and data analysis ought to be qualitative. So this research was conducted using the qualitative method. The qualitative data collection and analysis were carried out to answer the developed research questions: why is there a need for training programme evaluation; what are the various measurement methods of training programme evaluation; and how effective is the Kirkpatrick Model for the methodical evaluation of training. In the following sections, the data collection and data analysis tools are discussed in detail.

3.3 DATA COLLECTION


3.3.1 Secondary Data

Secondary data is data collected for purposes other than the research task in hand. A range of secondary data sources is available to a researcher collecting data on a particular industry, company, or subject. Secondary data is also used to gain early insight into the research problem, in a subjective approach (Robson, 2000). The two foremost advantages of collecting and using secondary data in a research project are time and cost savings; the foremost disadvantages are questionable accuracy and reliability (Sekaran, 2003).

Nevertheless, as a general rule, a methodical search of the secondary data ought to be carried out before collecting primary data. Secondary data offers a useful background and identifies the major questions and issues that need to be addressed by the primary data. Secondary data is classified by source as either internal or external: internal data is obtained within the organization where the research is being carried out, while external secondary data is obtained from outside sources (Robson, 2000).

In this research, too, the importance of collecting and using secondary data lay in preparing the groundwork for collecting and using primary data. Secondary data helped the researcher to develop the propositions and assumptions on which the primary data collection was based, approaching the HR Managers of IBM-Daksh directly. Although the researcher had the choice of collecting both internal and external secondary data, given the limited time and resources it was thought appropriate to use only external sources of secondary data, prominently books and journals relating to HRM and particularly to training.

3.3.2 Primary Data

Primary data collection relates to data collected from a primary or first-hand source, specific to the research field. By contrast, secondary data generally adapts data from other available research, in a few cases extrapolating or interpolating such data (Robson, 2000). Primary data collection is typically thought more accurate than the use of secondary data; however, this is only true if the data has been collected using a sound research design and a suitable data collection method. Although the data collection process is generally perceived as straightforward compared with the analytical phases that follow it, there are many factors that might easily be overlooked at this vital first stage (Sekaran, 2003).

There is a range of primary data sources, the most prominent being focus groups, interviews, questionnaires, and observation. The questionnaire is the cheapest, most efficient, and most frequently used primary data collection method. A questionnaire is a set of written questions relating to the problem or issues under research, for which the researcher requires answers from respondents (Sekaran, 2003). The formulation of a questionnaire plays a vital role in meeting the aims of primary data collection. Questionnaire design is a lengthy process that requires persistence and reasoned analysis; it is an influential and well-organized assessment method and must not be taken lightly. Questionnaire design should be performed in a phased approach (Robson, 2000).

Considering the usefulness and effectiveness of the questionnaire in primary data collection, primary data was collected in this research using a questionnaire. The questionnaire design went through a number of phases in order to ensure that the researcher was proceeding in the right direction. In the first phase, the objective of the research was properly defined, as was the target population from whom the researcher was going to collect primary data. In the second phase, a decision was taken regarding the type of questions to be used, i.e. closed-ended, open-ended, or a combination of both; only closed-ended questions were formulated, and ten questions were included in the questionnaire. A further important step was carrying out a pilot survey to test the questionnaire.

3.3.3 Sampling

In questionnaire-based research, the researcher has to select the target people suited to achieving the research objectives and answering the research questions. This is called sampling. Sampling techniques are classified as probability and non-probability. In probability sampling, the first step is to choose the population of interest, that is, the population the researcher seeks results about. The sample may be chosen in several stages; since the probability of selecting each sample is known, the researcher can also work out a sampling error for the results (Punch, 2003). Non-probability sampling, on the other hand, is a technique in which the samples are chosen by a process that does not offer every individual in the population an equal probability of being chosen (Punch, 2003).

The researcher had the choice of either a probability or a non-probability sampling technique for approaching the target people in order to collect primary data through the questionnaire. The distinction was very clear, as the researcher initially had no frame of the probable target people; choosing non-probability sampling was therefore the obvious choice. The researcher first approached some senior HR Managers at IBM-Daksh's corporate offices in Delhi and the NCR, and with their help the remaining respondents were selected on a group basis. Finally, 25 HR Managers of IBM-Daksh were handed the questionnaire, and after answering the questions they returned it with enthusiasm. The task was tough, but it ended happily and satisfactorily.

3.4 DATA ANALYSIS

Triangulation was found to be the most fitting data analysis method for this qualitative research. Triangulation is a method applied in qualitative research to confirm and establish the validity of the research. Triangulation can be carried out through five types of method, namely data triangulation, investigator triangulation, theory triangulation, methodological triangulation, and environmental triangulation (Marshall and Rossman, 2002). However, the mode of triangulation most commonly applied in academic research is data triangulation.

Data triangulation in this research entailed the application of diverse sources of data and information. A key strategy was to classify each cluster or type of data against the agenda the researcher was examining. A similar number of respondents from each data group was then included for answering the research questions and achieving the research objectives.

Chapter 4

FINDINGS AND ANALYSIS

4.1 INTRODUCTION

The aim of this research was firstly to examine the existing theories of the evaluation of training programmes as a whole, and secondly to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) to the evaluation of training programmes in the Indian BPO Industry. The research attempted to answer the following research questions: why is there a need for training programme evaluation; what are the various measurement methods of training programme evaluation; and how effective is the Kirkpatrick Model for the methodical evaluation of training. The data analysis, that is, the analysis of the findings, addresses these research questions.

4.2 ANALYSIS OF FINDINGS

As the research literature shows, training evaluation is today a much debated topic, as noted by Burrow and Berardinelli (2003), who observe that few aspects of training receive as much attention as evaluation. Organizations accept the value of training as an important investment of time and money for the success of any organization. This logic is also accepted by Lingham et al. (2006), who believe that training helps to simplify relations at work. Training validation is different from training evaluation: validation may be defined as a process of checking that people at work can complete specific tasks. Easterby-Smith and Mackness (1992) stressed four purposes for evaluation, including: (1) the organization must be accountable for certain outcomes and consequences; (2) the organization must have standards against which the quality of work may be improved using data gathered from the evaluation; and (3) the purpose of training helps participants to recognize the true value of the tasks they are assigned. Easterby-Smith and Mackness also stressed the purposes and importance of training cycles for different stakeholders. In the light of these propositions, this research examined the principal purposes for which training evaluation is needed at IBM-Daksh. The data collected in this context reveal that training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control (see Table and Figure 4.1). The majority of the research participants find that their firm needs training evaluation principally for the purposes of training results and implementation control.

Table 4.1: Principal purpose of training evaluation at IBM-Daksh

Variable                 No. of Respondents   Response in Percentage   Cumulative Percentage
Training results                  8                   32%                      32%
Skill development                 4                   16%                      48%
Business goals                    4                   16%                      64%
Implementation control            9                   36%                     100%

As the data shown in the table above indicate, the majority of respondents (68% of the total 25) find that their firm needs training evaluation principally for the purposes of 'training results' (32%) and 'implementation control' (36%), whereas the remaining respondents (32%) find that their firm needs training evaluation principally for the purposes of 'skill development' (16%) and 'business goals' (16%). By and large, these data indicate that training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control.
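The response and cumulative percentages reported in the tables of this chapter can be reproduced directly from the raw response counts. The following Python sketch (the variable names are illustrative and not drawn from the dissertation) shows the calculation for Table 4.1; the same procedure yields the figures in Tables 4.2 to 4.10.

```python
# Raw response counts for Table 4.1 (25 respondents in total).
counts = {
    "Training results": 8,
    "Skill development": 4,
    "Business goals": 4,
    "Implementation control": 9,
}

total = sum(counts.values())  # 25 respondents
cumulative = 0.0
for variable, n in counts.items():
    share = 100 * n / total   # response percentage for this variable
    cumulative += share       # running (cumulative) percentage
    print(f"{variable:<24} {n:>2}  {share:4.0f}%  {cumulative:4.0f}%")
```

The cumulative column simply accumulates the response percentages in table order, which is why its final entry is always 100%.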

Moreover, as per the research literature, the purpose of training evaluation is to measure the rate of change brought about by the use of knowledge and learning in the workplace. Numerous models of training evaluation appear in the literature but, as Abernathy (1999) states, the most famous is that of the American professor Donald L. Kirkpatrick (1975), initially created in 1959. This is confirmed by Nickols (2005), who notes that current methods of evaluating training are derived from the Kirkpatrick Model, and further affirmed by Canning (1996, p. 5), who likens the model to a reptile in that it brings incremental changes to the training process. Chen and Rossi (2005) explain that the evaluation literature is driven largely by practice: most training evaluation derives from the Kirkpatrick model but is now shaped by market demand, and information for evaluation at this level is usually collected through questionnaires at the end of the training programme. In the light of these propositions, this research examined whether training evaluation at IBM-Daksh should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer. The data collected in this context reveal that it certainly should be (see Table and Figure 4.2), as research participants in a greater majority either strongly agree or agree that this should be the central focus.

Table 4.2:

Variable             No. of Respondents   Response in %   Cumulative %
Strongly agree               11               44%             44%
Agree                         7               28%             72%
Disagree                      5               20%             92%
Strongly disagree             2                8%            100%

As the data shown in the table above indicate, the greater majority of respondents (72% of the total 25) either 'strongly agree' (44%) or 'agree' (28%) that training evaluation at their firm should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer, whereas the remaining respondents (28%) either 'disagree' (20%) or 'strongly disagree' (8%). By and large, these data indicate that training evaluation at IBM-Daksh should certainly be centrally focused on measuring changes in knowledge and appropriate knowledge transfer.

In addition, as per the research literature, Bramley (1999), a noted writer on evaluation theory, explains how the Kirkpatrick model focuses on four levels of training evaluation to measure the real growth of the organisation, covering the learning of principles, facts, skills, attitudes and behaviour involved in the training process. Bramley states that most organisations carry out evaluation at the initial level, measuring the capability of the workforce in gaining technical skills, and he further argues that organisations make few attempts to measure changes in employee behaviour. After initial training, organisations find it more convenient to evaluate employee performance at the end of the day or activity rather than through regular follow-up action. In the light of these propositions, this research examined through which performance criteria training evaluation at IBM-Daksh should be measured. The data collected in this context reveal that training evaluation at IBM-Daksh should be measured through the criteria of qualitative performance rather than quantitative performance (see Table and Figure 4.3), as the majority of research participants find that training evaluation at their firm should be measured through qualitative performance criteria.

Table 4.3:

Variable                   No. of Respondents   Response in %   Cumulative %
Qualitative performance            16               64%             64%
Quantitative performance            9               36%            100%

According to the data shown in the table above, the majority of respondents (64% of the total 25) find that training evaluation at their firm should be measured through the criteria of 'qualitative performance', whilst the remaining respondents (36%) find that it should be measured through the criteria of 'quantitative performance'. Overall, these data indicate that training evaluation at IBM-Daksh should be measured through the criteria of qualitative performance rather than quantitative performance.

Furthermore, from the research literature reviewed so far it could be argued that training evaluation is relatively easy to implement, produces organisational change outcomes, assists in meeting training needs, and reduces costs. Lack of accountability is treated as a major barrier to effective evaluation. Rae (1999) argued that effective evaluation needs a 'training quintet' to make senior management aware of the new techniques used for the progress of a company. Rae also stressed that senior management must be authorised to take an active part in improving results, so as to create a congenial atmosphere in the organisation for proper functioning. If the organisation does not have a culture that encourages its employees, evaluation will struggle to uncover information about drawbacks in the system; culture can thus be an obstacle to improving employee performance (Holton, 1996; Holton et al., 2000). The state government of Louisiana found that culture had an adverse impact on performance-based training. Reinhardt (2001), in her research on identifying barriers to measuring the performance of employees at work, also highlights weaknesses in the organisational set-up as an important aspect of measuring the impact on performance; this issue of culture is an imperative barrier. In the light of these propositions, this research examined what the major challenge of training evaluation is for IBM-Daksh. The data collected in this context reveal that lack of accountability is the major challenge of training evaluation for IBM-Daksh (see Table and Figure 4.4), as the majority of research participants identify 'lack of accountability' as the major challenge.

Table 4.4:

Variable                 No. of Respondents   Response in %   Cumulative %
Lack of accountability           15               60%             60%
Cultural resistance              10               40%            100%

According to the data shown in the table above, for the majority of respondents (60% of the total 25), 'lack of accountability' is the major challenge of training evaluation for their firm, whilst for the remaining respondents (40%) it is 'cultural resistance'. Overall, these data indicate that lack of accountability is the major challenge of training evaluation for IBM-Daksh.

Moreover, as per the research literature, training effectiveness and evaluation have recently received significant attention from many contributions in the training literature (Holton, 2003; Holton and Baldwin, 2000; Kraiger, 2002; Torres and Preskill, 2001). Among these, developments of Kirkpatrick's four-level assessment technique (Holton, 1996; Kraiger, 2002) and broad theoretical models of training effectiveness (Holton, 1996; Tannenbaum et al., 1993) have come forward as vital works. Although one of these evaluation methods was developed ten years ago, post-training behaviour has not been incorporated into them, and the variables believed to play a part in training effectiveness have not been updated for many years. The measurement of learning outcomes through a practical method is called training evaluation; a theoretical approach to measuring training outcomes, on the other hand, is called training effectiveness. Training evaluation provides a micro-view of a training programme, as it focuses only on learning outcomes, whereas training effectiveness highlights the whole learning system and provides a macro-view of the training outcomes. Training evaluation tries to establish the benefits of a training programme to individuals in terms of learning and increased job performance. In the light of these propositions, this research examined whether training evaluation at IBM-Daksh should be a methodological approach to measuring learning outcomes. The data collected in this context reveal that it certainly should be (see Table and Figure 4.5), as research participants in a greater majority either strongly agree or agree with this proposition.

Table 4.5:

Variable             No. of Respondents   Response in %   Cumulative %
Strongly agree               10               40%             40%
Agree                         9               36%             76%
Disagree                      6               24%            100%
Strongly disagree             0                0%            100%

As the data shown in the table above indicate, the greater majority of respondents (76% of the total 25) either 'strongly agree' (40%) or 'agree' (36%) that training evaluation at their firm should be a methodological approach to measuring learning outcomes, whereas the remaining respondents (24%) 'disagree'. By and large, these data indicate that training evaluation at IBM-Daksh should certainly be a methodological approach to measuring learning outcomes.

Above and beyond this, in accordance with the research literature, the evaluation of a training programme is the assessment of its success or failure in terms of its design and content, organisational change, and learners' productivity. The evaluation methods used depend on the evaluation model adopted, and four models of evaluation can be distinguished. The first, Kirkpatrick's reactions, learning, behaviour and results typology, is the easiest and most frequently used framework for reviewing and understanding training evaluation. In this four-dimensional method, learning outcomes (behavioural, attitudinal and cognitive) are measured at the time of training, while behavioural learning measures on-the-job performance after the training. The second model, developed by Tannenbaum et al. (1993), expands Kirkpatrick's four-dimensional typology by adding post-training attitudes and by dividing behaviour into two training outcomes for evaluation: transfer performance and training performance. In this extended model, training reactions and post-training attitudes are not associated with any other evaluation object; learning, however, is associated with training programme performance, training programme performance with transfer performance, and transfer performance with training outcomes. In the light of these propositions, this research examined which dimension of the training evaluation target area IBM-Daksh needs to focus on centrally. The data collected in this context reveal that changes in learners and organisational payoff are the dimensions of the training evaluation target area that IBM-Daksh needs to focus on centrally (see Table and Figure 4.6), as research participants in a greater majority identify these two dimensions.

Table 4.6:

Variable                      No. of Respondents   Response in %   Cumulative %
Learning content and design            5               20%             20%
Changes in learners                   11               44%             64%
Organisational payoff                  9               36%            100%

As the data shown in the table above indicate, the greater majority of respondents (80% of the total 25) find that 'changes in learners' (44%) and 'organisational payoff' (36%) are the dimensions of the training evaluation target area that their firm needs to focus on centrally, whereas the remaining respondents (20%) find that 'learning content and design' is the dimension that needs central focus. By and large, these data indicate that changes in learners and organisational payoff are the dimensions of the training evaluation target area that IBM-Daksh needs to focus on centrally.

Furthermore, as per the research literature, Holton (1996) included three evaluation objects: learning, transfer, and results. Holton does not consider reactions, because reactions are not a primary effect of a training programme; rather, they are a moderating or mediating variable between trainees' actual learning and their motivation for learning. This model relates learning to transfer and transfer to results. Additionally, Holton takes a different view of combining effectiveness and evaluation, and his model therefore highlights specific effectiveness variables as important aspects to assess when evaluating training outcomes. The fourth and final evaluation method was developed by Kraiger (2002). This model stresses three multidimensional areas for evaluation: changes in trainees (i.e., cognitive, behavioural, and affective), training design and system (i.e., design, validity, and delivery of training), and organisational outcomes (i.e., results, job performance, and transfer climate). Feedback from the trainees is considered an assessment technique for measuring how effective the design and system of a training programme were for the learner. Kraiger stated that feedback measures are not associated with changes in trainees or with organisational outcomes, but that such changes or learning in employees are associated with organisational outcomes. In the light of these propositions, this research examined how effective the Kirkpatrick Model of training evaluation is for IBM-Daksh. The data collected in this context reveal that the Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh (see Table and Figure 4.7), as research participants in a greater majority find it highly effective.

Table 4.7:

Variable               No. of Respondents   Response in %   Cumulative %
Highly effective               18               72%             72%
Reasonably effective            7               28%            100%
Ineffective                     0                0%            100%

According to the data shown in the table above, the greater majority of respondents (72% of the total 25) find that the Kirkpatrick Model of training evaluation is 'highly effective' for their firm, whilst the remaining respondents (28%) find it 'reasonably effective'. Overall, these data indicate that the Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh.

In addition, as per the research literature, the main aim of Kirkpatrick's method was to measure participants' reactions during the execution of a programme and the amount of learning that took place, in the form of changed behaviour in the workplace. Kirkpatrick's concept of measurement is divided into four levels; it is still unknown exactly how these four steps came to be known as the Kirkpatrick Model, now recognised as a vital instrument for organisations (Kirkpatrick, 1998). It is one of the most frequently used frameworks in both technical and educational training. The first level of Kirkpatrick's measurement, reaction, may be defined as how efficiently the trainee can use the methods learned for organisational growth. The second level, learning, is a refined tool for determining how far knowledge, attitudes, and skills have helped the trainees at work. The third level, behaviour, concerns how the learning is articulated in relationships and conduct in the workplace; Kirkpatrick believes there is a big gap between technical knowledge and its implementation on the job. The fourth level, results, defines how costs and grievances can be reduced so that the profit of the organisation can be increased. In the light of these propositions, this research examined which dimension of the Kirkpatrick Model of training evaluation IBM-Daksh needs to focus on most in order to get the desired results in relation to training evaluation. The data collected in this context reveal that learning and results are the dimensions that require the most focus from IBM-Daksh (see Table and Figure 4.8), as the majority of research participants identify these two dimensions.
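The four levels described above can be summarised schematically. The following Python sketch (an illustration only; the pairings of levels with instruments are the author's general description, not data from the case study) lays out each level alongside what it measures and a typical data-collection instrument.

```python
# Illustrative sketch of Kirkpatrick's four evaluation levels: each entry
# pairs a level with what it measures and a typical instrument for
# collecting the evidence.
KIRKPATRICK_LEVELS = [
    ("Reaction",  "how trainees respond to the programme",        "end-of-course questionnaire"),
    ("Learning",  "change in knowledge, skills and attitudes",    "pre- and post-training tests"),
    ("Behaviour", "transfer of learning to conduct on the job",   "workplace observation and follow-up"),
    ("Results",   "organisational payoff (costs, turnover, etc.)", "business metrics"),
]

for number, (name, measures, instrument) in enumerate(KIRKPATRICK_LEVELS, start=1):
    print(f"Level {number}: {name:<9} measures {measures} (via {instrument})")
```

The ordering matters: each level builds on the one before it, which is why the model is often described as hierarchical.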

Table 4.8:

Variable      No. of Respondents   Response in %   Cumulative %
Learning              9               36%             36%
Reactions             4               16%             52%
Behaviour             4               16%             68%
Results               8               32%            100%

According to the data shown in the table above, the majority of respondents (68% of the total 25) find that 'learning' (36%) and 'results' (32%) are the dimensions of the Kirkpatrick Model that their firm needs to focus on most in order to get the desired results from training evaluation, whilst the remaining respondents (32%) identify 'reactions' (16%) and 'behaviour' (16%). Overall, these data indicate that learning and results are the dimensions of the Kirkpatrick Model that IBM-Daksh needs to focus on most in order to get the desired results in relation to training evaluation.

Additionally, as per the research literature, Belfield, Hywell, Bullock, Eynon and Wall (2001) focus on a method for evaluating the performance of medical educational interventions, checking their efficiency in healthcare through an adaptation of the Kirkpatrick Model at five levels: reaction, learning, behaviour, participation, and results. While the Kirkpatrick Model has for years been applied to problems arising in technical training, it has recently been applied to non-traditional electronic learning systems: Horton (2001) published Evaluating E-Learning, in which he explains how to use the Kirkpatrick Model to evaluate e-learning. Kirkpatrick (1998) suggests that the four levels offer considerable scope for evaluating training; to make full use of organisational resources, effective training and a capable workforce are needed to execute work in accordance with the desired objectives of the organisation. In the light of these propositions, this research examined how effective the Kirkpatrick Model of training evaluation is in evaluating e-learning at IBM-Daksh. The data collected in this context reveal that it is certainly highly effective in evaluating e-learning at IBM-Daksh (see Table and Figure 4.9), as the majority of research participants find it highly effective in this respect.

Table 4.9:

Variable           No. of Respondents   Response in %   Cumulative %
Highly effective           16               64%             64%
Effective                   7               28%             92%
Ineffective                 2                8%            100%

According to the data shown in the table above, the majority of respondents (64% of the total 25) find that the Kirkpatrick Model of training evaluation is 'highly effective' in evaluating e-learning at their firm, whilst the remaining respondents (36%) find it either just 'effective' (28%) or 'ineffective' (8%). Overall, these data indicate that the Kirkpatrick Model of training evaluation is certainly highly effective in evaluating e-learning at IBM-Daksh.

Finally, as per the research literature, for the evaluation of training to be effective, the following considerations must be kept in mind for the achievement of organisational goals. The training process must be completed within a set time frame, with objectives tied to the post to be filled. The trainer must review the period after which participants, having completed the training, are expected to return to work; once back in the actual work field, there should be no confusion left about the execution of the work. The nature and scope of training depend on the effective return on the company's investment with regard to the satisfactory achievement of organisational goals, and the evaluation of employee performance is a regular phenomenon in almost every organisation. Chen and Rossi (2005) explain that the evaluation literature is driven largely by practice, with most training evaluation derived from the Kirkpatrick model but shaped by market demand, and with information usually collected through questionnaires at the end of the training programme. In the light of these propositions, this research examined whether the Kirkpatrick Model of training evaluation is cost effective and efficient in controlling staff turnover. The data collected in this context reveal that it definitely is cost effective and efficient in controlling staff turnover at IBM-Daksh (see Table and Figure 4.10), as the majority of research participants find this to be the case.

Table 4.10:

Variable   No. of Respondents   Response in %   Cumulative %
Yes                15               60%             60%
No                 10               40%            100%

As the data shown in the table above indicate, the majority of respondents (60% of the total 25) find that the Kirkpatrick Model of training evaluation is cost effective and efficient in controlling staff turnover at their firm, whereas the remaining respondents (40%) do not. By and large, these data indicate that the Kirkpatrick Model of training evaluation is definitely cost effective and efficient in controlling staff turnover at IBM-Daksh.

4.3 SUMMARY

Training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control, and it should certainly be centrally focused on measuring changes in knowledge and appropriate knowledge transfer. Training evaluation at IBM-Daksh should be measured through the criteria of qualitative performance rather than quantitative performance. Lack of accountability is the major challenge of training evaluation for IBM-Daksh. Training evaluation at IBM-Daksh should certainly be a methodological approach to measuring learning outcomes, and changes in learners and organisational payoff are the dimensions of the training evaluation target area that need central focus. The Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh; learning and results are the dimensions of the model that require the most focus in order to get the desired results in relation to training evaluation. The Kirkpatrick Model is also highly effective in evaluating e-learning at IBM-Daksh, and it is cost effective and efficient in controlling staff turnover.

Chapter 5

CONCLUSION

5.1 INTRODUCTION

The aim of this research was firstly to examine the existing theories of evaluation of training programmes on the whole, and secondly to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) for evaluating training programmes in the Indian BPO industry. The research has achieved the following research objectives: to assess the need for training programme evaluation; to identify and evaluate various measurement methods of training programme evaluation; and to assess the effectiveness of the Kirkpatrick Model for the methodical evaluation of training.

5.2 SUMMARY OF FINDINGS

Firstly, this research examined for what principal purpose training evaluation is needed at IBM-Daksh. The data collected in this context reveal that training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control, as the majority of research participants find that their firm needs training evaluation for these purposes. Further, this research examined whether training evaluation at IBM-Daksh should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer. The data collected in this context reveal that it certainly should be, as research participants in a greater majority either strongly agree or agree with this proposition.

Moreover, this research examined through which performance criteria training evaluation at IBM-Daksh should be measured. The data collected in this context reveal that training evaluation at IBM-Daksh should be measured through the criteria of qualitative performance rather than quantitative performance, as the majority of research participants find that training evaluation at their firm should be measured through qualitative performance criteria. Further, this research examined what the major challenge of training evaluation is for IBM-Daksh. The data collected in this context reveal that lack of accountability is the major challenge, as identified by the majority of research participants.

Furthermore, this research examined whether training evaluation at IBM-Daksh should be a methodological approach to measuring learning outcomes. The data collected in this context reveal that it certainly should be, as research participants in a greater majority either strongly agree or agree with this proposition. Further, this research examined which dimension of the training evaluation target area IBM-Daksh needs to focus on centrally. The data collected in this context reveal that changes in learners and organisational payoff are the dimensions that need central focus, as identified by research participants in a greater majority.

Besides this, the research examined how effective the Kirkpatrick Model of training evaluation is for IBM-Daksh. The data collected in this context reveal that it is highly effective, as found by research participants in a greater majority. Further, this research examined which dimension of the Kirkpatrick Model IBM-Daksh needs to focus on most in order to get the desired results in relation to training evaluation. The data collected in this context reveal that learning and results are the dimensions that require the most focus, as identified by the majority of research participants.

Finally, this research examined how effective the Kirkpatrick Model of training evaluation is in evaluating e-learning at IBM-Daksh. The data collected in this context reveal that it is certainly highly effective in this respect, as found by the majority of research participants. Further, this research examined whether the Kirkpatrick Model of training evaluation is cost effective and efficient in controlling staff turnover. The data collected in this context reveal that it definitely is so at IBM-Daksh, as found by the majority of research participants.

5.3 MANAGERIAL IMPLICATIONS

Learning and results are the dimensions of the Kirkpatrick Model of training evaluation that IBM-Daksh needs to focus on most in order to get the desired results in relation to training evaluation. Kirkpatrick's model is based on a four-dimensional typology that provides a strategic framework for effectively evaluating employees' performance at work. As per his model, there are four basic levels of evaluation (reaction, learning, behaviour and, most importantly, results), with hierarchical coordination between the levels; a positive reaction, for instance, helps to increase understanding of the objectives set by the organisation (Kraiger, 2002).

The Kirkpatrick Model of training evaluation is highly effective in evaluating e-learning at IBM-Daksh. Belfield, Hywell, Bullock, Eynon and Wall (2001) focus on evaluating the performance of medical educational interventions in healthcare through an adaptation of the Kirkpatrick Model at five levels (reaction, learning, behaviour, participation, and results), and while the model has for years been applied to problems arising in technical training, it has recently been applied to non-traditional electronic learning systems. The Kirkpatrick Model of training evaluation is also cost effective and efficient in controlling staff turnover at IBM-Daksh. Kirkpatrick acknowledged a difference between knowing principles and techniques and using them at work; the fourth level, results, captures the outcomes expected of most training evaluation programmes, such as reduced costs, turnover, absenteeism and complaints, and improved profits, morale, quality and quantity (Chen and Rossi, 2005).

BIBLIOGRAPHY

Abernathy, D. (1999), “Thinking outside the evaluation box”, Training
and Development, 53(2), 18-24.
Alvarez, K., Salas, E., and Garofano, C. (2004), ‘An integrated model
of training evaluation and effectiveness’, Human Resource
Development Review, Vol 3, No 4, pp 385–416.
Anita, P. B., Becky, F. A. and Mavin, M. (2006), Supervisor–Team
Training: Issues in Evaluation, University of Berkeley, USA.
Baldwin, T. T., & Ford, J. K. (1988), “Transfer of training: A review
and directions for future research”, Personnel Psychology, 41,
63-105.
Belfield, C., Hywell, T., Bullock, A., Eynon, R., & Wall, D. (2001),
“Measuring effectiveness for best evidence medical education:
A discussion”, Medical Teacher, 23(2), 164-170.
Bramley, P. (1999), Evaluating Training, IPD House, London.
Canning, R. (1996), Journal of European Industrial Training, 20, 3-
10.

Chen, H.-T. and Rossi, P. H. (2005), Using Theory to Improve Program
and Policy Evaluations, Greenwood Press, New York.

Cole, N. D., & Latham, G. P. (1997), “Effects of training in
procedural justice on perceptions of disciplinary fairness by
unionized employees and disciplinary subject matter experts”,
Journal of Applied Psychology, 82, 699-705.
Creswell, J. W. (2002), Research Design: Qualitative, Quantitative,
and Mixed Methods Approaches. Thousand Oaks, CA: Sage
Publications
Easterby-Smith, M. et al. (2003), Management Research: An
Introduction, Sage, London.

Easterby-Smith, M. and Mackness, J. (1992), Personnel
Management, 42-45.
Eseryel, D. (2002), “Approaches to evaluation of training: theory and
practice”, Educational Technology and Society, Vol 5, No 2, pp
93–98.
Fayolle, A., Gailly, B., and Lassas-Clerc, N. (2006), “Assessing the
impact of entrepreneurship education programmes: a new
methodology”, Journal of European Industrial Training, Vol 30,
No 9, pp 701–720.
Ford, J.K. (1997), “Advances in training research and practice: an
historical perspective”, in Ford, J.K., Kozlowski, S., Kraiger, K.,
Salas, E., and Teachout, M., eds, Improving Training
Effectiveness in Work Organizations, Lawrence Erlbaum
Associates, Mahwah, NJ.
Frayne, C. A., & Geringer, J. M. (2000), “Self-management training
for improving job performance: A field experiment involving
salespeople”, Journal of Applied Psychology, 85, 361-372.
Holton, E. F., III, & Baldwin, T. T. (2000), Making transfer happen:
An action perspective on learning transfer systems. In E. F.
Holton, S. S. Naquin, & T. T. Baldwin (Eds.), Managing and
changing learning transfer systems: Advances in developing
human resources #8 (pp. 1-6). San Francisco, CA: Berrett-
Koehler.
Holton, E. F., III, Bates, R. A., & Ruona, W. E. A. (2000),
“Development of a generalized learning transfer system
inventory”, Human Resource Development Quarterly, 11, 333-
360.
Holton, E. F., III. (1996), “The flawed four-level evaluation model”,
Human Resource Development Quarterly, 7, 5-21.
Holton, E. F., III. (2003), What’s really wrong: Diagnosis for learning
transfer system change. In E. Salas et al. (Eds.), Improving
learning transfer in organizations (pp. 59-79). San Francisco,
CA: Jossey-Bass.

Hughes, J. (2000), The Philosophy of Social Research, Longman, Essex.
Kirkpatrick, D. (1996), “Great ideas revisited”, Training and
Development, 50, 1, pp.54-60.
Kirkpatrick, D. L. (1976), “Evaluation of training”. In R. L. Craig (Ed.),
Training and Development Handbook (2nd ed.). New York:
McGraw-Hill.
Kraiger, K. (2002). Decision-based evaluation. In K. Kraiger (Ed.),
Creating, implementing, and managing effective training and
development (pp. 331-375). San Francisco, CA: Jossey-Bass.
Krueger, N., and Carsrud, A. (1993), “Entrepreneurial intentions:
applying the theory of planned behavior”, Entrepreneurship
and Regional Development, Vol 18, No 1, pp 5-21.
Lynton, R. and Pareek, U. (2000), Training for Organizational
Transformation – For Policy Makers and Change Managers,
Sage Publications, London.
Marchington, M. and Wilkinson, A. (2000), Core Personnel and
Development, CIPD, London.
Marshall, C., and Rossman, G.B. (2002), Designing Qualitative
Research (3rd ed.), Thousand Oaks, CA: Sage
Nickols, F. W. (2005), Advances in Developing Human Resources, 7,
121-134.
Patton, M. Q. (2002), Qualitative evaluation and research methods
(3rd ed.). Sage Publications, Inc., Thousand Oaks, CA

Phillips, J. J. (2003), Handbook of Training Evaluation and
Measurement Methods, Elsevier Publishers, UK.
Pinder, C.C., and Moore, L.F. (1979), “The resurrection of taxonomy
to aid the development of middle range theories of
organizational behavior”, Administrative Science Quarterly,
Vol 24, pp 99–118.
Priest, S. (2001), “A program evaluation primer”, Journal of
Experiential Education, 24, 1, pp.34-40.

Punch, K. F. (2003), Survey Research: The Basics, Sage Publications,
London.

Rae, L. (1999), Using Evaluation in Training and Development, Kogan
Page Ltd, London.
Reinhardt, R. (2001), in Morey, D., Maybury, M. and Thuraisingham,
B. (Eds), Knowledge Management: Classic and Contemporary
Works, MIT Press, pp. 187-222.
Robson, C. (2000), Real World Research, Blackwell Publishers,
Oxford.

Salas, E., & Cannon-Bowers, J. A. (2001), “The science of training: A
decade of progress”, Annual Review of Psychology, 52, 471-499.
Sekaran, U. (2003), Research Methods for Business: A Skill Building
Approach, John Wiley & Sons, USA.

Smith, M. J. (2001), Social Science in Question, Sage, London.

Stoel, D. (2004), “The evaluation heavyweight match”, Training &
Development, 58, pp. 46–48.
Stufflebeam, D. L. (2001), Evaluation models, Jossey-Bass
Publishers, San Francisco.
Tamkin, P. (2005), Institute for Employment Studies, London, pp. 1-
80.
Tannenbaum, S. I., Cannon-Bowers, J. A., Salas, E., & Mathieu, J. E.
(1993), Factors that influence training effectiveness: A
conceptual model and longitudinal analysis (Technical Report
No. 93-011), Naval Training Systems Center, Orlando, FL.

Torres, R. T., & Preskill, H. (2001), “Evaluation and organizational
learning: Past, present, and future”, American Journal of
Evaluation, 22, 387-395.
Torres, R.T., Preskill H, and Piontek, M. (1996). Evaluation strategies
for communicating and reporting: Enhancing learning in
organizations, Sage Publications, Thousand Oaks, CA.
Wang, G. G. and Wang, J. (2005), Advances in Developing Human
Resources, 7, 22-36.
Warr, P., Allan, C., & Birdi, K. (1999), “Predicting three levels of
training outcome”, Journal of Occupational and Organizational
Psychology, 72(3), 351-375.

Appendix
Questionnaire

1. For what principal purpose is training evaluation needed in your
firm?

Training results / Skill development / Business goals /
Implementation control

2. To what extent do you agree that training evaluation in your firm
should be centrally focused on measuring changes in knowledge
and appropriate knowledge transfer?

Strongly agree / Agree / Disagree / Strongly disagree

3. Through which performance criteria should your firm’s training
evaluation be measured?

Qualitative performance / Quantitative performance

4. What is the major challenge of training evaluation for your firm?

Lack of accountability / Cultural resistance

5. To what extent do you agree that your firm’s training evaluation
should follow a methodological approach to measuring learning
outcomes?

Strongly agree / Agree / Disagree / Strongly disagree

6. Which dimension of the training evaluation target area needs to
be centrally focused on by your firm?

Learning content and design / Changes in learners /
Organisational payoff

7. How effective is the Kirkpatrick Model of training evaluation for
your firm?

Highly effective / Reasonably effective / Ineffective

8. Which dimension of Kirkpatrick Model training evaluation requires
the most focus from your firm in order to get the desired
results in relation to training evaluation?

Learning / Reactions / Behaviour / Results

9. How effective is Kirkpatrick model training evaluation in
evaluating the e-learning of your firm?

Highly effective / Effective / Ineffective

10. Do you find that Kirkpatrick model training evaluation is
cost-effective and efficient in controlling staff turnover?

Yes / No

