
By the end of this chapter you should be able to:

explain the importance and contribution of evaluation to strategic human resource management;
identify the range of different purposes an evaluation can serve;
assess the barriers to evaluation and their causes;
identify the various stakeholders in any evaluation and their need both to contribute and to receive feedback;
assess the choices to be made in respect of the evaluation process and make suitably informed decisions;
outline a range of strategies and data collection techniques, involving both primary and secondary data, which may be used to evaluate strategic human resource management;
identify the complexity of issues associated with feeding back the findings of evaluations.

Introduction
Within all of our lives we are constantly evaluating as part of our normal daily
activities. As this book was being researched, the value or worth of many academics' work
in the field of strategic human resource management (SHRM) was assessed. As this book is
being read, its contents will be evaluated: the style in which it is written, the design and
pedagogic features, and the ideas and issues that are raised. The relevance of the contents
to the reader's context will probably also be evaluated, along with her or his ability to
understand the material and the suitability of the place and time in which it is being read.
Such personal or informal evaluations of situations and experiences affect one's perception
of the world and underpin the choices that are made.
Given that this process is so central to what people do, it is perhaps surprising that
evaluation, as a planned and formal activity, appears to be so problematic in an
organisational context. Effective evaluation and the promotion of HR strategies require the
systematic collection of data that are subsequently analysed, the findings being presented in
a meaningful and useful form. These data may have been collected through monitoring what
is happening within an organisation over time, perhaps as part of a balanced scorecard
approach to performance management or an HR information system, or specifically to
evaluate a particular strategic HR intervention. However, without the ensuing evaluation,
the effectiveness of one or a series of strategic HR interventions may be unclear. The
knowledge and understanding gained through this process of research, the authors would
argue, enables organisations to have a clearer understanding of the impact of different
strategies and, of equal importance, to adjust their HR interventions to help promote these
strategies. Yet despite this, it is widely recognised that the implementation of HR strategies
has rarely included a planned evaluation (Doyle et al., 2000) and, on those occasions when it
does, the findings are rarely utilised at all, let alone strategically.
Toracco (1997) argues that this lack of evaluation is due to difficulties associated with
the long timeframes required for strategic change in organisations. In particular, it is often
difficult to be certain of the precise impact of specific interventions. These observations are
supported by others (for example Randell, 1994; Skinner, 2004) who emphasise the
difficulty of designing evaluation studies and obtaining data of sufficient quality to
disentangle the effect of a specific HR intervention from other stimuli. Even in the context
of training and development, where there is a great deal of literature written about the need
to evaluate and a widespread acceptance among practitioners of its importance, the reality is
that evaluation rarely progresses beyond the end-of-course 'happy sheet', asking participants
about the operational practicalities of the training experience. Despite such problems, the
need for evaluation is emphasised by models of strategy formulation and development.
These usually incorporate an information gathering and analysis stage, which emphasises
the importance of knowing and understanding an organisation's current situation as part of
developing strategy. For example, as was seen in Chapter 1, top-down development of
strategy involves a number of steps, including analysing an organisation's environment and
its internal resources, within which it is recognised that this information is likely to be
incomplete and therefore imperfect. Similarly, the processual approach to strategy
development involves reflection, evaluation and understanding.
Such research is likely to include data on the external environment as well as internal
objectives, organisational (including HR) capabilities and the need to communicate within
the organisation (Prahalad and Hamel, 1990).
Typical scientific approaches to research often seek to minimise the amount of
involvement between those collecting and analysing the data (the researchers) and those
from whom data are collected, on the grounds of maintaining objectivity (Robson, 2002).
Within such a scientific evaluation, evaluators are seen as separate from, rather than
working alongside, an organisation. Findings are disseminated only to the sponsor of the
evaluation rather than all those affected. This conflicts with much that we discuss in this
book in terms of the importance of employee involvement in developing strategy ownership
and understanding (Chapter 12). Such conflict is, perhaps, not too surprising if the purpose
attributed to the evaluation is to understand and explain. We would argue that evaluation can
also provide useful insights about HR issues associated with specific strategies. However, in
evaluating and promoting HR strategies, we believe it is often necessary for the person
undertaking the evaluation to be within or to become part of the organisation. Analysis of
data collected should not take place in a vacuum and judgements need to be made within the
context of the organisation.
In this chapter we therefore argue that the evaluation of HR strategies needs to
involve those affected within the organisation as fully as possible. This is not to say that
evaluation can only be undertaken by people within the organisation. Rather it implies that
where people external to the organisation are used, their role should be to help those within
to perceive, understand and act to improve the situation; an approach akin to Schein's
(1999) process consultation. As part of this it is recognised that, depending upon the
purpose of the evaluation, one or a number of research strategies might be more appropriate.
Evaluation may take place over a range of time horizons. These can range from one-off case
studies, perhaps answering the question 'Where are we now?', through cross-sectional
studies which benchmark HR practices, to longitudinal evaluations perhaps using a series of
employee attitude surveys. Similarly, we recognise that to address particular strategic
objectives some data collection techniques are likely to collect more appropriate data than
others. For example, a questionnaire survey of employees is less likely to discover their
in-depth feelings about a recent downsizing than face-to-face interviews in which an
interviewer takes time to gain the employees' confidence.
This chapter begins by considering the nature of evaluation in terms of what it is, its
benefits and why it may not be undertaken (Figure 4.1). The choices that have to be made
about the process once a decision has been made to evaluate are then explored.
Commencing with a discussion of purpose and context, typical and action research
approaches are considered and different evaluation strategies and data gathering techniques
outlined. The implications of different tools and techniques, and issues relating to the
feeding back of findings, are then discussed. However, we would stress that while this
chapter provides an overview of a range of evaluation strategies and techniques, there is still
a need to read far more widely about these prior to undertaking an evaluation yourself. This
is because the space available is insufficient to enable us to explain the strategies and
techniques in sufficient detail.

The nature of evaluation

If one were to ask a group of people to define evaluation, it would probably be found
that the term meant different things to individuals depending on their background and
experience. As the thinking and practice relating to evaluation have evolved over the last 40
years, a variety of strategies and data collection techniques has developed and there has been
a range of definitions to match. Patton (1997: 192), for example, identifies 57 types of
evaluation as illustrative of the many types available! Key issues in differentiating between
evaluation types include:

when they occur;
their purpose;
what is being evaluated;
the evaluation strategy adopted;
who undertakes the evaluation.
These issues are used in Table 4.1 to differentiate between some examples of the
evaluation types that Patton identifies.
This process of planned or formal evaluation, as distinct from the personal, informal
evaluation such as 'gut feeling', has variously been defined as: an activity for providing
information for decision-making (Alkin, 1969); an activity focused on assessing the
achievement of objectives (Eisner, 1979; Guba and Lincoln, 1981); or an activity focused on
assessing actual effects and outcomes regardless of stated goals (Scriven, 1972).
Although different definitions take slightly different views, they all emphasise the
need for evaluation to be conducted as a planned and purposeful activity rather than as an
afterthought (In Practice 4.1). As such, evaluation is concerned with finding things out in a
systematic way. The term 'systematic' emphasises that evaluation will be based on logical
relationships and not just the beliefs or hunches often associated with informal evaluations.
The term 'finding out' highlights the importance of data as the basis upon which decisions
are made, as opposed to a reliance upon assumptions and guesswork. Patton (2002) highlights
the contribution of evaluation to improvement, describing evaluation as any effort to increase
human effectiveness through systematic data-based inquiry. Drawing these ideas
together, Russ-Eft and Preskill (2001) argue that evaluation should be a systematic process for
enhancing knowledge and decision-making which involves the collection of data.

The need to evaluate


Evaluation and SHRM

Inevitably, given the turbulent business environment of the late twentieth and early
twenty-first centuries, organisations have to contend with a world in which the only constant
is change (Carnall, 1995). In this context, an important aspect of SHRM is to ensure that the
organisation is able to respond in a timely and positive manner to its internal and external
environments. As early as 1987, Guest identified a key role for HR managers within the
global scenario of organisations faced with increasing external and internal pressures, when
he asserted that the capacity to implement strategic plans was an important feature of
successful SHRM. Since then, the literature has frequently argued for the strategic role of the
HR function and authors such as Purcell (1999), Tyson (1999) and Ulrich (1998) have
emphasised the need to manage human resources strategically in order that an organisation's
capacity for change can be improved. Tyson (1999) maintains that HR managers are major
players in the creation of organisational capability. He also suggests that evaluation is an area
where the HR function can make a significant contribution: to increase understanding of the
appropriateness of interventions, in effect to learn from experience, and to change strategy as
a consequence.

The benefits of evaluation

As discussed in Chapter 10, an integral part of effective learning is the reflection on
experience: the evaluation of process and outcomes that enables informed progression to the
next stage. Writing on organisational change, Doyle et al. (2000) ask: if this is not
monitored, how can the experience contribute to organisational learning? Pedler et al. (1991)
include the conscious structuring of evaluation as a characteristic of learning organisations
and the role of evaluation in successful change initiatives is widely acknowledged within the
change management literature. Similarly, Patrickson et al. (1995: 6) argue that evaluation is a
necessary precursor to more change in a cycle of continuous improvement, a pivotal point
that provides an opportunity for analysis and reflection before making adjustments to the
course of change. Nelson (2003) also asserts that the management of any change should
incorporate the regular review of progress and that strategy should change in response to
feedback.

Other authors specifically identify important contributions that the inclusion of a
planned process of evaluation can make to successful SHRM. Love (1991), for example,
outlines the role of effective evaluation in improving management decision-making through
the provision of information and the development of shared understanding. Kirkpatrick
(1985) argues the importance of feedback in gaining acceptance and commitment to
organisational initiatives, while Carnall (1995) suggests that people need information to
understand new systems and their place in them. Preskill and Torres (1999) argue that
evaluative inquiry helps organisation members reduce uncertainty, clarify direction, build
community, and ensure that learning is part of everyone's job. The sharing of information,
they argue, is essential if new insights and mutual understanding are to be created.

Many of these benefits relate to the common themes of information gathering and
developing shared understanding. At the start of this chapter we highlighted that every
individual evaluates on a personal basis and that the same is true in relation to our individual
organisational experiences. Reichers et al. (1997) argue that people need to
understand not only the reasons for change but also its ongoing progress and its results.
Individuals at all levels will make their own assessments, constructing their own reality
relating to the necessity for, and the effectiveness of, new initiatives and strategies, often even
when these do not affect them directly. In some cases, these views are shared and tested with
colleagues but, despite this, much may remain tacit rather than explicit. Yet individuals'
future actions will almost certainly be influenced by these assessments, even when they are based
solely on personal perceptions and the subjective evaluations that result from relatively
narrow perspectives.

The extent of individual understanding is inevitably determined by the information
that is available, whether through formal or informal channels. The conclusions that are
reached will be affected by the quality of that information; in particular its relevance,
accuracy, comprehensiveness and up-to-dateness (Calder, 1994). Patton (1997) maintains that
an evaluation process is, in itself, a benefit, due to the learning that occurs among those
involved in it. This, he argues, is because evaluation both depends on, and facilitates, clear
communication. Every strategic HR intervention is unique and can only be understood from
the experience of the participants, but this needs to happen within a more general analytical
framework. A planned process of evaluation can provide a mechanism for capturing the
individual learning which has occurred and for the sharing of this learning across the
organisation. This helps ensure that valuable knowledge will not be lost (Anderson and
Boocock, 2002) and that there will be a sense of closure to the experiential learning cycle
(Hendry, 1996). Without an evaluation process to capture and share the learning, the evidence
is that valuable knowledge will escape, and it is highly likely that both individuals and
organisations will repeat the, often unsuccessful, past (Garvin, 1993), increasing the
likelihood of repeated mistakes (Gustafson et al., 2003; Key Concepts 4.1).

Reasons not to evaluate

From reading and thinking about the arguments put forward in the previous subsection and
answering Self-Check Question 4.2, you will have identified a range of reasons to justify the
evaluation of SHRM. If asked, most managers would probably come up with a similar list. Yet,
despite the benefits that evaluation can provide, the reality for many organisations is that
evaluation simply does not happen. Reasons given for this can be grouped into three
overlapping categories:

the difficulties of undertaking evaluation;
the perceived lack of a need to evaluate;
the difficulties associated with dealing with negative outcomes.

Difficulties of undertaking evaluation

Difficulties of undertaking evaluation in relation to HRM strategies have long been
considered greater than for other business functions. Unlike the finance or production aspects
of an organisation, the contribution of HR has been widely considered to be virtually
unmeasurable because it deals with the 'soft', people side of the business. In the past it was
not considered possible for HR to be fully accountable in the same terms as other functional
areas, as its performance could not easily be measured or quantified in financial terms or
business metrics (Key Concept 4.2). This belief has not served the cause of the HR function
well, making it difficult for HR managers to demonstrate the value of the function's
contribution to the business and to compete for resources. It is, however, a position that is
changing, as there has been increasing recognition of the need to assess the contribution of
changing,as there has been increasing recognition of the need to assess the contribution of
strategies to manage human resources (often referred to as human capital management) to the
bottom line. This highlights that, while there is no consensus on a set of universally relevant
indicators, there is growing agreement that the performance of human resources is linked to
practice in areas such as recruitment, training and development, remuneration and job design.
These, it is argued, need to be measured and reported, combining hard (quantifiable) data with
narrative (Kingsmill, 2003). In the UK this has been reinforced by the move to introduce
regulations requiring human capital management reporting to become a statutory requirement
for listed companies as part of their operating and financial review (Department for Trade and
Industry, 2004; Key Concept 4.2).

Linked to the perception that some things are impossible to measure is the, often
unfounded, assumption that an organisation's members do not possess the necessary skills to
produce a credible or competent evaluation. This may lead to a belief that to acquire the
necessary skills is likely to be both time consuming and costly or that an evaluation must
involve the use of expensive external consultants.

The perceived lack of a need to evaluate

The perceived lack of a need to evaluate is often characterised by the phrase: 'we
know the impact will be positive'. This is especially the case for interventions that are
fashionable. Not surprisingly, writers such as Asch and Salaman (2002) caution that
fashionable ideas may not always be good. An intervention which fails to identify, or takes a
simplistic view of, the origins and nature of HR difficulties, may simply replace one set of
problems with another. Despite this there is often little or no evidence of a detailed or
considered assessment of either the organisational need or the appropriateness of particular
HR strategies before they are introduced. Given that the academic and practitioner literature
promote the beneficial effects of HR initiatives and the positive experiences of others, it is not
surprising that senior managers responsible for initiating the process often begin from the
premise that there is an inherent value in the initiative in question (Brunsson and Olsen, 1998)
and that benefits will therefore inevitably result from their implementation of a new or
revised strategy.

In situations where managers are under increasing pressure, quick fixes may be
attractive (Swanson, 1997) and it may appear easier to imitate rather than innovate (Brunsson
and Olsen, 1998). This unquestioned belief also serves to reduce the perceived need to
evaluate formally, as those responsible already 'know' the effect will be positive (Skinner,
2004). The power and impact of such assessments is recognised widely in the literature.
Writing about the measurement of business excellence, Kanji (2002) observes that gut
feelings rather than fact and measurement are the basis of too many management decisions.
Such informal evaluations are made on the basis of unverified information, experience,
instinct or the opinion of the most influential people, rather than on information extracted
correctly from reliable data (Conti, 1997). Easterby-Smith (1994) also notes a preference
among managers, particularly at senior levels, for information received via their own informal
information channels and observes that this information tends to be far more influential than
that produced via more formal channels. Clark and Salaman's (1998) 'ideology of
management' reinforces the belief of managers in the value of their own judgements, making
them unlikely to question their own interpretations or to acknowledge the limits of their own
understanding. These factors serve not only to undermine a perceived need for a planned
evaluation but may also mean that the objectives and expected outcomes of SHRM initiatives
are unclear and unarticulated.

Compounding this lack of a perceived need to evaluate formally among senior
management is the focus of senior management on the initiation rather than the implementation
stages (Skinner and Mabey, 1997) of strategic human resource initiatives. This may also result
in a failure to define success criteria or to assign responsibility for monitoring progress.
Russ-Eft and Preskill (2001) note that, in their experience, the number one reason people give for
not undertaking an evaluation is that no-one requires it. Managers at all levels of
organisations struggle constantly with the pressure to succeed and the pressure of time
(Swanson, 1997). Consequently, time for evaluation may appear an indulgence. Managers
have limited resources available and, in the interests of their own security, satisfaction and
longer-term goals, are likely to use their resources in pursuit of outcomes and activities which
they perceive to be valued by those in a position to reward success. They are therefore
unlikely to undertake activities that they consider are not seen as priorities in the minds of
their superiors and for which they have not been given specific responsibility or resources.

Allied to the discussion of the lack of a perceived need to evaluate is the implicit
recognition that, for many, evaluation is an afterthought. You will probably have noticed that,
in many of your HRM textbooks, discussion of evaluation occurs rarely and, if it does, tends
to appear towards the end. The implication is that evaluation of HR strategies and associated
interventions is something that is only thought about after the event has happened and is not
central to the implementation process. However, the end of the implementation is the point at
which those who have been involved are likely to be looking towards the next project and
evaluation of what is perceived as a past event is not high on their personal agenda. In
addition, if an evaluation process is not included in implementation plans from the beginning,
it is unlikely that thought will have been given to the systems, processes and resources that
will be needed to collect information that will enable the evaluation to take place.

The difficulties associated with dealing with negative outcomes

The difficulties associated with dealing with negative outcomes provide the final
category of reasons why evaluation of SHRM initiatives may not happen. For many HR
managers, their previous experience of evaluation has been negative and divisive rather than
a positive process of improvement and shared learning. This is largely due to the blame
culture which exists in a wide range of companies (In Practice 4.3). Bloomfield (2003)
characterises such cultures as ones in which it is sensible to keep your head down, cover your
back, do your best to hide mistakes, or, at the very least, ensure there is always someone else to
share the blame. Not surprisingly, the expectation is that any planned, explicit evaluation will
inevitably focus upon accountability and this will inexorably lead to criticism and the
apportioning of responsibility for failure. As Tyson (1999) notes, managers are more than
passive bystanders when it comes to the importation of new ideas, often selecting,
reinterpreting and giving relative emphasis to ideas according to their own agendas. For these
reasons, they become the obvious targets in a blame culture. Inevitably, on this basis, defensive
reasoning and routines at both an individual and an organisational level (Argyris, 1994) are
unlikely to encourage the pursuit of planned, explicit evaluation of strategic human resource
management. HR managers are likely to be aware of the risks involved in terms both of
personal criticism and of the activity being unpopular with peers and others who may also
feel vulnerable.

Aspects of SHRM that need to be evaluated

The question that most probably arises at this stage is: 'Which aspects of SHRM
should be evaluated?' It would, perhaps, appear unconvincing to answer 'all of them'.
However, if an HR intervention is of strategic importance to an organisation, then the
organisation needs to evaluate its impact. For some aspects of SHRM, such as recruitment
and selection (Chapter 8) and human resource development (Chapter 10), data may already
be collected and held on the organisation's human resource information system (In Practice
4.4). In such instances, the key issues will be ensuring that:

the data held are both up to date and relevant;
routine monitoring using these data is undertaken;
the findings from the routine monitoring are acted upon.
For other aspects, such as the introduction of a new performance management system
(Chapter 9) or the impact of an organisational downsizing on those who remain employed
(Chapter 14), data are likely to need to be collected specifically to evaluate parts of that
strategy. Similarly, for specific training interventions (Chapter 10), it is likely to be necessary
to design evaluation strategies that measure the impact of the intervention on employees'
performance, rather than whether or not the participants felt the trainer helped them to learn.

Undertaking the evaluation

The purpose of evaluation

Business research textbooks, for example Saunders et al. (2007), often place research
projects on a continuum according to their purpose and context (Figure 4.2). At one end of
this continuum are evaluations undertaken to advance knowledge and theoretical
understanding of processes and outcomes, including SHRM. This basic
research is therefore of a fundamental rather than applied nature, the questions being set and
solved by academics with very little, if any, focus on the use of research findings by HR
managers. At the other end are evaluations that are of direct and immediate relevance to
organisations, address issues which they consider to be important and present findings in
ways which can be understood easily and acted upon. This is usually termed applied research
and is governed by the world of practice. The evaluation of SHRM is, not surprisingly, placed
towards the applied end of this continuum. Such research is oriented clearly towards
examining practical problems associated with strategic HR interventions or making strategic
decisions about particular courses of action for managing human resources within
organisations. It therefore includes monitoring of operational aspects of HRM such as
absence (Chapter 9), turnover and recruitment (Chapter 8), thereby helping organisations to
establish what is happening and assess the effectiveness of particular HR interventions.

Evaluation should take place as an integrated part of the ongoing monitoring of
existing HR policies or procedures, as well as during the introduction and implementation of
new HR policies or procedures. Evaluation where the focus is on improving and fine-tuning is
often referred to as formative evaluation (Table 4.1). In such cases data collection is often
undertaken as part of regular ongoing measurement of an organisation's performance. For
example, where a balanced scorecard approach (Kaplan and Norton, 1992; Huselid et
al., 2001) is adopted, data are collected on those measures that are considered to be most
critical to that organisation's vision for success (Key Concepts 4.3). This is likely to include
HR strategies such as learning and development and its ability to enhance the organisation's
future leadership capabilities, or the extent to which the management of employee
performance enables revenue growth. Evaluation that occurs towards the latter stages, perhaps
to assess impact, determine the extent to which goals have been met or establish whether to
continue with the intervention, is termed summative evaluation. For such evaluations, data
collection is likely to be less frequent or regular.

Within any evaluation, it is important to establish the purpose of both the evaluation
and the strategic HR intervention or process that is being evaluated. Easterby-Smith (1994)
defines four possible purposes of an evaluation but cautions that it is unrealistic to expect any
evaluation to serve more than one. These are:

improving, where the need is to identify what should happen next;
controlling, which is about monitoring for quality and efficiency;
proving, which involves measuring and making judgements about worth and impact;
learning, in which the process of evaluation itself has a positive impact on the learning experience.

It would, however, be dangerous to assume that the identification of purpose will be
straightforward. Many strategic HR initiatives lack either clear objectives or stated success
criteria. This may make it difficult to identify what should be evaluated and how. Often
evaluations need to include a variety of stakeholders, not all of whom may wish to pursue the
same purpose. In addition, individuals and groups may not be able or willing to share their true
intentions or expectations with each other or with those who are undertaking the evaluation.

The context of evaluation

Not surprisingly, given the discussions in the previous paragraph, it is also important
to identify who the stakeholders are in relation to both the HR initiative being evaluated and
the evaluation itself. Any changes resulting from the evaluation are likely to affect a range of
individuals and groups, both internal and external to the organisation.
These stakeholders are likely to differ in the extent of their interest, their priorities and their
level of influence for, as Easterby-Smith (1994) observes, any evaluation process is a
complex one which cannot be divorced from issues of power, politics and value
judgements. Organisational politics can affect any stage of an evaluation (Russ-Eft and
Preskill, 2001), from influencing the decision to evaluate through to the way in which the
findings are used. In any evaluation there is likely to be a commissioning or dominant
stakeholder or stakeholder group (often referred to as principal client or sponsor) and in an
organisational setting this is often a management group. However, the nature of HRM
strategies and initiatives is such that in each case there will be others who have both a
legitimate interest and a stake in the findings of any evaluation process: not least, those
managers and employees whose participation in the evaluation is necessary and who, through
sharing their views and experience, might feel that they are owed access to the findings they
helped create, as well as those who will use the evaluation findings.

Identifying the intended audience for the findings is important in terms of
understanding expectations, defining purpose, involving those who need to participate and
providing feedback in an appropriate way. Although not a barrier to evaluation in itself, the
way in which the findings are used inevitably determines the effect, if any, that an evaluation
has. Patton (1997) goes so far as to suggest that evaluations ought to be judged on their actual
use, for as we will discuss in more detail in 'Analysing and feeding back', later in this chapter,
non-utilisation of findings is a commonly identified and widely bemoaned problem. In this
context, it is only realistic to recognise that it is the values of the commissioner and intended
users, those who have the responsibility to apply evaluation findings and implement
recommendations, that need to frame the evaluation. This need not, however, preclude
responding to the needs of other stakeholders. Evaluation can serve an important function in
facilitating bottom-up feedback, ensuring that the experience of those on the receiving end of
strategies is captured and shared outside their immediate peer group and used to refine the
strategy.

Evaluation can take place at a number of different levels based upon its focus.
Although the labels used differ, evaluation models in effect distinguish between levels that
range from an operational level measuring reactions through to a more strategic focus on
organisational performance. Developed in 1959 in relation to training
evaluation, Kirkpatrick's model is still the most widely recognised (Phillips, 1991; Russ-Eft
and Preskill, 2001). This highlights that evaluation can be undertaken to assess operational
interventions (level 1 and occasionally 2), to support medium-term or tactical interventions
(level 3 and occasionally level 2) or it can focus on more strategic interventions (level 4). As
can be seen from Table 4.2, Kirkpatrick's first two levels focus on the effectiveness of the
intervention as judged by the recipients. They relate primarily to the operational level and,
while they may affect the design of the HR intervention, such as a training event, evaluation at
these two levels is unlikely to have any strategic impact. In contrast, evaluation at levels 3
and 4 has an increasingly strategic impact. These levels consider the effect of the process or policy
being evaluated on the wider organisation, the achievement of its goals and ultimately the
implications for organisational performance. Subsequent researchers, for example Hamblin
(1974), have added a further strategic level of evaluation (5) which focuses on the wider
contribution the organisation is now able to make. However, despite these observations, it is
worth noting that Kirkpatrick did not originally describe his model as hierarchical or suggest
that one level would impact upon the next.
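
To make the distinction between levels easier to work with, the following minimal Python sketch arranges them as a simple data structure. The level names follow the discussion above and Table 4.2; the data structure and function are our own hypothetical illustration, not part of Kirkpatrick's model:

```python
# Hypothetical sketch: the evaluation levels discussed above, arranged
# from operational (1) to strategic (4), plus the additional strategic
# level (5) attributed to Hamblin (1974).
EVALUATION_LEVELS = {
    1: ("Reaction", "operational: recipients' views of the intervention"),
    2: ("Learning", "operational/tactical: what participants learned"),
    3: ("Behaviour", "tactical: changed behaviour in the work environment"),
    4: ("Results", "strategic: impact on organisational performance"),
    5: ("Wider contribution", "strategic: the organisation's wider contribution"),
}

def describe_level(level: int) -> str:
    """Return a one-line description of an evaluation level."""
    name, focus = EVALUATION_LEVELS[level]
    return f"Level {level} ({name}): {focus}"

print(describe_level(4))
```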

Clarity of purpose and evaluation strategies

In terms of approach, as with any evaluation, decisions have to be made regarding the
techniques to be used to collect data. These should reflect the choices that have been made in
relation to the topics covered in the preceding section. In terms of evaluation, the use of
sound methods and reliable data is critical (Stern, 2004). The decision about purpose will
also determine whether the evaluation needs to be formative or summative in nature. A
formative evaluation takes place during the implementation with the intention of feeding back
into the process and improving both the process and the outcomes of the initiative where
appropriate. A summative evaluation usually occurs at the end of the implementation and is
about determining the worth or value of what has been done: whether success criteria were
met and whether the results justified the cost. It would therefore be possible to undertake both
a formative and a summative evaluation of the same initiative.

The importance of a clear purpose

Probably the most difficult aspect of any evaluation is coming to a clear
understanding of what is being evaluated and why; in other words, the precise purpose and
objectives (Saunders et al., 2007). However, this issue is often bypassed within the evaluation
process. For example, typical corporate measures of the success of a redundancy programme
are often related to profit, production levels, return on investment and perhaps customer
satisfaction (Chapter 14). A numerical rise in such measures may be interpreted as the
programme being successful and, perhaps, having a positive impact on employee
commitment. However, these numbers do not actually measure employees' commitment to
the organisation or any link between redundancies and commitment. Similarly, training
courses are often evaluated in terms of the trainees' enjoyment and thoughts on the perceived
usefulness of the intervention rather than the impact upon their observed behaviour in the
work environment (Chapter 10). Simply enjoying a training intervention does not prove that
it is effective, unless producing enjoyment is one of the aims (Rushmer, 1997).

One way of helping ensure clarity of purpose and objectives is to spend time
establishing and agreeing these with the sponsor of the evaluation. This is unlikely to be as
easy as it might seem and will be time-consuming. As part of the process, it can be argued
that it is essential to ensure that both the person undertaking the evaluation and the sponsor
have the same understanding. Another, and equally important, aspect of ensuring clarity of
purpose relates to the understanding and insight the person undertaking the evaluation brings.
While her or his previous experience is likely to be important, this understanding is also
likely to be drawn from reading about others' experiences in similar situations; a process
more often referred to as reviewing the literature. Indeed, your reading of this book is based
upon the assumption that you will be able to apply some of the theories, conceptual
frameworks and ideas written about in this book to your strategic management of HR.

Evaluation strategies

Once a clear purpose for evaluation has been established, a variety of evaluation
strategies may be adopted. Typically, evaluations are concerned with finding out the extent to
which the objectives of any given action, activity or process, such as the introduction of a
new training intervention, have been achieved. In other words, evaluation is concerned with
testing the value or impact of the action, activity or process, usually with a view to making
some form of recommendation for change (Clarke, 1999). As part of the evaluation, it is
necessary to gather data about what is happening or has happened and analyse them. This can
be undertaken either as a snapshot or longitudinally, using a variety of data collection
techniques such as interrogating existing HR databases, interviews, questionnaires and
observation. Findings based upon the analysis of these data are subsequently disseminated
back to the sponsor, whose responsibility it is to take action (In Practice 4.5). Consequently,
there is no specific requirement upon those involved in the research to take action (Figure
4.3). This is in contrast to action research, which will be discussed later.

Saunders et al. (2007) emphasise that, when making choices, what matters is not the
label attached to a particular strategy, but whether the strategy is appropriate to the purpose
and objectives. In particular, the use of a sound evaluation strategy and the collection of
reliable data are critical (Stern, 2004). Four evaluation strategies tend to be used in the
evaluation of SHRM: survey; case study; experiment; existing (secondary) data.

These strategies should not be thought of as mutually exclusive: for example, a case
study in an organisation may well involve using a survey to collect data and combine these
data with existing (secondary) data from the organisation's HR information system. Similarly,
an experimental design, such as testing the relative impact of a number of different HR
interventions, will often be undertaken using a number of different case studies. In addition,
these strategies can be applied either longitudinally or cross-sectionally. The main strength of
a longitudinal perspective is the ability it offers to evaluate the impact of SHRM interventions
over time. By gathering data over time, some indication of the impact of interventions upon
those variables that are likely to affect the change can be obtained (Adams and Schvaneveldt,
1991). In contrast, a cross-sectional perspective seeks to describe the incidence of a particular
phenomenon or particular phenomena, such as the information technology skills possessed by
managers and their attitude to training, at one particular time.

Using surveys to evaluate SHRM

Surveys are perhaps the most popular strategy for obtaining data to evaluate SHRM
interventions. Using this strategy, a large amount of data can be collected from a sizeable
population in an economical way (Saunders et al., 2007). This strategy is often based around a
questionnaire. Questionnaires enable standardised data to be collected, thereby allowing easy
comparison. They are also relatively easily understood and perceived as authoritative by most
employees. However, the questionnaire is not the only data collection technique that can be
used within a survey strategy. Structured observations, such as those frequently associated
with organisation and methods (O&M) evaluations, and structured interviews involving
standardised questions, can also be used.

Survey questions can be put to both individual employees and groups of employees.
Where groups are interviewed, their selection will need to be thought about carefully. We
would advocate taking a horizontal slice through the organisation to select each group. By
doing this, each member of an interview group is likely to have similar status. In contrast,
using a vertical slice would introduce perceptions about status differences within each group
(Saunders et al., 2007).

Using case studies to evaluate SHRM

Robson (2002: 178) defines case study as 'a strategy for doing research which
involves an empirical investigation of a particular contemporary phenomenon within its real
life context using multiple sources of evidence'. This strategy is widely used when
researching the impact of HR interventions within an organisation or part of an organisation,
such as a division. The data collection techniques used can be various, including interviews,
observation, analysis of existing documents and, like the survey strategy, questionnaires.
However, this is not to negate the importance of comparative work, benchmarking or setting
a case study in a wider organisational, industrial or national context. This might be achieved
by combining a case study strategy with the analysis of existing (secondary) data that have
already been collected for some other purpose.

Using experiment to evaluate SHRM

An experimental strategy owes much to research in the natural sciences, although it
also features strongly in the social sciences, in particular psychology (Saunders et al., 2007).
Typically, it will involve the introduction of a planned HR intervention, such as a new form
of bonus, to one or more groups, during which as many as possible of the other factors likely
to influence the groups are controlled. Comparison is then made between these groups and a
control group where the HR intervention has not been introduced. However, although the
origins of evaluation practice and theory lie in an experimental strategy, in many
organisations experiments such as that outlined may be impracticable.

Using existing (secondary) data to evaluate SHRM

The increasing use of existing data as a strategy to evaluate SHRM has been
facilitated by the rapid growth in computerised personnel information systems over the past
decade. These relational databases store HR data in a series of tables. Each table can be
thought of as a drawer in a conventional filing cabinet containing the electronic equivalent of
filing cards. For example, one table (drawer) may contain data about individual employees.
Another table may contain data about jobs within the organisation, another about grades and
associated salaries, while another may contain data on responses to recruitment
advertisements. These tables are linked together electronically within the database by
common pieces of data, such as an employee's name or a job title.
Although these data have been collected for a specific purpose, they can also be used
for other purposes. Data collected as part of performance and development appraisals might
be combined with data on competences and recruitment and selection to support the
development of a talent management strategy. As part of this, reports would be produced
which match current employees' profiles with future requirements due to likely retirements
within the organisation, thereby highlighting specific training needs.
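
To make the filing-cabinet analogy concrete, here is a minimal sketch of how two such linked tables might be created and interrogated, using Python's built-in sqlite3 module. All table names, column names and data are hypothetical, invented purely for illustration:

```python
import sqlite3

# Hypothetical HR information system: two tables (drawers) linked by
# a common piece of data, here an employee identifier.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (employee_id INTEGER PRIMARY KEY,
                            name TEXT, job_title TEXT);
    CREATE TABLE appraisals (employee_id INTEGER, year INTEGER,
                             training_need TEXT);
    INSERT INTO employees VALUES (1, 'A. Example', 'Team Leader');
    INSERT INTO appraisals VALUES (1, 2024, 'leadership development');
""")

# Joining the tables on the common field reuses data collected for one
# purpose (appraisal) to answer another question: who has which
# training needs?
for name, job_title, need in conn.execute("""
    SELECT e.name, e.job_title, a.training_need
    FROM employees e JOIN appraisals a ON e.employee_id = a.employee_id
"""):
    print(name, job_title, need)
```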

External sources of secondary data tend to provide summaries rather than raw data.
Sources include quality daily newspapers, government departments' surveys and published
official statistics covering social, demographic and economic topics. Publications from
research and trade organisations, such as the Institute for Employment Studies at Sussex
University and Incomes Data Services Ltd, cover a wide range of human resource topics, such
as performance-related pay and relocation packages.

For certain SHRM evaluations, possible improvements will be sought by comparing
data collected about particular HR processes in an organisation with data already collected
from other organisations, using one or more of the numerous evaluation models available,
such as the European Foundation for Quality Management's (EFQM) European Excellence
Model (Chartered Management Institute, 2004; Key Concepts 4.4). Such process
benchmarking is concerned not only with the measure of performance, but also with the
exploration of why there are differences, how comparable the data are between different
contexts and how potential improvements may be transferred (Bendell et al., 1993). However,
where limited appropriate secondary data are available within the organisation, primary data
will also need to be collected specifically for the purpose (Key Concepts 4.3).

Action research: an alternative to the typical evaluation

An alternative way to approach evaluation is that of action research, within which
there is an inherent need for those involved to take some form of action. Action research
makes use of the same set of data collection techniques as other evaluation
approaches. Although it has been interpreted by management researchers in a variety of ways,
there are four common themes within the literature. The first focuses upon and emphasises
the purpose: research in action rather than research about action (Coghlan and Brannick,
2005), so that, for example, the evaluation is concerned with the resolution of organisational
issues, such as downsizing, by working with those who experience the issues directly.

The second emphasises the iterative nature of the process of diagnosing, planning,
taking action and evaluating (Figure 4.4). The action research spiral commences within a
specific context and with a clear purpose. This is likely to be expressed as an objective
(Robson, 2002). Diagnosis, sometimes referred to as fact finding and analysis, is undertaken
to enable action planning and a decision about the actions to be taken. These actions are then
taken and evaluated (cycle 1). Subsequent cycles involve further diagnosis, taking into
account previous evaluations, planning further actions, taking these actions and evaluating.
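
The iterative character of the spiral can be expressed as a simple loop. The following is a minimal sketch only: the diagnose, plan, act and evaluate steps are hypothetical placeholder functions of our own, not anything prescribed by the action research literature:

```python
# Hypothetical sketch of the action research spiral (Figure 4.4): each
# cycle feeds its evaluation into the diagnosis of the next cycle.

def diagnose(context, previous_evaluation):
    # Fact finding and analysis, informed by any earlier cycle.
    return {"issue": context, "informed_by": previous_evaluation}

def plan_actions(diagnosis, purpose):
    return [f"action on {diagnosis['issue']} towards {purpose}"]

def take_actions(plan):
    return {"actions_taken": plan}

def evaluate(outcomes, purpose):
    # A real evaluation would judge outcomes against the purpose.
    return {"outcomes": outcomes, "purpose_met": False}

def action_research(context, purpose, max_cycles=3):
    evaluation = None
    for cycle in range(1, max_cycles + 1):
        diagnosis = diagnose(context, evaluation)  # diagnose
        plan = plan_actions(diagnosis, purpose)    # plan
        outcomes = take_actions(plan)              # take action
        evaluation = evaluate(outcomes, purpose)   # evaluate
        if evaluation["purpose_met"]:
            break
    return evaluation

action_research("downsizing survivors' morale", "improved engagement")
```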

The third theme relates to the involvement of practitioners in the evaluation, and in
particular a collaborative, democratic partnership between practitioners and evaluators, be
they academics, other practitioners or internal or external consultants. Eden and Huxham
(1996: 75) argue that the findings of action research result from 'involvement with members
of an organization over a matter which is of genuine concern to them'. Therefore, the
evaluator is part of the organisation within which the evaluation and change process are
taking place (Coghlan and Brannick, 2005), rather than, as in more typical evaluation, for
example, survivors of downsizing being subjects or objects of study. Finally, action research
should have implications beyond the immediate project; in other words, it must be clear that
the results could inform other contexts. For academics undertaking action research, Eden
and Huxham (1996) link this to an explicit concern for the development of theory. However,
they emphasise that for both internal and external consultants this is more likely to focus
upon the subsequent transfer of knowledge gained from one specific context to another. Such
use of knowledge to inform other contexts, the authors believe, also applies to others
undertaking action research, such as practitioners.

Thus action research differs from typical evaluation due to: a focus upon change; the
recognition that it is an iterative process involving diagnosis, planning, action and evaluation;
and the involvement of employees (practitioners) throughout the process.

Schein (1999) emphasises the importance of employee involvement throughout
evaluation processes, as employees are more likely to implement change they have helped to
create. Once employees have identified a need for changes in HR policies and procedures,
and this need is widely shared, it becomes difficult to ignore and the pressure for change
comes from within the organisation. Action research therefore combines both information
gathering and facilitation of change. The diagnosis stage is often seen by employees as
recognition of the need to do something (Cunningham, 1995). Planning encourages people to
meet and discuss the most appropriate action steps and can help encourage group ownership
by involving people. These plans are then implemented. Once monitoring and evaluation
have taken place, there is a responsibility to use the findings to revise the intervention as
necessary.

Action research can have two distinct foci (Schein, 1999). The first of these, while
involving those being evaluated in the process, aims to fulfil the agenda of those undertaking
the evaluation rather than that of the sponsor. This does not, however, preclude the
sponsor from also benefiting from any changes brought about by the evaluation. The second
starts with the needs of the sponsor and involves those undertaking the evaluation in the
sponsor's issues, rather than the sponsor in their issues. Such consultant activities are termed
process consultation by Schein (1999). The consultant, he argues, assists the client to
perceive, understand and act upon the process events that occur within their environment in
order to improve the situation as the client sees it. Within this definition, the term client
refers to the person or persons, often senior managers, who sponsor the evaluation. Using
Schein's analogy of a clinician and clinical enquiry, the consultant (evaluator) is involved by
the sponsor in the diagnosis (action research), which is driven by the sponsor's needs. It
therefore follows that subsequent interventions are jointly owned by the consultant and the
sponsor, who is involved at all stages. The process consultant therefore helps the sponsor to
gain the skills of diagnosing and fixing organisational problems so that he or she can continue
to improve the organisation on his or her own.
Schein (1999) argues that the process consultation approach to action research is more
appropriate because it better fits the realities of organisational life and is more likely to reveal
important organisational dynamics. However, it is still dependent upon obtaining data which
meet the purpose of the change evaluation and are of sufficient quality to allow causal
conclusions to be drawn. Many authors (for example Randell, 1994) have emphasised the
difficulties associated with obtaining such data. For example, the data collected may well be
unreliable, such as where there are low response rates, or, in the case of HR information
systems, incomplete or out of date. We now turn to the need to identify appropriate
techniques for obtaining credible data, whichever type of evaluation is being undertaken.

Gathering primary data for analysis

The data gathering techniques used need to be related closely to the purpose of the
evaluation. For all change evaluations, it is important that the data appear credible and
actually represent the situation. This issue is summarised by Raimond (1993: 55) as the 'How
do I know?' test and can be addressed by paying careful attention to data gathering
techniques. This is especially important for longitudinal evaluation, as the techniques used
and the questions asked at the start of a change process may be inappropriate if
transformational change has occurred (Golembiewski et al., 1976). For cross-sectional
evaluations or evaluations over shorter time periods, the likelihood of techniques and
questions no longer being appropriate is far lower. For example, the impact of a management
development programme might be evaluated using data gathered by one or a number of
different techniques, dependent upon the focus of the analysis. These might be
cross-sectional, such as in-course and post-course questionnaires and observations by trainers
and others, and/or longitudinal attitude surveys and psychometric tests before and after the
event. Secondary data regarding employee performance, already gathered by existing
appraisal systems, might also be used.

However, before considering primary data gathering techniques, it is important to note
that, in practice, evaluators' choices are likely to be influenced by pragmatic considerations.
HR managers are often working within a very quantitatively oriented environment, where
numbers convey accuracy and a sense of precision. In addition, they may have little
experience or training in qualitative methods (Skinner et al., 2000). Consequently, the use of
qualitative data can be problematic, as managers may not be comfortable with judgements
based on such evidence.

Using observation

Where evaluation is concerned with what people do, such as how they respond after a
particular training intervention, an obvious way to collect the data is to observe them. This is
essentially what observation involves: the systematic observation, recording, description,
analysis and interpretation of people's behaviour (Saunders et al., 2007). There are two main
types of observation: participant and structured. Participant observation is qualitative and
derives from the work of social anthropologists in the twentieth century. It has been used
widely to study changing social phenomena. As part of this, the evaluator attempts to become
fully involved in the lives and activities of those being evaluated and shares their experiences
not only by observing but also by feeling those experiences (Gill and Johnson, 2002).
By contrast, structured observation is quantitative and, as its name suggests, has a high
level of predetermined structure. It is concerned with the frequency of actions, such as in time
and motion studies, and tends to be concerned with fact finding. This may seem a long way
from the discussion of evaluating SHRM with which this chapter began. However,
re-examining the typical evaluation and action research processes (Figures 4.3 and 4.4)
emphasises that this is not the case. Both these processes require facts (data to be collected or
diagnosis to take place) before evaluation can occur and, in the case of action research,
action to be taken.

We would discourage you from thinking of one observational technique, or indeed
any single technique, as your sole method of collecting data to evaluate SHRM. In many
instances, the decision to undertake formal evaluation is based, at least partially, on informal
evaluation drawing upon observation in which the role of complete participant has been
adopted.

However, on its own, participant observation is unlikely to provide sufficient
evidence for an organisation's senior management team. Consequently, it is often necessary to
supplement observation with other methods of data collection, such as interviews and
questionnaires, to triangulate (check) the findings. If findings based on data from different
sources all suggest the same conclusion, then you can be more certain that the data have
captured the reality of the situation rather than your findings being spurious.

Using interviews

Interviews are often described as purposeful conversations, the purpose being to
gather valid and reliable data. They may be unstructured and informal conversations or they
may be highly structured, using standard questions for each respondent. In between these
extremes are intermediate positions, often referred to as semi-structured interviews.
Unstructured and semi-structured interviews are non-standardised. Unstructured interviews
are normally used for exploratory or in-depth evaluations and are, not surprisingly, also
referred to as 'in-depth' interviews. There is no predetermined list of questions, although the
person undertaking the interview needs to have a clear idea of those aspects of the change she
or he wishes to explore. These are often noted down prior to the interview as a checklist. The
interviewee is encouraged to talk freely about events, behaviours and beliefs in relation to the
changes, and it is their perceptions which guide the interview (Easterby-Smith et al., 2002).

In semi-structured interviews, the interviewer will have a list of themes and questions
to be covered. In these interviews, questions may vary from interview to interview to reflect
those areas most appropriate to respondents' knowledge and understanding. This means that
some questions may be inappropriate to particular interviewees. Additional questions may
also be required in some semi-structured interviews to enable the objectives of the evaluation
to be explored more fully. The nature of the questions and the ensuing discussion means that
data from semi-structured interviews are usually recorded by note taking. However, as with
in-depth interviews, audio recording may be used, provided this does not have a negative
effect on interaction within the interview (Easterby-Smith et al., 2002).

Using questionnaires

Saunders et al. (2007) argue that structured interviews are in reality a form of
questionnaire. This is because, like other ways of administering questionnaires, the
respondent is asked to respond to the same set of questions in a predetermined order (de Vaus,
2001). Because each respondent is asked to respond to the same set of questions, a
questionnaire provides an efficient method of gathering data from a large sample prior to
analysis.

Responses to questionnaires are easier to record as they are based on a predetermined
and standardised set of questions. In structured interviews, there is face-to-face contact as the
interviewer reads out each question from an interview schedule and records the response,
usually on the same schedule. Answers are often pre-coded in the same way as those for
questionnaires. There is limited social interaction between the interviewer and the respondent,
such as when explanations are provided, and the questions need to be read out in the same
tone of voice so as not to indicate any bias.

Questionnaire data may also be collected over the telephone, by post, by delivering
and collecting the questionnaire personally or, as is increasingly the case in organisations'
annual staff attitude surveys, by an online questionnaire. Some organisations use
questionnaires developed by external organisations to evaluate SHRM interventions and
benchmark themselves against other organisations, for example Cooper et al.'s (1994)
Occupational Stress Indicator. Others either use consultants to develop a bespoke
questionnaire or develop their own in-house. However, before deciding to use a
questionnaire we would like to include a note of caution. Many authors (for example
Oppenheim, 2000; Bell, 2005) argue that it is far harder to produce a questionnaire that
collects the data you need than you might think. Each question will need to be clearly worded
and, for closed questions (Key Concepts 4.5), possible responses identified. Like other data
collection techniques, the questionnaire will need to be pilot tested and amended as
necessary. The piloting process is of paramount importance as there is unlikely to be a second
chance to collect data. Even if finance for another questionnaire were available, people are
unlikely to be willing to provide further responses.
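As a purely illustrative sketch of what pre-coding a closed question involves, the fragment below converts labelled responses to a single invented five-point attitude item into numeric codes ready for analysis. The question wording, scale labels and responses are our assumptions, not part of any published instrument, and in practice this step is usually handled within survey or statistics software.

```python
# Hypothetical pre-coded five-point scale for a single closed question,
# e.g. "The intervention has improved how my team works".
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Invented responses as they might arrive from an online questionnaire.
responses = ["agree", "strongly agree", "neither agree nor disagree",
             "agree", "disagree"]

# Convert each labelled answer to its numeric code before analysis.
coded = [LIKERT_CODES[answer] for answer in responses]
print(f"n = {len(coded)}, mean score = {sum(coded) / len(coded):.2f}")
```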

The choice of evaluator

Another important aspect is identifying who should undertake the evaluation. A choice
must be made between an evaluator who is an employee of the organisation or an external
consultant and, within that, whether the individual needs to be experienced and/or trained as
an evaluator, termed in Table 4.3 'professional'. Table 4.3 highlights some of the factors that
may be associated with each choice.

Analysing and feeding back

Analysis and feedback are important in both typical evaluations and action research.
A typical evaluation usually ends with a report and a presentation of the findings from the
analysis to the sponsor. In contrast, an evaluator involved in an action research project,
perhaps as a process consultant, is also likely to be involved in developing actions and
revising the intervention as necessary.

Analysis

Analysis of data is obviously a precursor to feedback. While a full discussion of the
techniques available is outside the scope of this chapter, some key observations can be made.
The most important of these is to ensure that the evaluation is undertaken against the agreed
objectives (see 'Clarity of purpose and evaluation strategies' earlier in this chapter).

Analysis of large amounts of data, whether quantitative or qualitative, will inevitably
involve the use of a personal computer. It almost goes without saying that those undertaking
this analysis should be familiar with the software used to analyse quantitative data (e.g. SPSS)
or qualitative data (e.g. NVivo). However, we believe it is important that those who are
going to analyse the data are also involved in the design stages of the evaluation. People who
are inexperienced often believe it is a simple linear process in which they first collect the data
and then a person familiar with the computer software shows them the analysis to carry
out. This is not the case, and it is extremely easy to end up with data that can only be analysed
partially. If objectives, strategy, data collection techniques and analysis are integrated from
the outset, the data can be analysed far more easily. Another pitfall is that the ready
availability of software for data analysis means it is much easier to generate what Robson
(2002: 393) describes concisely as 'elegantly presented rubbish'.
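To illustrate the kind of analysis that such integrated design makes possible, here is a minimal sketch of a before-and-after comparison of attitude scores for the same employees. It assumes, for illustration only, that the quantitative analysis is done in Python with the scipy library rather than SPSS, and the scores themselves are invented; the point is simply that pairing each employee's two scores is only possible because the evaluation was designed from the outset to track individuals across both surveys.

```python
from scipy import stats  # assumes the scipy library is available

# Invented attitude scores (1-5 scale) for the same ten employees,
# measured before and after a hypothetical HR intervention.
before = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]
after = [4, 3, 4, 4, 3, 3, 5, 4, 3, 4]

# A paired t-test asks whether the mean change for the same individuals
# differs reliably from zero; the pairing depends on being able to match
# each employee's responses across the two waves of the survey.
result = stats.ttest_rel(after, before)
mean_change = sum(a - b for a, b in zip(after, before)) / len(before)
print(f"mean change = {mean_change:.2f}, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```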

Feeding back

Findings based upon data analysis are fed back to the sponsor, usually in the form of a
report and, perhaps, a presentation. Where the report contains findings that may be
considered critical of the organisation in some way, this may create problems. However, we
would argue that to maximise benefit it is important that these findings are fed back rather
than being filtered so as not to offend.Typically,especially where large groups are involved, a
summary of the feedback is cascaded from the top down the organisation. This may make use
of existing communication structures such as newsletters, notice boards and team briefings.
Alternatively, if rapid feedback is required then additional newsletters and team briefings
might be used. The evaluation sponsor sees the full report of the findings first. Subsequently,
a summary may be provided for circulation to all employees or posted on the organisations
intranet. As part of the team briefing process, each managerial level within the hierarchy is
likely to see its own data and is obliged to feed the findings down to its own subordinates.
Managers at each level are expected subsequently to report about what they are doing about
any problems identified,in other words the actions they intend to take.

Schein (1999) argues that such a top-down approach may be problematic, as it
reinforces dependency on the organisation's hierarchy to address the issues identified. If some
issues raised by the evaluation are ignored, then employee morale may suffer. It also
places managers in a difficult position: they are, in effect, telling their subordinates about
issues that the subordinates themselves thought were important, and then telling them what
they (the managers) are going to do about those issues.
Instead, Schein advocates an alternative of bottom-up feedback that he argues also
helps to promote change from within. In bottom-up feedback, data are shared initially with
each workgroup that generated them. This process concentrates upon understanding the data
and clarifying any concerns. Consequently, the focus is on the evaluation rather than the
whole organisation. Issues arising from the data are divided into those that can be dealt with
by the group and those that need to be fed back to the organisation. The workgroup is
therefore empowered by more senior managers to deal with problems, rather than being
dependent upon the organisation's hierarchy. Feedback continues with each group in an
upward cascading process. Each organisational level therefore only receives data that
pertain to its own and higher levels. Each level must think about the issues and take
responsibility for what it will work on and what it will feed back up the line. Schein
(1999) argues that this helps build ownership, involvement and commitment, and signals
management's wish to address the issues. In addition, it emphasises that higher levels of the
organisation only need to know about those things that are uniquely theirs to deal with. While
it may take longer to get data to the top level, Schein believes that this approach is quicker for
implementing actions based upon the evaluation.

Thus, a top-down approach to disseminating evaluation findings can enable relatively
rapid communication. It also allows management to maintain control of the process and
decide the nature of the message, who receives it and any actions that will be taken. By
contrast, a bottom-up approach involves employees thinking about issues, deciding and
taking responsibility for the actions they will take, and selecting those issues they need to feed
back to their line managers. The latter is inevitably more time consuming, but will only work
where an organisation's culture allows employees to be empowered by managers to take
ownership of the evaluation and any forthcoming actions. However, it would be wrong to
think of these two approaches as mutually exclusive. Rather, the approach to feedback, like
the rest of the evaluation of SHRM, needs to be tailored to the precise requirements of the
organisation.
