management;
identify the range of different purposes an evaluation can serve;
assess the barriers to evaluation and their causes;
identify the various stakeholders in any evaluation and their need both to contribute and to
receive feedback;
assess the choices to be made in respect of the evaluation process and make suitably informed
decisions;
outline a range of strategies and data collection techniques involving both primary and
secondary data, which may be used to evaluate strategic human resource management;
identify the complexity of issues associated with feeding back the findings of evaluations.
Introduction
Within all of our lives we are constantly evaluating as part of our normal daily
activities. As this book was being researched, the value or worth of many academics' work
in the field of strategic human resource management (SHRM) was assessed. As this book is
being read, its contents will be evaluated: the style in which it is written, the design and
pedagogic features, and the ideas and issues that are raised. The relevance of the contents
to the reader's context will probably be evaluated, along with her or his ability to understand the
material and the suitability of the place and time in which it is being read. Such personal
or informal evaluations of situations and experiences affect one's perception of the world
and underpin the choices that are made.
Given that this process is so central to what people do, it is perhaps surprising that
evaluation, as a planned and formal activity, appears to be so problematic in an
organisational context. Effective evaluation and the promotion of HR strategies require the
systematic collection of data that are subsequently analysed, the findings being presented in
a meaningful and useful form. These data may have been collected through monitoring what
is happening within an organisation over time, perhaps as part of a balanced scorecard
approach to performance management or an HR information system, or specifically to
evaluate a particular strategic HR intervention. However, without the ensuing evaluation,
the effectiveness of one or a series of strategic HR interventions may be unclear. The
knowledge and understanding gained through this process of research, the authors would
argue, enables organisations to have a clearer understanding of the impact of different
strategies and, of equal importance, to adjust their HR interventions to help promote these
strategies. Yet despite this, it is widely recognised that the implementation of HR strategies
has rarely included a planned evaluation (Doyle et al., 2000) and, on those occasions when it
does, the findings are rarely utilised at all, let alone strategically.
Toracco (1997) argues that this lack of evaluation is due to difficulties associated with
the long timeframes required for strategic change in organisations. In particular, it is often
difficult to be certain of the precise impact of specific interventions. These observations are
supported by others (for example Randell, 1994; Skinner, 2004) who emphasise the
difficulty of designing evaluation studies and obtaining data of sufficient quality to
disentangle the effect of a specific HR intervention from other stimuli. Even in the context
of training and development, where there is a great deal of literature written about the need
to evaluate and a widespread acceptance among practitioners of its importance, the reality is
that evaluation rarely progresses beyond the end-of-course 'happy sheet', asking participants
about the operational practicalities of the training experience. Despite such problems, the
need for evaluation is emphasised by models of strategy formulation and development.
These usually incorporate an information gathering and analysis stage, which emphasises
the importance of knowing and understanding an organisation's current situation as part of
developing strategy. For example, as was seen in Chapter 1, top-down development of
strategy involves a number of steps, including analysing an organisation's environment and
its internal resources, within which it is recognised that this information is likely to be
incomplete and therefore imperfect. Similarly, the processual approach to strategy
development involves reflection, evaluation and understanding.
Such research is likely to include data on the external environment as well as internal
objectives, organisational (including HR) capabilities and the need to communicate within
the organisation (Prahalad and Hamel, 1990).
Typical scientific approaches to research often seek to minimise the amount of
involvement between those collecting and analysing the data (the researchers) and those
from whom data are collected on the grounds of maintaining objectivity (Robson, 2002).
Within such a scientific evaluation, evaluators are seen as separate from, rather than
working alongside, an organisation. Findings are disseminated only to the sponsor of the
evaluation rather than all those affected. This conflicts with much that we discuss in this
book in terms of the importance of employee involvement in developing strategy ownership
and understanding (Chapter 12). Such conflict is, perhaps, not too surprising if the purpose
attributed to the evaluation is to understand and explain. We would argue that evaluation can
also provide useful insights about HR issues associated with specific strategies. However, in
evaluating and promoting HR strategies, we believe it is often necessary for the person
undertaking the evaluation to be within or to become part of the organisation. Analysis of
data collected should not take place in a vacuum and judgements need to be made within the
context of the organisation.
In this chapter we therefore argue that the evaluation of HR strategies needs to
involve those affected within the organisation as fully as possible. This is not to say that
evaluation can only be undertaken by people within the organisation. Rather it implies that
where people external to the organisation are used, their role should be to help those within
to perceive, understand and act to improve the situation; an approach akin to Schein's
(1999) process consultation. As part of this it is recognised that, depending upon the
purpose of the evaluation, one or a number of research strategies might be more appropriate.
Evaluation may take place over a range of time horizons. These can range from one-off case
studies, perhaps answering the question 'Where are we now?', through cross-sectional
studies which benchmark HR practices, to longitudinal evaluations perhaps using a series of
employee attitude surveys. Similarly, we recognise that to address particular strategic
objectives some data collection techniques are likely to collect more appropriate data than
others. For example, a questionnaire survey of employees is less likely to discover their in-depth
feelings about a recent downsizing than face-to-face interviews in which an
interviewer takes time to gain the employees' confidence.
This chapter begins by considering the nature of evaluation in terms of what it is, its
benefits and why it may not be undertaken (Figure 4.1). The choices that have to be made
about the process once a decision has been made to evaluate are then explored.
Commencing with a discussion of purpose and context, typical and action research
approaches are considered and different evaluation strategies and data gathering techniques
outlined. The implications of different tools and techniques, and issues relating to the
feeding back of findings, are then discussed. However, we would stress that while this
chapter provides an overview of a range of evaluation strategies and techniques, there is still
a need to read far more widely about these prior to undertaking an evaluation yourself. This
is because the space available is insufficient to enable us to explain the strategies and
techniques in sufficient detail.
If one were to ask a group of people to define evaluation, it would probably be found
that the term meant different things to individuals depending on their background and
experience. As the thinking and practice relating to evaluation have evolved over the last 40
years, a variety of strategies and data collection techniques has developed and there has been
a range of definitions to match. Patton (1997: 192), for example, identifies 57 types of
evaluation as illustrative of the many types available! Key issues in differentiating between
evaluation types include:
Inevitably, given the turbulent business environment of the late twentieth and early
twenty-first centuries, organisations have to contend with a world in which the only constant
is change (Carnall, 1995). In this context, an important aspect of SHRM is to ensure that the
organisation is able to respond in a timely and positive manner to its internal and external
environments. As early as 1987, Guest identified a key role for HR managers within the
global scenario of organisations faced with increasing external and internal pressures, when
he asserted that the capacity to implement strategic plans was an important feature of
successful SHRM. Since then, the literature has frequently argued for the strategic role of the
HR function and authors such as Purcell (1999), Tyson (1999) and Ulrich (1998) have
emphasised the need to manage human resources strategically in order that an organisation's
capacity for change can be improved. Tyson (1999) maintains that HR managers are major
players in the creation of organisational capability. He also suggests that evaluation is an area
where the HR function can make a significant contribution to increase understanding of the
appropriateness of interventions, in effect to learn from experience, and to change strategy as a
consequence.
Other authors specifically identify important contributions that the inclusion of a
planned process of evaluation can make to successful SHRM. Love (1991), for example,
outlines the role of effective evaluation in improving management decision-making through
the provision of information and the development of shared understanding. Kirkpatrick
(1985) argues the importance of feedback in gaining acceptance and commitment to
organisational initiatives, while Carnall (1995) suggests that people need information to
understand new systems and their place in them. Preskill and Torres (1999) argue that
evaluative inquiry helps organisation members reduce uncertainty, clarify direction, build
community, and ensure that learning is part of everyone's job. The sharing of information,
they argue, is essential if new insights and mutual understanding are to be created.
Many of these benefits relate to the common themes of information gathering and
developing shared understanding. At the start of this chapter we highlighted that every
individual evaluates on a personal basis, and that the same is true of our individual
organisational experiences. Reichers et al. (1997) argue that people need to
understand not only the reasons for change but also its ongoing progress and its results.
Individuals at all levels will make their own assessments, constructing their own reality
relating to the necessity for, and the effectiveness of, new initiatives and strategies, often even
when these do not affect them directly. In some cases, these views are shared and tested with
colleagues but, despite this, much may remain tacit rather than explicit. Yet individuals' future
actions will almost certainly be influenced by these assessments, even when they are based
solely on personal perceptions and the subjective evaluations that result from relatively
narrow perspectives.
From reading and thinking about the arguments put forward in the previous subsection and
answering Self-Check Question 4.2, you will have identified a range of reasons to justify the
evaluation of SHRM. If asked, most managers would probably come up with a similar list. Yet,
despite the benefits that evaluation can provide, the reality for many organisations is that
evaluation simply does not happen. Reasons given for this can be grouped into three
overlapping categories:
Linked to the perception that some things are impossible to measure is the, often
unfounded, assumption that an organisation's members do not possess the necessary skills to
produce a credible or competent evaluation. This may lead to a belief that to acquire the
necessary skills is likely to be both time consuming and costly or that an evaluation must
involve the use of expensive external consultants.
In situations where managers are under increasing pressure, quick fixes may be
attractive (Swanson, 1997) and it may appear easier to imitate rather than innovate (Brunsson
and Olsen, 1998). This unquestioned belief also serves to reduce the perceived need to
evaluate formally, as those responsible already 'know' the effect will be positive (Skinner,
2004). The power and impact of such assessments are recognised widely in the literature.
Writing about the measurement of business excellence, Kanji (2002) observes that 'gut
feelings' rather than fact and measurement are the basis of too many management decisions.
Such informal evaluations are made on the basis of unverified information, experience,
instinct or the opinion of the most influential people rather than on information extracted
correctly from reliable data (Conti, 1997). Easterby-Smith (1994) notes also a preference of
managers, particularly at senior levels, for information received via their own informal
information channels and observes that this information tends to be far more influential than
that produced via more formal channels. Clark and Salaman's (1998) 'ideology of
management' reinforces the belief of managers in the value of their own judgements, making
them unlikely to question their own interpretations or to acknowledge the limits of their own
understanding. These factors serve not only to undermine a perceived need for a planned
evaluation but may also mean that objectives and expected outcomes of SHRM initiatives are
unclear and unarticulated.
Allied to the discussion of the lack of a perceived need to evaluate is the implicit
recognition that, for many, evaluation is an afterthought. You will probably have noticed that,
in many of your HRM textbooks, discussion of evaluation occurs rarely and, if it does, tends
to appear towards the end. The implication is that evaluation of HR strategies and associated
interventions is something that is only thought about after the event has happened and is not
central to the implementation process. However, the end of the implementation is the point at
which those who have been involved are likely to be looking towards the next project and
evaluation of what is perceived as a past event is not high on their personal agenda. In
addition, if an evaluation process is not included in implementation plans from the beginning
it is unlikely that thought will have been given to the systems, processes and resources that
will be needed to collect information that will enable the evaluation to take place.
The difficulties associated with dealing with negative outcomes provide the final
category of reasons why evaluation of SHRM initiatives may not happen. For many HR
managers, their previous experience of evaluation has been negative and divisive rather than
a positive process of improvement and shared learning. This is largely due to the blame
culture which exists in a wide range of companies (In Practice 4.3). Bloomfield (2003)
characterises such cultures as ones in which it is sensible to keep your head down, cover your
back, do your best to hide mistakes, or, at the very least, ensure there is always someone else to
share the blame. Not surprisingly, the expectation is that any planned, explicit evaluation will
inevitably focus upon accountability, and this will inexorably lead to criticism and the
apportioning of responsibility for failure. As Tyson (1999) notes, managers are more than
passive bystanders when it comes to the importation of new ideas, often selecting,
reinterpreting and giving relative emphasis to ideas according to their own agendas. For these
reasons, they become the obvious targets in a blame culture. Inevitably on this basis, defensive
reasoning and routines at both an individual and an organisational level (Argyris, 1994) are
unlikely to encourage the pursuit of planned, explicit, evaluation of strategic human resource
management. HR managers are likely to be aware of the risks involved in terms both of
personal criticism and of the activity being unpopular with peers and others who may also
feel vulnerable.
The question that most probably arises at this stage is: 'Which aspects of SHRM
should be evaluated?' It would, perhaps, appear unconvincing to answer 'all of them'.
However, if an HR intervention is of strategic importance to an organisation, then the
organisation needs to evaluate its impact. For some aspects of SHRM, such as recruitment
and selection (Chapter 8) and human resource development (Chapter 10), data may already
be collected and held on the organisation's human resource information system (In Practice
4.4). In such instances, the key issues will be ensuring that:
The purpose of evaluation
Business research textbooks, for example Saunders et al.
(2007), often place research projects on a continuum according to their purpose and context
(Figure 4.2). At one end of this continuum are evaluations undertaken to advance knowledge
and theoretical understanding of processes and outcomes including SHRM. This basic
research is therefore of a fundamental rather than applied nature, the questions being set and
solved by academics with very little, if any, focus on use of research findings by HR
managers. At the other end are evaluations that are of direct and immediate relevance to
organisations, address issues which they consider to be important and present findings in
ways which can be understood easily and acted upon. This is usually termed applied research
and is governed by the world of practice. The evaluation of SHRM is, not surprisingly, placed
towards the applied end of this continuum. Such research is oriented clearly towards
examining practical problems associated with strategic HR interventions or making strategic
decisions about particular courses of action for managing human resources within
organisations. It therefore includes monitoring of operational aspects of HRM such as
absence (Chapter 9), turnover and recruitment (Chapter 8), thereby helping organisations to
establish what is happening and assess effectiveness of particular HR interventions.
Within any evaluation, it is important to establish the purpose of both the evaluation
and the strategic HR intervention or process that is being evaluated. Easterby-Smith (1994)
defines four possible purposes of an evaluation but cautions that it is unrealistic to expect any
evaluation to serve more than one. These are:
Not surprisingly, given the discussions in the previous paragraph, it is also important
to identify who the stakeholders are in relation to the HR initiative being evaluated and the
evaluation itself. Any changes resulting from the evaluation are likely to affect a range of
individuals and groups who are likely to be both internal and external to the organisation.
These stakeholders are likely to differ in the extent of their interest, their priorities and their
level of influence for, as Easterby-Smith (1994) observes, any evaluation process is a
complex one which cannot be divorced from issues of power, politics and value
judgements. Organisational politics can affect any stage of an evaluation (Russ-Eft and
Preskill, 2001) from influencing the decision to evaluate through to the way in which the
findings are used. In any evaluation there is likely to be a commissioning or dominant
stakeholder or stakeholder group (often referred to as principal client or sponsor) and in an
organisational setting this is often a management group. However, the nature of HRM
strategies and initiatives is such that in each case there will be others who have both a
legitimate interest and a stake in the findings of any evaluation process. Not least, these include
those managers and employees whose participation in the evaluation is necessary and who, through
sharing their views and experience, might feel that they were owed access to the findings they
had helped create, as well as those who will use the evaluation findings.
Evaluation can take place at a number of different levels based upon its focus.
Although the labels used differ, evaluation models in effect distinguish between levels that
range from an operational level measuring reactions through to a more strategic focus on
organisational performance. Developed in 1959 in relation to training
evaluation, Kirkpatrick's model is still the most widely recognised (Phillips, 1991; Russ-Eft
and Preskill, 2001). This highlights that evaluation can be undertaken to assess operational
interventions (level 1 and occasionally 2), to support medium-term or tactical interventions
(level 3 and occasionally level 2) or it can focus on more strategic interventions (level 4). As
can be seen from Table 4.2, Kirkpatrick's first two levels focus on the effectiveness of the
intervention as judged by the recipients. They relate primarily to the operational level and
while they may affect the design of the HR intervention, such as a training event, evaluation at
these two levels is unlikely to have any strategic impact. In contrast, evaluation at levels 3
and 4 has an increasingly strategic impact. These levels consider the effect of the process or policy
being evaluated on the wider organisation, the achievement of its goals and ultimately the
implications for organisational performance. Subsequent researchers, for example Hamblin
(1974), have added a further strategic level of evaluation (5) which focuses on the wider
contribution the organisation is now able to make. However, despite these observations, it is
worth noting that Kirkpatrick did not describe his model originally as hierarchical or suggest
that one level would impact upon the next.
In terms of approach, as with any evaluation, decisions have to be made regarding the
techniques to be used to collect data. These should reflect the choices that have been made in
relation to the topics covered in the preceding section. In terms of evaluation, the use of
sound method and reliable data are critical (Stern, 2004). The decision about purpose will
also determine whether the evaluation needs to be formative or summative in nature. A
formative evaluation takes place during the implementation with the intention of feeding back
into the process and improving both the process and the outcomes of the initiative where
appropriate. A summative evaluation usually occurs at the end of the implementation and is
about determining the worth or value of what has been done: whether success criteria were
met and whether the results justified the cost. It would therefore be possible to undertake both a
formative and a summative evaluation of the same initiative.
One way of helping ensure clarity of purpose and objectives is to spend time
establishing and agreeing these with the sponsor of the evaluation. This is unlikely to be as
easy as it might seem and will be time consuming. As part of the process, it can be argued
that it is essential to ensure that both the person undertaking the evaluation and the sponsor
have the same understanding. Another, and equally important, aspect of ensuring clarity of
purpose relates to the understanding and insight the person undertaking the evaluation brings.
While her or his previous experience is likely to be important, this understanding is also
likely to be drawn from reading about others' experiences in similar situations; a process
more often referred to as reviewing the literature. Indeed, your reading of this book is based
upon the assumption that you will be able to apply some of the theories, conceptual
frameworks and ideas written about in this book to your strategic management of HR.
Evaluation strategies
Once a clear purpose for evaluation has been established, a variety of evaluation
strategies may be adopted. Typically, evaluations are concerned with finding out the extent to
which the objectives of any given action, activity or process, such as the introduction of a
new training intervention, have been achieved. In other words, evaluation is concerned with testing the
value or impact of the action, activity or process, usually with the view to making some form
of recommendation for change (Clarke, 1999). As part of the evaluation, it is necessary to
gather data about what is happening or has happened and analyse them. This can be
undertaken either as a snapshot or longitudinally using a variety of data collection techniques
such as interrogating existing HR databases, interview, questionnaire and observation.
Findings based upon the analysis of these data are subsequently disseminated back to the
sponsor whose responsibility it is to take action (In Practice 4.5). Consequently, there is no
specific requirement upon those involved in the research to take action (Figure 4.3). This is in
contrast to action research which will be discussed later.
Saunders et al. (2007) emphasise that, when making choices, what matters is not the
label attached to a particular strategy,but whether the strategy is appropriate to the purpose
and objectives. In particular, the use of a sound evaluation strategy and the collection of
reliable data are critical (Stern, 2004). Four evaluation strategies tend to be used in the
evaluation of SHRM: survey; case study; experiment; existing (secondary) data.
These strategies should not be thought of as mutually exclusive; for example, a case
study in an organisation may well involve using a survey to collect data and combine these
data with existing (secondary) data from the organisation's HR information system. Similarly,
an experimental design, such as testing the relative impact of a number of different HR
interventions, will often be undertaken using a number of different case studies. In addition,
these strategies can be applied either longitudinally or cross-sectionally. The main strength of
a longitudinal perspective is the ability it offers to evaluate the impact of SHRM interventions
over time. By gathering data over time, some indication of the impact of interventions upon
those variables that are likely to affect the change can be obtained (Adams and Schvaneveldt,
1991). In contrast, a cross-sectional perspective seeks to describe the incidence of a particular
phenomenon or particular phenomena, such as the information technology skills possessed by
managers and their attitude to training, at one particular time.
Surveys are perhaps the most popular strategy for obtaining data to evaluate SHRM
interventions. Using this strategy, a large amount of data can be collected from a sizeable
population in an economic way (Saunders et al., 2007). This strategy is often based around a
questionnaire. Questionnaires enable standardised data to be collected, thereby allowing easy
comparison. They are also relatively easily understood and perceived as authoritative by most
employees. However, the questionnaire is not the only data collection technique that can be
used within a survey strategy. Structured observations, such as those frequently associated
with organisation and methods (O&M) evaluations, and structured interviews involving
standardised questions, can also be used.
Survey questions can be put to both individual employees and groups of employees.
Where groups are interviewed, their selection will need to be thought about carefully. We
would advocate taking a horizontal slice through the organisation to select each group. By
doing this, each member of an interview group is likely to have similar status. In contrast,
using a vertical slice would introduce perceptions about status differences within each group
(Saunders et al., 2007).
Robson (2002: 178) defines case study as 'a strategy for doing research which
involves an empirical investigation of a particular contemporary phenomenon within its real
life context using multiple sources of evidence'. This strategy is widely used when
researching the impact of HR interventions within an organisation or part of an organisation
such as a division. The data collection techniques used can be various including interviews,
observation, analysis of existing documents and, like the survey strategy, questionnaires.
However, this is not to negate the importance of comparative work, benchmarking or setting
a case study in a wider organisational, industrial or national context. This might be achieved
by combining a case study strategy with the analysis of existing (secondary) data that have
already been collected for some other purpose.
The increasing use of existing data as a strategy to evaluate SHRM has been
facilitated by the rapid growth in computerised personnel information systems over the past
decade. These relational databases store HR data in a series of tables. Each table can be
thought of as a drawer in a conventional filing cabinet containing the electronic equivalent of
filing cards. For example, one table (drawer) may contain data about individual employees.
Another table may contain data about jobs within the organisation, another about grades and
associated salaries, while another may contain data on responses to recruitment
advertisements. These tables are linked together electronically within the database by
common pieces of data such as an employee's name or a job title.
Although these data have been collected for a specific purpose, they can also be used
for other purposes. Data collected as part of performance and development appraisals might
be combined with data on competences and recruitment and selection to support the
development of a talent management strategy. As part of this, reports would be produced
which match current employees' profiles with future requirements due to likely retirements
within the organisation, thereby highlighting specific training needs.
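This kind of cross-purpose reuse of HR data can be sketched with a small relational example. The sketch below is illustrative only: the two-table schema, and all table and column names (employees, appraisals, emp_id and so on), are hypothetical assumptions rather than anything drawn from a real HR information system.

```python
import sqlite3

# In-memory database standing in for an HR information system.
# All table and column names here are hypothetical illustrations.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table ("drawer" in the filing-cabinet analogy) per kind of record.
cur.execute(
    "CREATE TABLE employees "
    "(emp_id INTEGER PRIMARY KEY, name TEXT, job_title TEXT, retirement_year INTEGER)"
)
cur.execute("CREATE TABLE appraisals (emp_id INTEGER, year INTEGER, rating TEXT)")

cur.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)", [
    (1, "A. Patel", "Team Leader", 2027),
    (2, "B. Jones", "Analyst", 2041),
])
cur.executemany("INSERT INTO appraisals VALUES (?, ?, ?)", [
    (1, 2024, "exceeds"),
    (2, 2024, "meets"),
])

# Reusing data collected for another purpose: join appraisal ratings to
# likely retirements to flag posts needing succession or training plans.
rows = cur.execute(
    "SELECT e.name, e.job_title, a.rating "
    "FROM employees e JOIN appraisals a ON e.emp_id = a.emp_id "
    "WHERE e.retirement_year <= 2030"
).fetchall()
print(rows)
```

The common key (`emp_id` here) plays the role of the shared pieces of data, such as an employee's name or job title, that link the tables together electronically.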
External sources of secondary data tend to provide summaries rather than raw data.
Sources include quality daily newspapers, government departments' surveys and published
official statistics covering social, demographic and economic topics. Publications from
research and trade organisations such as the Institute for Employment Studies at Sussex
University and Income Data Services Ltd, cover a wide range of human resource topics, such
as performance-related pay and relocation packages.
The second emphasises the iterative nature of the process of diagnosing, planning,
taking action and evaluating (Figure 4.4). The action research spiral commences within a
specific context and with a clear purpose. This is likely to be expressed as an objective
(Robson, 2002). Diagnosis, sometimes referred to as fact finding and analysis, is undertaken
to enable action planning and a decision about the actions to be taken. These are then taken
and the actions evaluated (cycle 1). Subsequent cycles involve further diagnosis taking into
account previous evaluations, planning further actions, taking these actions and evaluating.
The third theme relates to the involvement of practitioners in the evaluation and in
particular a collaborative democratic partnership between practitioners and evaluators, be
they academics, other practitioners or internal or external consultants. Eden and Huxham
(1996: 75) argue that the findings of action research result from involvement with members
of an organization over a matter which is of genuine concern to them. Therefore, the
evaluator is part of the organisation within which the evaluation and change process are
taking place (Coghlan and Brannick, 2005), rather than more typical evaluation where, for
example, survivors of downsizing are subjects or objects of study. Finally, action research
should have implications beyond the immediate project; in other words it must be clear that
the results could inform other contexts. For academics undertaking action research, Eden and Huxham (1996) link this to an explicit concern for the development of theory. However,
they emphasise that for both internal and external consultants this is more likely to focus
upon the subsequent transfer of knowledge gained from one specific context to another. Such
use of knowledge to inform other contexts, the authors believe, also applies to others,
undertaking action research such as practitioners.
Thus action research differs from typical evaluation due to: a focus upon change; the recognition that it is an iterative process involving diagnosis, planning, action and evaluation; and the involvement of employees (practitioners) throughout the process.
Action research can have two distinct foci (Schein, 1999). The first of these, while involving those being evaluated in the process, aims to fulfil the agenda of those undertaking the evaluation rather than that of the sponsor. This does not, however, preclude the sponsor from also benefiting from any changes brought about by the evaluation. The second starts with the needs of the sponsor and involves those undertaking the evaluation in the sponsor's issues, rather than the sponsor in theirs. Such consultant activities are termed
process consultation by Schein (1999). The consultant, he argues, assists the client to
perceive, understand and act upon the process events that occur within their environment in
order to improve the situation as the client sees it. Within this definition, the term client
refers to the person or persons, often senior managers, who sponsor the evaluation. Using Schein's analogy of a clinician and clinical enquiry, the consultant (evaluator) is involved by the sponsor in the diagnosis (action research), which is driven by the sponsor's needs. It therefore follows that subsequent interventions are jointly owned by the consultant and the sponsor, who is involved at all stages. The process consultant therefore helps the sponsor to gain the skills of diagnosing and fixing organisational problems so that he or she can continue to improve the organisation on his or her own.
Schein (1999) argues that the process consultation approach to action research is more
appropriate because it better fits the realities of organisational life and is more likely to reveal
important organisational dynamics. However, it is still dependent upon obtaining data which meet the purpose of the change evaluation and are of sufficient quality to allow causal conclusions to be drawn. Many authors (for example Randell, 1994) have emphasised the difficulties associated with obtaining such data. For example, the data collected may well be unreliable, such as where there are low response rates, or, in the case of HR information systems, incomplete or out of date. We now turn to the need to identify appropriate techniques for obtaining credible data, whichever type of evaluation is being undertaken.
The data gathering techniques used need to be related closely to the purpose of the
evaluation. For all change evaluations, it is important that data appear credible and actually
represent the situation. This issue is summarised by Raimond (1993: 55) as the 'How do I know?' test and can be addressed by paying careful attention to data gathering techniques.
This is especially important for longitudinal evaluation as techniques used and the questions
asked at the start of a change process may be inappropriate if transformational change has
occurred (Golembiewski et al., 1976). For cross-sectional evaluations or evaluations over
shorter time periods, the likelihood of techniques and questions no longer being appropriate is
far lower. For example, the impact of a management development programme might be
evaluated using data gathered by one or a number of different techniques dependent upon the
focus of the analysis. These might be cross-sectional, such as in-course and post-course questionnaires and observations by trainers and others, and/or longitudinal, such as attitude surveys and psychometric tests before and after the event. Secondary data regarding employee performance, already gathered by existing appraisal systems, might also be used.
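A longitudinal before-and-after comparison of attitude survey scores can be sketched as follows. The scores and the 1-5 rating scale are invented for illustration; a real evaluation would also test whether any shift is statistically significant rather than relying on raw means.

```python
# Hedged sketch: comparing mean attitude ratings before and after a
# management development programme (hypothetical data, 1-5 scale).
from statistics import mean

before = [3.1, 2.8, 3.4, 3.0, 2.9]  # pre-programme ratings
after = [3.6, 3.2, 3.9, 3.4, 3.5]   # post-programme ratings, same respondents

shift = mean(after) - mean(before)
print(f"mean shift: {shift:+.2f}")
```

With these invented figures the mean rating rises by 0.48 of a scale point; whether such a shift is meaningful depends on sample size and the reliability of the instrument.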
Using observation
Where evaluation is concerned with what people do, such as how they respond after a
particular training intervention, an obvious way to collect the data is to observe them. This is
essentially what observation involves: the systematic observation, recording, description, analysis and interpretation of people's behaviour (Saunders et al., 2007). There are two main types of observation: participant and structured. Participant observation is qualitative and derives from the work of social anthropologists in the twentieth century. It has been used widely to study changing social phenomena. As part of this, the evaluator attempts to become fully involved in the lives and activities of those being evaluated and shares their experiences, not only by observing but also by feeling those experiences (Gill and Johnson, 2002).
By contrast structured observation is quantitative and, as its name suggests, has a high
level of predetermined structure. It is concerned with the frequency of actions, such as in time
and motion studies, and tends to be concerned with fact finding. This may seem a long way
from the discussion of evaluating SHRM with which this chapter began. However, re-examining the typical evaluation and action research processes (Figures 4.3 and 4.4) emphasises that this is not the case. Both these processes require facts (data to be collected or diagnosis to take place) before evaluation can occur and, in the case of action research, action
taken.
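The fact-finding character of structured observation amounts to tallying how often predefined behaviours occur. The behaviour codes and the observation log below are illustrative assumptions, not a published coding scheme.

```python
# Sketch of structured observation as frequency counting: tallying
# occurrences of predefined behaviour codes recorded during a session.
from collections import Counter

observation_log = [
    "asks question", "gives feedback", "asks question",
    "interrupts", "gives feedback", "asks question",
]

frequencies = Counter(observation_log)
print(frequencies.most_common())
```

The resulting frequency table (here, "asks question" observed three times) is exactly the kind of fact that feeds the diagnosis stage before evaluation can occur.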
Using interviews
Saunders et al. (2007) argue that structured interviews are in reality a form of
questionnaire. This is because, like other ways of administering questionnaires, the
respondent is asked to respond to the same set of questions in a predetermined order (de Vaus,
2001). Because each respondent is asked to respond to the same set of questions, a
questionnaire provides an efficient method of gathering data from a large sample prior to
analysis.
Questionnaire data may also be collected over the telephone, by post, by delivering and collecting the questionnaire personally or, as is increasingly the case in organisations' annual staff attitude surveys, by an online questionnaire. Some organisations use questionnaires developed by external organisations to evaluate SHRM interventions and benchmark themselves against other organisations, for example Cooper et al.'s (1994) Occupational Stress Indicator. Others either use consultants to develop a bespoke questionnaire or develop their own in-house. However, before deciding to use a
questionnaire we would like to include a note of caution. Many authors (for example
Oppenheim, 2000; Bell, 2005) argue that it is far harder to produce a questionnaire that
collects the data you need than you might think. Each question will need to be clearly worded
and, for closed questions (Key Concepts 4.5), possible responses identified. Like other data
collection techniques, the questionnaire will need to be pilot tested and amended as
necessary. The piloting process is of paramount importance as there is unlikely to be a second
chance to collect data. Even if finance for another questionnaire were available, people are
unlikely to be willing to provide further responses.
Another important aspect is identifying who should undertake the evaluation. A choice must be made between an evaluator who is an employee of the organisation and an external consultant and, within that, whether the individual needs to be experienced and/or trained as an evaluator, termed 'professional' in Table 4.3. Table 4.3 highlights some of the factors that
may be associated with each choice.
Analysis and feedback are important in both typical evaluations and action research.
Typical evaluation usually ends with a report and a presentation to the sponsor of findings
from the analysis. In contrast, an evaluator involved in an action research project, perhaps as
a process consultant, is likely to be involved also in developing actions and revising the
intervention as necessary.
Analysis
Feeding back
Findings based upon data analysis are fed back to the sponsor, usually in the form of a
report and, perhaps, a presentation. Where the report contains findings that may be
considered critical of the organisation in some way, this may create problems. However, we
would argue that to maximise benefit it is important that these findings are fed back rather
than being filtered so as not to offend. Typically, especially where large groups are involved, a
summary of the feedback is cascaded from the top down the organisation. This may make use
of existing communication structures such as newsletters, notice boards and team briefings.
Alternatively, if rapid feedback is required then additional newsletters and team briefings
might be used. The evaluation sponsor sees the full report of the findings first. Subsequently,
a summary may be provided for circulation to all employees or posted on the organisation's
intranet. As part of the team briefing process, each managerial level within the hierarchy is
likely to see its own data and is obliged to feed the findings down to its own subordinates.
Managers at each level are subsequently expected to report on what they are doing about any problems identified, in other words the actions they intend to take.