
INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT VOLUME 14 NUMBER 3 SEPTEMBER 2006

Information Exchange Article


Scale Properties of the Team Role
Self-Perception Inventory
Stephen Swailes*, University of Hull
Aitor Aritzeta, The University of the Basque Country

*Address for correspondence: Stephen Swailes, Hull University Business School, University of Hull, Cottingham Rd, Hull HU6 7RX, England. E-mail: s.swailes@hull.ac.uk

We present an analysis of the dimensionality of the scales that assess the nine team roles
contained in the Team Role Self-Perception Inventory. Using a data set of over 14,000
respondents, reasonable fit to seven-item unidimensional factor models was obtained for
all scales except Implementer and Specialist. Two-factor structures for all scales showed
improvements in model fit although for all roles a small and unreliable second factor was
found. Bi-dimensional structures reflect the separate loading of negatively worded items
and/or different item content areas. Five-item scales provide a more economical version of
the inventory and areas for further development of the instrument are identified.

Introduction

Organizations rely heavily on teamwork to sustain a competitive position in their marketplace. The notion that individuals display distinctive but natural roles in work teams seems widely acknowledged, as several attempts have been made to develop typologies of team roles (Davis, Millburn, Murphy, & Woodhouse, 1992; Margerison & McCann, 1990; Parker, 1990; Spencer & Pruss, 1992). One typology of nine team roles that is widely used in Europe and elsewhere was produced by Meredith Belbin in the United Kingdom following his research into the effects of team composition on team performance (Belbin, 1981, 1993; Belbin, Aston, & Mottram, 1976). The nine roles are illustrated in Appendix A.

The typology is operationalized through the Team Role Self-Perception Inventory (TRSPI), which produces an assessment of a person's preferences towards each of the roles in terms of a rank ordering. Use of the TRSPI should be accompanied by a parallel instrument, the Observer Assessment Sheet (see Senior & Swailes, 1998), which asks people who know the person being assessed to select adjectives that they feel best describe them. Perceptions from self and others are, ideally, compared and used in discussions about one's team role in the context of personal and team development. While the Observer sheet is widely used in training and development situations, its use in research studies is less common. For exceptions see O'Doherty (2005) and Senior (1997).

The first version of the TRSPI (Belbin, 1981) measured preferences towards eight team roles but, following revisions, a ninth role was added such that the current version measures preferences towards nine roles (Belbin, 1993). Despite some initial, negative assessments of its psychometric properties (e.g., Furnham, Steele, & Pendleton, 1993a; Furnham, Steele, & Pendleton, 1993b) the TRSPI continues to be used widely in training and development programmes and is used in studies into the relationships between team roles and other variables. See Aritzeta, Swailes, and Senior (in press) for a review of empirical studies using the inventory.

Instrument Structure and Properties

One of the difficulties facing researchers who want to assess the TRSPI's properties stems from its structure. Seven items are used to represent each team role, making 63 items in total plus seven items that form a "dross" scale. The instrument is divided into seven sections, with each section containing one item per role and one "dross" scale item. Respondents distribute 10 points between the 10 items in each section such that, usually, three to five items are scored leaving several unscored. Thus total scores across the instrument are always equal (70) and, as such, the inventory produces data with ipsative properties.
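To make the constant-sum structure concrete, the sketch below simulates one respondent under the scoring rules just described. This is our illustration, not the publisher's scoring code: the role abbreviations follow Table 1, and points are allocated at random rather than by real preferences.

```python
import random

ROLES = ["CF", "CO", "IMP", "ME", "PL", "RI", "SH", "SP", "TW"]
SECTION_ITEMS = ROLES + ["dross"]  # ten items per section

def random_section() -> dict:
    """Allocate the section's 10 points one at a time across its 10 items
    (a crude stand-in for a real respondent's choices)."""
    scores = dict.fromkeys(SECTION_ITEMS, 0)
    for _ in range(10):
        scores[random.choice(SECTION_ITEMS)] += 1
    return scores

respondent = [random_section() for _ in range(7)]  # seven sections

# A role's total is the sum of its single item score from each section.
role_totals = {role: sum(section[role] for section in respondent)
               for role in ROLES}

# The ipsative property: every respondent's grand total is exactly 70,
# so a high score on one role must come at the expense of the others.
assert sum(sum(section.values()) for section in respondent) == 70

# The instrument's output is a rank ordering of role preferences.
print(sorted(role_totals.items(), key=lambda kv: -kv[1]))
```

Because the grand total never varies, only the rank order of the role totals carries information, which is the intra-individual restriction discussed next.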


These properties mean that negative correlations between scale scores are inevitable and the average correlation between ipsative measures is −1/(k − 1), where k is the number of measures, in this case nine (Dunlap & Cornwell, 1994). In addition, due to scoring restraints, ipsative instruments should only be used for intra-individual comparison. This arises as ipsative scoring shows only a person's relative position among the several dimensions being scored. We do not know the absolute strength of one person's preference for a particular team role relative to another person, for instance, and for this reason inter-individual comparisons should not be undertaken.
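As a worked check on that figure, here is a sketch of the derivation under the simplifying assumptions of equal scale variances and a common average intercorrelation (the full treatment is in Dunlap & Cornwell, 1994): the fixed total of 70 means the variance of the sum of the k scale scores is zero, so

\[
\operatorname{Var}\!\left(\sum_{i=1}^{k} X_i\right) = k\sigma^{2} + k(k-1)\sigma^{2}\,\bar{r} = 0
\quad\Longrightarrow\quad
\bar{r} = -\frac{1}{k-1} = -\frac{1}{9-1} = -.125 .
\]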
Problems associated with analysing and interpreting ipsative data have long been recognized. The fundamental problem is that scores given by respondents to items retain a level of interdependence and are not independent as is the case in a normative instrument. Because of the average negative correlations that arise, conventional factor analytic approaches are discouraged (e.g., see Dunlap & Cornwell, 1994, for a review). Dunlap and Cornwell deduced that principal components analysis "will produce artifactual bipolar factors that result, not from the true underlying relationships between the variables, but from relationships induced solely by the ipsative nature of the measure" (p. 122). The risk of identifying an incorrect factor structure instead of the true relationships is too great. Problems also arise when computing reliability estimates as item scores are not measures of the construct that they are intended to measure plus error, as Classical Test Theory requires, but "as a response to an item within the context of an item set and the properties of the other items therein" (Meade, 2004, p. 537).

The TRSPI, as a forced-choice instrument, is therefore open to the issues outlined above. However, it is clear that the way forced-choice instruments are designed does influence the scope available to analyse them. The most extreme case occurs where an instrument represents two factors with two items in each item set where only one item can be scored. Saville and Willson (1991) showed that where the number of factors is large then the correlation between normative and ipsative scale scores is high and that reliability estimates are not overestimated. Increasing the number of factors decreases interdependence among them.

The grouping of items also influences the level of ipsativity. The most desirable situation is where an item from one scale occurs with an item from all other scales in an item set. It is also desirable that traits are measured with the same number of items (Meade, 2004, p. 538). Large item sets will also help to reduce interdependence. The TRSPI assesses nine team roles using seven items per role. The items are divided into seven item sets with one item per role in each set, and so scores given to items in one set are not dependent on the scores given to any other set. In addition, scores given to the seven items within a scale are not dependent upon each other although they are influenced by the scores given to other scale items.

The TRSPI produces a ranking of a person's preferences towards nine team roles with the highest ranked (natural) roles reflecting the scales given the highest scores. If only scores that are given to the highest ranked roles are analysed then the level of interdependence within these items will be at most very small. Furthermore, the deletion of the scores given to any "dross" scale items helps to increase the level of variability in total scores, which will also be assisted by large sample sizes. In light of these features we suggest that normative statistical approaches can be usefully applied to TRSPI data for certain analyses (see Hicks, 1970). While the instrument's overall factor structure should not be examined from TRSPI data alone, it seems clear that scale items can be investigated for reliability along with the factor structure of individual scales.

Previous Psychometric Evaluation

The earliest attempts at evaluation were marked by small sample sizes. This led to researchers assigning a "score" of zero to the items that respondents did not distribute points to in order to produce scales with seven scored items for analysis, even though a large proportion of the items were "scored" in this way. We have reservations about the appropriateness of this method because, as most items are unscored, the covariance matrices on which reliability estimates were based relied heavily on "scores" (zeros) not provided by respondents. Reliability estimates obtained following this procedure were low (Broucek & Randell, 1996; Furnham et al., 1993a).

While the initial studies were quite critical of the TRSPI, more recent work using large data sets and which utilizes only the items scored by respondents has indicated that scale reliability is much better than previously estimated (Swailes & McIntyre-Bhatty, 2002). Theoretically, the scales are unidimensional and, as internal-consistency reliability estimates assume unidimensionality, it is important to understand more about the structures of the scales in the instrument. A review of research into the nature of the TRSPI and the relationships between team roles and other variables suggested that discriminant validity between the nine roles was lacking (Aritzeta et al., in press).

Previous work on the factor structure of the inventory has demonstrated bipolar structures (Beck, Fisch, & Bergander, 1999; Dulewicz, 1995; Furnham et al., 1993a; Senior, 1998), whether using the TRSPI or personality measures to construct team roles, and some authors have explained factor bidimensionality in terms of the ipsative format of the TRSPI. However, acknowledging that such an effect is present to some extent, it is necessary to explore further whether the internal structure of the TRSPI's scales could be a reason for the lack of discriminant validity and whether their structure can explain the factor structures found by previous authors. If scale structures are incoherent or depart strongly from unidimensionality then this could be an explanation for weak discrimination between team roles.


Only one previous study has looked at this issue and confirmatory factor analysis revealed that five scales showed very good fit to a unidimensional structure as indicated by p values above .05 (Swailes & McIntyre-Bhatty, 2003). These scales were Coordinator, Monitor Evaluator, Plant, Specialist, and Teamworker. The other four scales showed less good fit: Completer Finisher C/df = 1.86, comparative fit index (CFI) = .91; Implementer C/df = 3.32, CFI = .85; Resource Investigator C/df = 1.80, CFI = .95; and Shaper C/df = 2.63, CFI = .95. In addition, there were indications that the Completer Finisher, Implementer and Shaper scales were bi-dimensional, and the sample sizes were less than 100 for two roles and between 100 and 200 for five roles. This paper presents a more detailed analysis of scale structures on a larger sample in an attempt to provide a more exhaustive exploration of the instrument's properties and the stability of its scales.
Method

Sample

The study used data from 14,311 respondents to the English version of the nine-role TRSPI. The dataset was provided by the test publisher and respondents are drawn from a wide range of occupations, management roles and seniority. Forty per cent of respondents were female.
Data Analysis

Data were analysed with AMOS version 5.0 and exploration of the data suggested that asymptotically distribution-free (ADF) estimation should be used in light of the sharp departures from normality observed for most variables. This was confirmed by running 1000 bootstrap samples to compare estimation criteria: the smallest mean discrepancy was obtained with ADF estimation. One-factor models in which all seven items loaded onto a single factor were fitted to the data. Subsequently, the specification search facility in AMOS was used to examine the fit of all possible two-factor structures. In this approach, any measured variable (in this case the scored items) can depend upon any factor (Arbuckle, 2003). With seven items loading onto two factors there are potentially 16,384 (2^14) possible models. Most of these models are discarded as unidentified, as inadmissible, or because of poor fit indices. To help choose from the surviving two-factor models, scree plots and best-fit plots were examined for each of the model fitting attempts and these suggested evaluation of models with 15 parameters.
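The model count quoted above can be reconstructed as follows (our arithmetic, reflecting the specification-search setup in which any item may load on either factor): with 7 items and 2 factors there are 7 × 2 = 14 optional item-to-factor loadings, each either included or excluded, giving

\[
2^{7 \times 2} = 2^{14} = 16{,}384
\]

candidate loading patterns, most of which are then discarded as unidentified, inadmissible or poorly fitting.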
Assessing Fit

There are many ways of assessing the fit of structural models although much judgement remains in reaching decisions about how well data fit a model – see Medsker, Williams, and Holahan (1994) for a review. A non-significant p value for the discrepancy function, C, indicates that there is no significant difference between the data and the model and is thus an indicator of very good fit. Small differences between the model and the data will, however, yield significant p values for C when sample sizes become large (typically over 100) and so other fit indices have been developed. We used the ratio of C to the number of degrees of freedom in the model (C/df), which is ideally below three (Medsker et al., 1994); the root mean square error of approximation (RMSEA), for which values below .08 represent "reasonable" fit and below .05 represent "close" fit (Browne & Cudeck, 1993); the lower and upper limits of a 90% confidence interval on the population value of RMSEA; and a p value (PCLOSE) for testing the null hypothesis that RMSEA is no greater than .05. Also reported are the normed fit index (Bentler & Bonnett, 1980), for which values less than .9 usually indicate substantial scope for improvement; the incremental fit index (Bollen, 1989), which when close to unity indicates very good fit; the goodness of fit index, which also approaches unity at very good fit; and the CFI, which is 1.0 at perfect fit and is ideally .95 or above (Hu & Bentler, 1999). Reliability was assessed using a formula for composite reliability (Bagozzi, 1994) along with Cronbach's α for comparison.
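The reliability and fit quantities reported in Table 1 can be computed from summary statistics. Below is a minimal sketch, assuming standardized loadings for the composite reliability formula and the usual RMSEA point estimate; it is our illustration rather than the authors' code, and the function names are ours.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability from standardized factor loadings,
    taking each item's error variance as 1 - loading**2."""
    lam = np.asarray(loadings, dtype=float)
    error_var = (1.0 - lam ** 2).sum()
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var)

def fit_summary(C, df, n):
    """C/df plus the usual RMSEA point estimate sqrt((C - df) / (df (n - 1)))."""
    rmsea = np.sqrt(max(C - df, 0.0) / (df * (n - 1)))
    return C / df, rmsea

# Illustrative check against the Teamworker one-factor row of Table 1:
# C = 11.9 with df = 14 and n = 612 gives C/df of about .85 and RMSEA = 0,
# matching the TW 1F7 entries.
print(fit_summary(11.9, 14, 612))
```

Because composite reliability weights items by their estimated loadings, it can diverge from α when some loadings are weak, which is consistent with the gap between the two values in the Resource Investigator one-factor row.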


Results

Thirty-seven items had a modal score of two and 26 had a modal score of one. Mean item scores ranged from 1.58 to 2.69. The item scored least often (a Monitor Evaluator item) was scored 3149 times and the item scored most often (a Teamworker item) was scored 11,498 times. Across the whole data set the numbers of respondents who had scored all seven items in a scale ranged from 257 for the Plant scale to 1229 for the Shaper scale.

Single Factor Models

For the seven-item scales, reasonable fit as judged through non-significant p values for C was obtained for Completer Finisher, Co-ordinator, Monitor Evaluator and Teamworker. Using C/df and RMSEA as a guide, Plant, Resource Investigator and Shaper also fitted reasonably well although Resource Investigator items 5 and 6 had non-significant loadings (p > .05) onto the latent factor. Implementer and Specialist scale data appeared to fit least well and for the Specialist scale item five also had a non-significant loading. α's were .7 or above for the Co-ordinator, Plant, Resource Investigator, Shaper and Teamworker scales.

Two-Factor Models

All two-factor models showed improved fit over one-factor models with all but the Implementer, Shaper and Specialist scales yielding non-significant p values for C. However, the Implementer scale showed other good indicators (CFI = .91, RMSEA = .03). The intercorrelation, r, between the two factors in each scale was mostly moderate to strong. The lowest intercorrelation, .31, was for Completer Finisher whereas others ranged from .41 (Implementer) to .89 (Resource Investigator).

Factor loadings revealed a consistent pattern such that for Completer Finisher, Co-ordinator, Implementer, Shaper and Teamworker, the second and seventh items in each scale loaded onto the second factor. Two-item second factors were also observed for Monitor Evaluator (items 2 and 4), Plant (items 5 and 7) and Specialist (items 2 and 5). The only exception to this pattern, Resource Investigator, split into a four-item factor (items 1, 2, 3, 7) and a three-item factor for the best fitting model. The composite reliability of the smaller factors, however, was low and ranged from .35 to .59 except for Resource Investigator (.69) and Shaper (.72). Composite reliability of the remaining five-item factors (four items in the case of Resource Investigator) was .7 or above for five scales and .69 for Shaper, .61 for Implementer and Specialist, and .58 for Monitor Evaluator.

Parsimonious One-Factor Scales

In light of the weak properties shown by the small, two-item factors, further tests were conducted to find more parsimonious scales. With the small factors discarded, five-item unidimensional models were tested and this produced improvements for all scales although Implementer, Monitor Evaluator and Specialist continued to show poor reliability (α and composite reliability less than about .7). Although the data from the Resource Investigator scale split into a four-item and a three-item factor, the fit of a five-item model obtained by dropping items 5 and 6 was still good (C = 5.9, df = 5, p = .31, α = .73) (Table 1).

Table 1. Fit indices for team role scales


Team role and model   C     p     C/df   GFI   NFI   IFI   CFI   RMSEA   RMSEA L–H   PCLOSE   r     α     cr    n
CF 1F7 17.7 .22 1.26 .98 .77 .94 .93 .024 0–.040 .920 .63 .67 464
CF 2F 14.7 .33 1.13 .98 .81 .97 .97 .016 0–.05 .947 .31
CF 1F5 8.9 .11 1.77 .99 .85 .93 .92 .041 0–.084 .575 .70 .72
CO 1F7 22.5 .07 1.61 .96 .69 .85 .83 .040 0–.069 .682 .70 .74 385
CO 2F 15.4 .28 1.19 .97 .79 .96 .95 .022 0–.058 .51
CO 1F5 8.5 .13 1.17 .98 .85 .93 .92 .043 0–.091 .530 .73 .79
IMP 1F7 62.6 .000 4.47 .95 .61 .67 .65 .059 .044–.074 .153 .57 .57 1010
IMP 2F 25.1 .022 1.94 .98 .84 .92 .91 .030 .011–.048 .967 .41
IMP 1F5 16.5 .006 3.3 .98 .84 .89 .88 .048 .023–.074 .510 .58 .61
ME 1F7 22.4 .07 1.6 .97 .66 .84 .81 .041 0–.071 .655 .64 .66 363
ME 2F 11.8 .55 .90 .98 .82 1.02 1.0 .000 0–.048 .960 .67
ME 1F5 4.5 .48 .90 .99 .91 1.01 1.0 .000 0–.069 .834 .57 .58
PL 1F7 27.5 .016 1.97 .93 .69 .82 .80 .061 .026–.095 .260 .78 .78 257
PL 2F 19.8 .10 1.52 .95 .78 .91 .90 .045 0–.083 .537 .70
PL 1F5 10.2 .07 2.05 .97 .84 .91 .90 .064 0–.120 .284 .76 .75
RI 1F7 25.6 .029 1.83 .95 .49 .68 .60 .049 .015–.078 .484 .78 .48 347
RI 2F 13.1 .11 1.64 .97 .66 .83 .78 .043 0–.083 .560 .89
RI 1F4 5.1 .08 2.53 .96 .67 .77 .68 .067 0–.141 .265 .69 .70
SH 1F7 57.1 .000 4.1 .96 .75 .80 .79 .050 .037–.064 .470 .71 .70 1229
SH 2F 38.0 .000 2.9 .98 .83 .88 .88 .040 .025–.054 .868 .72
SH 1F5 15.8 .007 3.1 .99 .90 .93 .93 .042 .020–.066 .676 .69 .69
SP 1F7 44.0 .000 3.1 .90 .53 .62 .59 .090 .060–.120 .015 .61 .63 268
SP 2F 27.0 .013 2.08 .94 .71 .83 .81 .063 .028–.097 .229 .44
SP 1F5 6.5 .26 1.3 .98 .88 .97 .97 .034 0–.090 .585 .59 .61
TW 1F7 11.9 .61 .85 .99 .92 1.01 1.0 .0 0–.034 .997 .71 .72 612
TW 2F 8.7 .80 .67 .99 .94 1.03 1.0 .0 0–.026 .999 .78
TW 1F5 4.0 .54 .81 .99 .97 1.01 1.0 .0 0–.058 .948 .71 .72

Notes: CF 1F7, seven-item, one-factor model for the Completer Finisher scale; CF 2F, seven-item, two-factor model; CF 1F5, five-item, one-factor model; α, Cronbach's α; r, factor intercorrelation for the two-factor models; cr, composite reliability. RMSEA L–H gives the lower and higher limits of a 90% confidence interval on the population value of RMSEA.


Discussion

These results give a much clearer insight into the nature of the TRSPI's scales than was available previously. The correspondence between items and scales and the wording of individual scale items were explicit in the original eight-role version but are masked in the computerized nine-role version. However, we were able to match our factor results to item wording for each scale in order to explore the factor structures obtained. One general explanation for the better fitting bi-dimensional structures is the use in most scales of some negatively worded items, or item wording that has negative connotations, in contrast to a majority of unequivocally positively worded items.

Within the Co-ordinator scale, factor 1 loaded positive statements about getting agreement whereas factor 2 loaded items containing "not" or "cannot". Implementer factor 1 loaded positive statements about making things happen; factor 2 loaded items containing "I am not at ease . . ." or "I find it difficult to . . .". Monitor Evaluator factor 1 loaded items reflecting rational approaches to situations whereas factor 2 loaded items containing the wording ". . . makes it difficult for me . . ." and ". . . to refute unsound propositions . . .", the second of which is akin to double-negative wording. Plant factor 1 loaded imagining and innovating and factor 2 loaded an item with "I am sometimes poor at . . .", i.e., wording with a negative connotation.

A second explanation for the better fitting two-factor structures is that item wording for some scales appears to tap slightly different content areas. Completer Finisher factor 1 loaded diligence and urgency whereas factor 2 loaded negative connotations about communicating with others. Resource Investigator factor 1 loaded creating ideas whereas factor 2 captured items about making new contacts. Shaper factor 1 loaded self-assertion whereas factor 2 loaded dominating behaviour. Specialist factor 1 loaded sensitivity to contributing to discussions and factor 2 loaded items about having and using specialist knowledge. Teamworker factor 1 is concerned with creating relationships and factor 2 is concerned with sensitivity to others; factor 2 items also have a negative wording connotation.

These properties help to explain the findings of a recent review of empirical studies that have used the TRSPI, which showed that team roles could be classified as opposing two-by-two pairings or in two well-differentiated groups (Aritzeta et al., in press). In the two-by-two classifications Plant contrasted with Teamworker and Implementer, and Shaper contrasted with Teamworker and Monitor Evaluator. Less clear were distinctions between Co-ordinator and Specialist, and between Monitor Evaluator and Resource Investigator. Two broad clusters were also detected. One is formed by Teamworker, Implementer and Completer Finisher, and in part Monitor Evaluator. In the second cluster, Shaper and Plant, and in some respects Resource Investigator, appear together. The Specialist role did not show a clear association with either of these clusters. The present results support previous studies (Swailes & McIntyre-Bhatty, 2002, 2003) in showing that the Specialist and Implementer scales have the weakest properties, and this may explain why the Specialist role does not appear to associate with other roles in cross-validation studies. This explanation, however, does not hold for the Implementer role, which does appear to associate with other roles. The alternative explanation that the Specialist role is, by definition, a relatively independent role and as such would not be expected to correlate with other roles cannot be discounted.

Conclusions

This paper adds to the literature on the TRSPI as it is one of very few studies that looks into the content of the nine scales with a sample large enough to guarantee sufficiently high variability. Most papers that evaluate the TRSPI, in contrast, look at relationships between the roles or between the roles and other constructs. The broadly consistent factor structures observed suggest that our initial premise about the levels of ipsativity being small, at least among scores that lead to the computation of preferred/natural team roles, is upheld such that meaningful analysis of intrascale properties is possible. If levels of interdependency among the data had been high then it is unlikely that the consistent two-factor results would have been seen.

By virtue of the computerized format of the nine-role TRSPI, users have to respond to all 70 items and the issues identified above cannot easily be overcome simply by avoiding certain items. The implications of this for users depend upon the use being made, however. First, for team development situations where the purpose is to stimulate discussions about team roles and team development in a management development context, the instrument can continue to play a valuable role bearing in mind that the Observer Assessment Sheet should also be used in such instances. Second, the TRSPI operationalises the team role balance hypothesis (Belbin, 1993) which in essence states that high performing teams require a balance of team roles to be present. Where organizations use the TRSPI to assign roles to individuals as a forerunner to team formation then this study suggests that extra care is needed over Specialist and Implementer role assignment.

Third, where the TRSPI is being used to rank team roles, perhaps for association with other variables in a research study, users need to be more cautious as the various scales show a range of properties and varying degrees of potential for improvement. On the basis of reliability estimates we suggest that, as the instrument stands, the "bottom line" is that the majority of the scales have at least adequate properties but that the Specialist and Implementer scales appear able to benefit from further development.


Until this takes place, research studies should treat findings relating to the Specialist and Implementer roles with particular caution.

This paper helps to form a more complete picture of the TRSPI and its properties. The results of this study are somewhat mixed in that, while they do not demonstrate nine robust scales, it is clear that a majority of the scales show acceptable properties.

References

Arbuckle, J.L. (2003) Amos 5.0 update to the Amos user's guide. Chicago: SmallWaters Corporation.
Aritzeta, A., Swailes, S. and Senior, B. (in press) The team role self-perception inventory: Development, validity and applications for team building. Journal of Management Studies.
Bagozzi, R.P. (1994) Structural equation models in marketing research: Basic principles. In R.P. Bagozzi (Ed.), Principles of marketing research (pp. 317–385). Oxford: Blackwell.
Beck, D., Fisch, R. and Bergander, W. (1999) Functional roles in work groups – an empirical approach to the study of group role diversity. Psychologische Beiträge, 41, 288–307.
Belbin, M. (1981) Management teams, why they succeed or fail. London: Heinemann.
Belbin, M. (1993) Team roles at work. Oxford: Butterworth-Heinemann.
Belbin, M., Aston, R. and Mottram, D. (1976) Building effective management teams. Journal of General Management, 3, 23–29.
Bentler, P.M. and Bonnett, D.G. (1980) Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588–606.
Bollen, K.A. (1989) A new incremental fit index for general structural equation models. Sociological Methods and Research, 17, 303–316.
Broucek, W.G. and Randell, G. (1996) An assessment of the construct validity of the Belbin Self-Perception Inventory and Observer's Assessment from the perspective of the five-factor model. Journal of Occupational and Organizational Psychology, 69, 389–405.
Browne, M.W. and Cudeck, R. (1993) Alternative ways of assessing model fit. In K.A. Bollen and J.S. Long (Eds), Testing structural equation models. Newbury Park, CA: Sage.
Davis, J., Millburn, P., Murphy, T. and Woodhouse, M. (1992) Successful team building: How to create teams that really work. London: Kogan Page.
Dulewicz, V. (1995) A validation of Belbin's team roles from 16PF and OPQ using bosses' ratings of competence. Journal of Occupational and Organizational Psychology, 68, 81–99.
Dunlap, W.P. and Cornwell, J.M. (1994) Factor analysis of ipsative measures. Multivariate Behavioural Research, 29, 115–126.
Furnham, A., Steele, H. and Pendleton, D. (1993a) A psychometric assessment of the Belbin Team-Role Self-Perception Inventory. Journal of Occupational and Organizational Psychology, 66, 245–257.
Furnham, A., Steele, H. and Pendleton, D. (1993b) A response to Dr Belbin's reply. Journal of Occupational and Organizational Psychology, 66, 261.
Hicks, L.E. (1970) Some properties of ipsative, normative and forced-choice normative measures. Psychological Bulletin, 74, 167–184.
Hu, L. and Bentler, P.M. (1999) Cutoff criteria for fit indices in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling, 6, 1–55.
Margerison, C. and McCann, D. (1990) Team management. London: W.H. Allen.
Meade, A.W. (2004) Psychometric problems and issues involved with creating and using ipsative measures for selection. Journal of Occupational and Organizational Psychology, 77, 531–552.
Medsker, G.J., Williams, L.J. and Holahan, P.J. (1994) A review of current practices for evaluating causal models in organizational behaviour and human resources management research. Journal of Management, 20, 439–464.
O'Doherty, D.M. (2005) Working as part of a balanced team. International Journal of Engineering Education, 21, 113–120.
Parker, G.M. (1990) Team players and teamwork: The new competitive business strategy. Oxford: Jossey-Bass.
Saville, P. and Willson, E. (1991) The reliability and validity of normative and ipsative approaches in the measurement of personality. Journal of Occupational Psychology, 64, 219–238.
Senior, B. (1997) Team roles and team performance: Is there 'really' a link? Journal of Occupational and Organizational Psychology, 70, 241–258.
Senior, B. (1998) An empirically-based assessment of Belbin's team roles. Human Resource Management Journal, 8, 54–60.
Senior, B. and Swailes, S. (1998) A comparison of the Belbin self perception inventory and observer's assessment sheet as measures of an individual's team roles. International Journal of Selection and Assessment, 6, 1–8.
Spencer, J. and Pruss, A. (1992) Managing your team. London: Piatkus.
Swailes, S. and McIntyre-Bhatty, T. (2002) The "Belbin" team role inventory: Reinterpreting reliability estimates. Journal of Managerial Psychology, 17, 529–536.
Swailes, S. and McIntyre-Bhatty, T. (2003) Scale structure of the team role self perception inventory. Journal of Occupational and Organizational Psychology, 76, 525–529.


Appendix A

Table A1. Team role descriptors, strengths and allowed weaknesses

Completer Finisher (CF)
  Descriptors: Anxious, conscientious, introvert, self-controlled, self-disciplined, submissive and worrisome
  Strengths: Painstaking, conscientious, searches out errors and omissions, delivers on time
  Allowed weaknesses: Inclined to worry unduly. Reluctant to delegate

Implementer (IMP)
  Descriptors: Conservative, controlled, disciplined, efficient, inflexible, methodical, sincere, stable and systematic
  Strengths: Disciplined, reliable, conservative and efficient, turns ideas into practical actions
  Allowed weaknesses: Somewhat inflexible. Slow to respond to new possibilities

Teamworker (TW)
  Descriptors: Extrovert, likeable, loyal, stable, submissive, supportive, unassertive and uncompetitive
  Strengths: Co-operative, mild, perceptive and diplomatic, listens, builds, averts friction, calms the waters
  Allowed weaknesses: Indecisive in crunch situations

Specialist (SP)
  Descriptors: Expert, defendant, not interested in others, serious, self-disciplined, efficient
  Strengths: Single-minded, self-starting, dedicated; provides knowledge and skills in rare supply
  Allowed weaknesses: Contributes on a narrow front only. Dwells on technicalities

Monitor Evaluator (ME)
  Descriptors: Dependable, fair-minded, introverted, low drive, open to change, serious, stable and unambitious
  Strengths: Sober, strategic and discerning, sees all options, judges accurately
  Allowed weaknesses: Lacks drive and ability to inspire others

Co-ordinator (CO)
  Descriptors: Dominant, trusting, extrovert, mature, positive, self-controlled, self-disciplined and stable
  Strengths: Mature, confident, a good chairperson, clarifies goals, promotes decision making, delegates well
  Allowed weaknesses: Can be seen as manipulative. Offloads personal work

Plant (PL)
  Descriptors: Dominant, imaginative, introvert, original, radical-minded, trustful and uninhibited
  Strengths: Creative, unorthodox, solves difficult problems
  Allowed weaknesses: Too preoccupied to communicate effectively

Shaper (SH)
  Descriptors: Abrasive, anxious, arrogant, competitive, dominant, edgy, emotional, extrovert, impatient, impulsive, outgoing and self-confident
  Strengths: Challenging, dynamic, thrives on pressure, has drive and courage to overcome obstacles
  Allowed weaknesses: Prone to provocation. Offends people's feelings

Resource Investigator (RI)
  Descriptors: Diplomatic, dominant, enthusiastic, extrovert, flexible, inquisitive, optimistic, persuasive, positive, relaxed, social and stable
  Strengths: Extrovert, communicative, explores opportunities, develops contacts
  Allowed weaknesses: Over-optimistic. Loses interest after initial enthusiasm

Source: Belbin (1993, p. 22).
