
International Journal of Selection and Assessment

Volume 16 Number 3 September 2008

HR Professionals' Beliefs About, and Knowledge of, Assessment Techniques and Psychometric Tests
Adrian Furnham
Department of Psychology, University College London, London, UK. a.furnham@ucl.ac.uk

In all, 255 adult professionals concerned with selection, assessment and training completed a questionnaire that asked about their beliefs concerning the validity, cost, practicality and legality of different assessment techniques (e.g., Assessment Centres, Biodata, Interviews) and their knowledge and use of both personality and ability tests. Participants tended to be positive about the tests themselves, how they were used, and about test publishers. They rated Assessment Centres, Cognitive Ability Tests and Work Samples as the most valid, while Interviews were rated as most practical. Results on knowledge of personality and intelligence tests indicated that only a few tests were widely known, more so in personality/motivation than in intelligence. Implications of these results for educating and informing practitioners are considered.

1. Introduction

Psychometric tests of ability and personality have long been used in clinical, educational, industrial and organisational settings to facilitate decision making (Anderson & Cunningham-Snell, 2000; Bartram, 2004; Hambleton & Oakland, 2004; Jeanneret & Silzer, 2000; Klehe, 2004; Oakland, 2004; Ones & Anderson, 2002; Ones & Viswesvaran, 1998). Various recent reviews have looked at trends and changes in their use (Kwiatkowski, 2003; Lievens, van Dam, & Anderson, 2002; Ryan & Sackett, 1988; Silzer & Jeanneret, 2000; Te Nijenhuis, Voskuijl, & Schijve, 2001), at the use of new technologies (Chapman & Webster, 2003), and at how applicants view these procedures (Hausknecht, Day, & Thomas, 2004).
There have been various studies of practitioners' perceptions and uses of tests (Brown, 1999; Ryan & Sackett, 1988). For instance, a British study (Hodgkinson, Daley, & Payne, 1995) of 176 UK employees showed the following rank order from most (always) to least (never) used methods: Interview, References, Application Form, Ability Test, Personality Test, Assessment Centre, Structured Interview, Biodata. A similar American study (Rynes, Orlitzky, & Bretz, 1997) of 251 employers, however, showed a rather different pattern: References, Structured Interview, Drug Test, School/University Grades, Interview, Work Trial, Work Sample, Ability Test, Personality Test, Assessment Centre, Biodata. Overall, however, an unstructured interview, references and some application form data seem to be collected for nearly every selection task (Cook, 2004). Paradoxically, these have been shown to be some of the least valid ways to assess people. The American Management Association published an important review in 2001 which showed that just under a third of organisations used tests for various purposes.
This study attempted to look at HR practitioners' beliefs about, knowledge of, and use of both psychometric power (ability) and preference (personality) tests. It was a survey of a large British population of HR test users. It hoped to give a representative overview of the beliefs and practices of mainly Human Resource practitioners working in average- to large-sized British companies who used tests mainly for assessment, selection and development purposes.

© 2008 The Author. Journal compilation © 2008 Blackwell Publishing Ltd,
9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main St., Malden, MA, 02148, USA

2. Method
2.1. Participants
A total of 255 participants responded to invitations to complete the survey either on paper or on the internet. The vast majority (88%) were British-based HR practitioners in large companies employing over 250 people. The average age was 40.23 years (SD = 11.37). The sample consisted of 48.4% males and 51.6% females. One hundred and forty-eight respondents had Level A Certificates (or equivalent), with many of these having Level B as well. In all, 81% of the sample was from the United Kingdom, with the remainder from a variety of mainly European countries. To be included in the survey, respondents had to be responsible for, and/or regularly involved in, personnel selection.

2.2. Survey
This was divided essentially into two parts.
The first section was itself divided into two parts. The first part was a grid listing 12 methods of assessing people, including the most well-known and widely used methods (assessment centres, interviews, references). Respondents were required to rate each method on four criteria that seemed most appropriate for practitioners. The second part asked people simply to report their experience of, and qualifications for, using various types of tests.
The second section was also divided into two parts. The first consisted of 21 personality and motivational tests commonly used by consultants, HR specialists and organisational psychologists in selection and assessment. The list was derived from 20 interviews with suppliers and users and from past surveys of test use. It aimed to be comprehensive, reflecting test usage in Great Britain; inevitably, some less well-known and less used tests were not included. The second part listed 19 aptitude, ability and intelligence tests, and the same procedure was used to decide which should or should not be included. This part of the questionnaire also provided space for respondents to list the names of tests that were not mentioned. Tests of both types were rated on six dimensions: three Yes/No (Have you heard of this test? Have you completed this test? Does your organisation use this test?) and three on a 10-point scale (1 = low, 10 = high): How valid do you rate this test? How useful is this test for selection? How useful is this test for development?


2.3. Administration
The questionnaire was sent out in NovemberDecember 2004. Responses arrived soon after but the last
questionnaire was included in March 2005. Questionnaires were sent both by post and electronically (internet) depending on how easy it was to contact
individuals. Some respondents replied by post, some
downloaded and completed a paper questionnaire, and
some responded via the internet.
Individuals were contacted essentially via two means.
Firstly, through a consortium specialising in HR practice
and secondly through a network of consultants, academics and test publishers. They were asked to give it
to colleagues where they thought they were appropriate. Hence it is difficult to specify the response rate.
It is estimated 500600 questionnaires were targeted at
specific people. Over 220 were returned which gives a
response rate between 30% and 40%.

3. Results and discussion


MANOVAs were computed over the data in the three tables, comparing the responses of the majority of participants (from Britain) with those of the remainder from the rest of the world combined. None of the three showed significant differences, hence the groups were combined.
1. Results from rating the different techniques are shown in Table 1.
Overall, respondents tended to be very positive about tests. They tended to endorse all the statements suggesting that tests were efficient and effective at assessment, and to reject all those suggesting the opposite. Interestingly, they had divided opinions about the cost of the tests.
A. Validity: Three techniques were thought to have high validity: Assessment Centres, Cognitive Ability Tests and Work Samples. Least valid were judged to be: Personal Hunch, References and Biodata.
B. Cost: The techniques could be categorised into high, middle and low cost. High: Assessment Centres, Personality Tests. Medium: 360° Appraisal Data, Cognitive Ability Tests, Work Samples, Interviews, Biodata. Low: Personal Hunch, Educational Qualifications, References, Job Knowledge, Peer Ratings.
C. Practicality: There was little range in these ratings. The interview was judged the most practical and 360° appraisal data the least.
D. Legality: Once again there was relatively little variability, with Assessment Centres and Interviews getting the highest legality ratings, and peer ratings, biodata and (of course) personal hunch the lowest.
If one were to take these four criteria as important and sum them (unweighted), the rank order is as


Table 1. Rating of 12 techniques on four criteria*: means (standard deviations in parentheses)

Technique                         A. Validity    B. Cost        C. Practicality  D. Legality
1.  Interview                     3.11 (1.04)    2.99 (1.06)    3.83 (.86)       3.61 (1.02)
2.  Reference                     2.23 (1.06)    1.71 (.97)     3.37 (1.12)      2.95 (1.16)
3.  Peer ratings                  3.08 (.98)     2.39 (1.03)    2.74 (1.01)      2.56 (1.10)
4.  Biodata                       2.80 (.99)     2.58 (1.18)    2.94 (1.12)      2.76 (1.17)
5.  Cognitive Ability Tests       3.90 (.81)     3.41 (.95)     3.20 (.86)       3.37 (.97)
6.  Personality Tests             3.55 (.82)     3.56 (.91)     3.25 (.85)       3.23 (.96)
7.  Assessment Centres            4.03 (.88)     4.42 (.95)     2.71 (1.07)      3.70 (.95)
8.  Work sample                   3.90 (.86)     3.07 (1.01)    3.00 (1.02)      3.51 (1.07)
9.  Job Knowledge                 3.65 (.87)     2.27 (.99)     3.49 (.93)       3.47 (1.07)
10. Educational Qualifications    3.13 (.95)     1.64 (.97)     3.69 (1.11)      3.43 (1.19)
11. 360° Appraisal Data           3.56 (.91)     3.46 (1.07)    2.73 (1.08)      3.03 (1.05)
12. Personal hunch                1.83 (1.02)    1.39 (.93)     3.03 (1.61)      1.53 (1.00)

*Criteria rated from 1 (Not at all) to 5 (Very).

Experience and qualifications (6 = A great deal to 1 = Very little)

Question                                                        6 (%)  5 (%)  4 (%)  3 (%)  2 (%)  1 (%)  M     SD
What is your level of experience in using Ability Tests?        9.2    12.4   12.0   23.6   20.0   22.8   4.04  1.57
What is your level of experience in using Personality Tests?    6.8    10.01  11.2   18.2   22.9   30.5   4.33  1.56
What qualifications do you have that are relevant to testing?   18.7   7.1    10.6   17.4   22.0   24.9   3.93  1.81

follows: Assessment Centres 14.86%; Cognitive Ability Tests 13.88%; Personality Tests 13.59%; Interview 13.54%; Work Sample 13.44%; Job Knowledge 12.8%; 360° Appraisal Data 12.78%; Educational Qualifications 11.89%; Biodata 11.0%; Peer Ratings 10.77%; References 9.65%; Personal Hunch 7.78%.
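The unweighted summing just described is simple arithmetic over the Table 1 means. A minimal sketch (using a subset of the Table 1 figures; illustrative only, not the author's code):

```python
# Unweighted ranking of techniques: sum each technique's four criterion
# means (validity, cost, practicality, legality) from Table 1 and sort.
means = {
    "Assessment Centres": (4.03, 4.42, 2.71, 3.70),
    "Cognitive Ability Tests": (3.90, 3.41, 3.20, 3.37),
    "Personality Tests": (3.55, 3.56, 3.25, 3.23),
    "Interview": (3.11, 2.99, 3.83, 3.61),
    "Personal hunch": (1.83, 1.39, 3.03, 1.53),
}

totals = {name: round(sum(vals), 2) for name, vals in means.items()}
ranked = sorted(totals, key=totals.get, reverse=True)
print(ranked[0], totals[ranked[0]])  # Assessment Centres 14.86
```

Assessment Centres come out on top (14.86) and Personal Hunch last (7.78), matching the rank order reported above.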
A series of correlations was then computed between the four ratings for each method. They varied considerably but tended to be low, positive and around r = .20. Correlations were then computed collapsing across the 12 methods. The lowest was between cost and practicality (r = .13) and the highest between practicality and legality (r = .43). Validity ratings correlated r = .24 with cost, r = .37 with practicality and r = .39 with legality. The final correlation, between cost and legality, was r = .27. This shows that participants were exercising reasonable discriminant validity in their ratings.
Next, two analyses were done to see whether these correlations were moderated by individual difference variables, namely gender, organisation and the size of the organisation participants belonged to. Partial correlations, partialling out each variable in turn and then in combination, suggested no evidence of moderation. Next, the six correlations were computed (as above) for each individual, and these averaged correlations were themselves correlated with the three demographic variables under consideration. None, however, were significant.
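The moderation check rests on the standard first-order partial correlation formula; a small sketch (the control-variable values below are hypothetical, chosen only to show the "no moderation" pattern):

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If a control variable (e.g. gender) is essentially unrelated to both
# ratings, partialling it out barely moves the correlation -- the
# pattern reported above. r_xy is the practicality-legality correlation
# from the text; the near-zero control links are hypothetical.
r_xy = 0.43
r_xz = r_yz = 0.05
print(round(partial_r(r_xy, r_xz, r_yz), 3))  # 0.429
```

With weak links to the control variable, the partial correlation stays close to the zero-order value, which is what "no evidence of moderation" means in practice.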
Four observations could be made about these results. Overall, the rank order reflects the scientific data on the usefulness and validity of these different techniques. Validity and cost seemed quite closely related and showed numerous differences, while practicality and legality did not differentiate as much as the former two ratings. It was not clear why references were rated so differently from peer ratings and 360° appraisal data, which are all based on observer ratings. Perhaps they differ most in their response mode and structure: traditional references are unstructured and written, while 360° ratings are highly structured and verbal. Personality and ability tests received very similar scores, despite the fact that the literature suggests the latter are better than the former.
2. Results from the sections on test knowledge are shown in Tables 2 and 3.
This part of the questionnaire contained two sections: the first listed 21 personality-type tests, the second 19 cognitive ability tests. Respondents rated each test on six questions (three yes/no; three on a 10-point scale).

3.1. Personality tests


In all, only 50% of the respondents claimed to have heard of eight of these tests. Thus, in order of their fame (for this sample), the following had at least been heard of: 16PF; Myers–Briggs; OPQ; Belbin Team Role; EPI/Q/P; Big Five (NEO-PI-R); FIRO; OSI. Three had not even been heard of by a fifth (<20%) of the respondents (PRISM, PASAT, Orpheus). More than 50% of the respondents had taken only four of these tests: MBTI, 16PF, Belbin, OPQ. For over half of the tests (11 in fact), less than a fifth (<20%) had themselves taken


Table 2. Six ratings on 21 personality/motivation tests (Heard of / Completed / Organisation uses: % Yes; Validity, Selection and Development rated 1–10, means with SDs in parentheses)

Personality/Motivation Tests                                  Heard  Compl.  Org use  Validity     Selection    Development
1.  16 Personality Factor Questionnaire (16PF)                85.5   63.9    28.6     6.94 (2.04)  6.16 (2.22)  6.59 (2.20)
2.  Bar-On Emotional Quotient Inventory (EQ-i)                40.3   13.7    9.1      4.89 (2.24)  4.12 (2.25)  5.51 (2.41)*
3.  Belbin Team Role Inventory                                71.8   59.3    39.8     5.49 (2.04)  3.55 (2.42)  6.24 (2.16)
4.  Big Five (NEO-PI-R)                                       62.5   41.5    22.9     7.20 (1.99)  6.37 (2.15)  6.49 (2.32)
5.  California Personality Inventory (CPI)                    46.3   23.0    7.5      6.87 (2.68)  5.42 (2.52)  5.71 (2.60)*
6.  Corporate Culture Questionnaire                           28.2   9.3     4.6      5.29 (2.27)  3.81 (2.49)  4.90 (2.41)*
7.  Eysenck Personality Tests (EPI, EPQ, EPP)                 62.8   27.1    7.1      6.44 (2.28)  5.25 (2.27)  5.42 (2.34)
8.  Fundamental Interpersonal Relations
    Orientation-Behaviour (FIRO-B)                            61.3   39.9    31.3     6.35 (2.18)  4.55 (2.50)  6.71 (2.39)
9.  Hogan Personality Questionnaires (HPI, HDS)               46.4   24.6    15.9     6.37 (2.25)  5.85 (2.41)  6.19 (2.44)
10. Kirton Adaptor/Innovator Test (KAI)                       26.3   8.5     3.3      5.12 (2.29)  4.20 (2.13)  5.23 (2.18)*
11. Motivational Appraisal of Personnel Potential (MAPP)      25.1   5.7     3.3      5.31 (2.28)  4.98 (2.17)  5.23 (2.40)*
12. Motivational Questionnaire                                40.5   19.4    11.7     6.00 (2.32)  5.05 (2.49)  6.03 (2.46)*
13. Myers–Briggs Type Indicator (MBTI)                        84.2   72.2    55.6     6.92 (2.14)  3.82 (2.68)  7.25 (2.37)
14. Occupational Personality Questionnaire (OPQ)              79.8   52.6    34.7     6.98 (2.01)  6.12 (2.19)  6.03 (2.22)
15. Occupational Stress Inventory (OSI)                       51.4   12.1    6.3      5.59 (2.43)  4.44 (2.48)  5.33 (2.50)*
16. Orpheus Personality Test                                  18.2   2.4     1.7      5.12 (2.53)  4.94 (2.59)  4.69 (2.69)**
17. PASAT Sales Personality Test                              19.2   5.3     1.3      5.24 (2.15)  5.15 (2.44)  4.87 (2.50)**
18. Perception and Preference Inventory (PAPI)                31.3   12.1    2.9      5.08 (2.31)  4.71 (2.34)  5.03 (2.46)*
19. Personal Profile Analysis (PPA)                           28.3   10.9    6.3      4.87 (2.41)  5.66 (2.47)  4.70 (2.36)*
20. PRISM Team Preferences Questionnaire                      13.4   7.3     1.6      4.54 (2.36)  4.35 (2.47)  4.58 (2.91)**
21. Self-Directed Search                                      38.6   30.6    29.4     4.85 (2.39)  4.31 (2.38)  4.89 (2.47)**

*Fewer than 100 (**fewer than 50) people answered the final three questions, indicating that few people really knew this test.

the test. Over half the respondents' organisations used the MBTI; a third used the OPQ, FIRO and Belbin. Fewer than 10% used 12 of the tests, suggesting that a few tests are frequently used and a number very rarely used.
People only answered the final three questions if they had heard of or completed the test. Ratings of validity ranged from 4.54 to 7.20. In all, five were above 6.50 (on a 10-point scale), suggesting moderate to high ratings of validity. They were, in rank order: Big Five, OPQ, 16PF, MBTI, CPI.
Usefulness-for-selection ratings had higher standard deviations. The range was from 3.55 to 6.16. The top five for usefulness were thought to be: Big Five, 16PF, OPQ, HPI, PPA, while the bottom five were: Belbin, Corporate Culture Questionnaire, MBTI, Bar-On Emotional Quotient and the KAI. For the last column (development) the range was lower, indicating that people were much more likely to differentiate between tests for selection vs development. The top five were: MBTI, FIRO, 16PF, Big Five, Belbin.
Overall, it seemed there was a positive relationship between the ratings. People had heard of more tests than they had personally completed, but the tests they had completed seemed to be the most well known and used. The results suggest there remains a handful, roughly four to six tests, that are well known and well rated. Overall, however, the results also indicate that a surprisingly large number of the respondents had not heard of many of the tests, nor did their organisations use them.

3.2. Aptitudes/ability/intelligence tests


There were fewer tests in this list, but overall they seemed less well known, and there was also less discrimination between them. The tests that were best known were: GMA, Watson–Glaser, Critical Reasoning, Ravens, AMT. However, it should be pointed out that more than 50% of the respondents knew about only two of these tests. There were only six tests that a third or more of the respondents had themselves completed: Watson–Glaser, Ravens, GMA, AMT, MGIB, CCAS. The results indicate that around a quarter to a third of respondents' companies used these tests. The


Table 3. Six ratings on 19 ability tests (Heard of / Completed / Organisation uses: % Yes; Validity, Selection and Development rated 1–10, means with SDs in parentheses)

Aptitude/Cognitive Ability Tests                  Heard  Compl.  Org use  Validity     Selection    Development
1.  Able Series of Intelligence Tests             38.3   30.6    29.4     6.41 (2.49)  6.26 (2.67)  4.13 (2.39)
2.  Advanced Managerial Tests (AMT)               42.2   33.5    30.6     6.56 (2.40)  6.15 (2.62)  4.34 (2.42)
3.  AH4                                           36.4   29.6    24.5     5.83 (2.56)  5.44 (2.57)  5.51 (2.32)
4.  AH6                                           36.0   27.9    24.9     5.78 (2.57)  5.41 (2.59)  5.55 (2.25)
5.  Applied Technology Test Series                27.3   23.0    24.3     5.19 (2.54)  4.85 (2.54)  3.97 (2.42)*
6.  Automated Office Battery                      29.4   21.4    12.5     5.43 (2.72)  5.20 (2.73)  4.29 (2.64)*
7.  Critical Reasoning Test Battery (CRTB)        45.3   33.7    29.5     6.71 (2.24)  6.46 (2.37)  4.82 (2.51)
8.  Customer Contact Aptitude Series (CCAS)       29.9   24.3    25.0     5.95 (2.70)  5.63 (2.65)  4.34 (2.28)*
9.  General Ability Test                          44.7   25.0    25.4     6.58 (2.27)  6.28 (2.36)  5.22 (2.45)
10. Graduate and Managerial Assessment (GMA)      51.6   39.9    35.2     6.80 (2.30)  6.62 (2.43)  4.70 (2.54)
11. Information Technology Test Series            25.8   23.9    24.2     5.87 (2.87)  5.48 (2.65)  4.53 (2.32)*
12. Management and Graduate Item Bank (MGIB)      36.5   33.3    31.4     6.35 (2.45)  6.02 (2.53)  4.31 (2.19)
13. Modern Occupational Skills Test               24.6   21.8    21.6     5.51 (2.57)  5.43 (2.61)  4.58 (2.42)*
14. NFER Ability Test                             30.3   22.2    23.3     5.37 (2.69)  5.09 (2.71)  4.11 (2.32)*
15. Personnel Test Battery (PTB)                  37.3   27.0    24.6     6.04 (2.63)  5.77 (2.61)  4.58 (2.52)*
16. Ravens Progressive Matrices                   42.6   36.6    28.0     6.40 (2.56)  5.76 (2.47)  4.32 (2.19)
17. Technical Test Battery                        29.5   23.9    25.4     5.76 (2.77)  5.63 (2.76)  4.29 (2.28)*
18. Watson–Glaser Critical Thinking Appraisal     50.4   42.4    39.0     6.76 (2.26)  6.37 (2.36)  4.68 (2.44)
19. Wonderlic Personnel Test                      30.0   22.7    21.7     5.39 (2.59)  5.19 (2.71)  4.27 (2.30)*

*Fewer than 50 of the people who knew about this test answered the final three questions.

three most widely used were Watson–Glaser, GMA and AMT. There was surprisingly little variability in the ratings. Again, differences between the groups were the result of academics having heard of more of the tests than others.
The motive behind this research was essentially to determine practitioners', rather than researchers', attitudes to, beliefs about, and knowledge of psychometric tests. Certainly the validity (predictive, construct, incremental) of different assessment methods has been hotly debated in the academic literature (Anderson & Cunningham-Snell, 2000; Cook, 2004). There still remain important differences in the rank order of various methods when comparing the results of different reviewers. Thus Anderson and Cunningham-Snell (2000) put Assessment Centres, then work samples, then ability tests as the top three predictors. Arnold et al. (2005), on the other hand, put structured interviews, peer ratings and mental ability tests as the top three in terms of predictive validity.
The use of psychometric tests by practitioners is a function of many things: their education and experience; the country they work in; test publisher marketing; popular articles about testing; litigation; and so on. Their knowledge and use of tests is based on very different criteria from those used by academic differential psychologists and psychometricians. Hence some tests remain very popular among practitioners (FIRO-B, MBTI, Belbin Team Role) despite being little used in research and frequently condemned by researchers in terms of their psychometric properties. This study attempted to go some way towards informing academics about the perceptions of those who use these tests.

References

American Management Association (2001) The 2001 AMA Survey on Workplace Testing. New York: AMA.
Anderson, N. and Cunningham-Snell, N. (2000) Personnel Selection. In: Chmiel, N. (ed.), Introduction to Work and Organizational Psychology. Oxford: Blackwell, pp. 69–99.
Arnold, J., Silvester, J., Patterson, R., Robertson, I., Cooper, C. and Burnes, B. (2005) Work Psychology: Understanding human behaviour in the workplace. Harlow: Prentice-Hall.
Bartram, D. (2004) Assessment in Organisations. Applied Psychology, 53, 237–259.
Brown, R. (1999) The Use of Personality Tests: A survey of usage and practice in the UK. Selection and Development Review, 15, 3–8.
Chapman, D. and Webster, J. (2003) The Use of Technologies in the Recruiting, Screening, and Selection Process for Job Candidates. International Journal of Selection and Assessment, 11, 113–120.
Cook, M. (2004) Personnel Selection: Adding value through people. Chichester: Wiley.
Hambleton, R. and Oakland, T. (2004) Advances, Issues and Research in Testing Practices around the World. Applied Psychology, 53, 155–156.
Hausknecht, J., Day, D. and Thomas, S. (2004) Applicant Reactions to Selection Procedures. Personnel Psychology, 57, 639–683.
Hodgkinson, G., Daley, N. and Payne, R. (1995) Knowledge of, and Attitudes towards, the Demographic Time Bomb. International Journal of Manpower, 16, 59–76.
Jeanneret, R. and Silzer, R. (2000) An Overview of Individual Psychological Assessment. In: Jeanneret, R. and Silzer, R. (eds), Individual Psychological Assessment. San Francisco: Jossey-Bass, pp. 3–26.
Klehe, U.-C. (2004) Choosing How to Choose: Institutional pressures affecting the adoption of personnel selection procedures. International Journal of Selection and Assessment, 12, 327–342.
Kwiatkowski, R. (2003) Trends in Organisations and Selection. Journal of Managerial Psychology, 18, 382–394.
Lievens, F., van Dam, K. and Anderson, N. (2002) Recent Trends and Challenges in Personnel Selection. Personnel Review, 31, 580–601.
Oakland, T. (2004) Use of Educational and Psychological Tests Internationally. Applied Psychology, 53, 157–172.
Ones, D. and Anderson, N. (2002) Gender and Ethnic Group Differences on Personality Scales in Selection. Journal of Occupational and Organisational Psychology, 75, 255–276.
Ones, D. and Viswesvaran, C. (1998) Integrity Testing in Organisations. In: Griffin, R., O'Leary-Kelly, A. and Collins, J. (eds), Dysfunctional Behaviour in Organisations. Greenwich, CT: JAI Press.
Ryan, A.M. and Sackett, P. (1988) Individual Assessment: The research base. In: Jeanneret, R. and Silzer, R. (eds), Individual Psychological Assessment. San Francisco: Jossey-Bass, pp. 54–87.
Rynes, S., Orlitzky, M. and Bretz, R. (1997) Experience Hiring Versus College Recruiting: Practices and emerging trends. Personnel Psychology, 50, 309–339.
Silzer, R. and Jeanneret, R. (2000) Anticipating the Future: Assessment strategies for tomorrow. In: Jeanneret, R. and Silzer, R. (eds), Individual Psychological Assessment. San Francisco: Jossey-Bass, pp. 445–477.
Te Nijenhuis, J., Voskuijl, O. and Schijve, N. (2001) Practice and Coaching on IQ Tests. International Journal of Selection and Assessment, 9, 302–306.

International Journal of Selection and Assessment


Volume 16 Number 3 September 2008

Vous aimerez peut-être aussi