
1978

Academy of Management Journal 1978, Vol. 21, No. 1, 129-135

Schneier and Beatty

129

THE INFLUENCE OF ROLE PRESCRIPTIONS ON THE PERFORMANCE APPRAISAL PROCESS

CRAIG ERIC SCHNEIER
University of Maryland

RICHARD W. BEATTY
University of Colorado

The appraisal of individual performance is a pervasive organizational issue. Performance appraisal (PA) results typically augment the rationale for several personnel decisions, such as promotion and wage and salary administration (see Cummings & Schwab, 1973). As Cummings (1973) noted, however, the design and use of PAs are often based on prescriptive suggestions rather than on empirical field research. One such intuitively appealing and often repeated prescription (see, e.g., Miner, 1971) advocates the use of multiple raters (i.e., superior, peer, subordinate, and/or self ratings) to evaluate an employee. The purpose of this paper is to examine the proposition that the role prescriptions of the multiple raters are a major determinant of raters' judgments.

There are several potential advantages to using multiple raters in the PA process. A psychometric advantage is that multiple raters permit the assessment of the convergent validity of multiple rating criteria through the multitrait-multirater matrix (Lawler, 1967). Content validity may also be enhanced, as observations of ratee performance may be increased (Borman, 1974), thus tapping more of the behavioral domain of the job. There may also be operational advantages, because the use of multiple raters widens the participation of relevant persons in the PA process and thus fosters interest and commitment.

Behind these potential advantages lies the question of whether raters occupying different roles should be expected to hold congruent perceptions of ratee performance. That is, persons with different role prescriptions within an organization may develop different expectations for ratees' performance based on their (the raters') own jobs. Among those who reported agreement on ratings across roles are Fogli, Hulin, and Blood (1971) and Kavanagh, MacKinney, and Wolins (1971).
Differences in ratings of a group of ratees by raters occupying different roles were reported in at least as many, and perhaps more, studies (e.g., Bernardin & Alvares, 1975; Heneman, 1974; Thornton, 1968; reviewed by Borman, 1974). With few exceptions (e.g., Borman, 1974), research has compared ratings gathered from occupants of different roles to assess interrater reliability and has thus assumed the ratings to be similar methods of measurement (e.g., Zedeck & Baker, 1972). Other studies have compared such ratings to assess convergent validity and hence have assumed them to be different methods of measurement (e.g., Heneman, 1974; Lawler, 1967). The assumption of similarity or difference between these methods of measurement is not made explicit. The issue of whether multiple rater perceptions should be expected to converge or diverge has thus been largely ignored in research.
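Both interpretations rest on the same coefficient: the correlation between two raters' scores for the same ratees. As a minimal sketch of that computation (with hypothetical ratings, not data from any study cited here), the Pearson coefficient can be obtained with only the standard library:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation between two raters' scores for the same ratees."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical overall ratings of six ratees by a superior and by a peer:
superior_ratings = [310, 325, 340, 355, 330, 345]
peer_ratings = [335, 350, 360, 380, 352, 370]
r = pearson_r(superior_ratings, peer_ratings)
print(round(r, 3))
```

Under the interrater reliability interpretation a high r is expected; under the convergent validity interpretation the same coefficient is read as agreement between different methods of measurement.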
The Influence of Role Prescriptions on Ratings

Divergence between the perceptions of raters occupying different roles would seem to come first from the job environment. Specifically, differences in job duties and proximity, causing differing frequencies and/or durations of observation of ratee performance, could account for divergent ratings given by, for example, superiors and peers. This thesis has some empirical support (Borman, 1974; Zedeck & Baker, 1972).

Differences in the behavior of raters across organizational levels could also be due to the influence of rater role prescriptions on perceptions of ratee performance. A role is that "set of prescriptions defining what the behavior of a position member should be" (Biddle & Thomas, 1966, p. 29). The notion of role prescriptions (i.e., guidelines for proper behavior by role incumbents) has been used as a powerful explanatory concept in general theories of interpersonal perception (e.g., Jones & Thibaut, 1958) as well as in theories of organizational behavior (e.g., Graen, 1976; Katz & Kahn, 1966). Prescriptions for raters could include the role of evaluator, judge, or critic of subordinates' (ratees') performance. In terms of Katz and Kahn's (1966) role episode, these roles are "sent" to supervisors acting as raters by their own superiors and by perceptions of organizational structure and policies. Peers of raters, on the other hand, are not typically "sent" such formal, evaluator role prescriptions. While supervisors respond from their critic roles, peers may respond from their roles as co-workers, colleagues, and friends of ratees.

In support of this logic, Zedeck, Imparato, Krausz, and Oleno (1974) reported that superiors rated the same behaviors as illustrative of lower levels of performance than did their subordinates. Schneier, Beatty, and Beatty (1976) found that supervisors perceived undesirable ratee behaviors as occurring more often than did the ratees' peers. Additional support is provided by Landy and Guion (1970), Kirchner (1965), and Barrett (1963).

But whether differing role prescriptions lead to divergent PA perceptions is unresolved. Conflicting research has appeared (e.g., Dickinson & Tice, 1973; Heneman, 1974), and, more importantly, research designs (e.g., Borman, 1974; Dickinson & Tice, 1973; Heneman, 1974; Schneier et al., 1976; Zedeck et al., 1974) have continually confounded the two possible reasons for divergence: environmental differences and role prescription differences. An empirical test designed to eliminate these confounding effects is described below.
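The comparisons in the empirical test that follows rest largely on t statistics, including the dependent-measures (paired-differences) form. As a minimal illustration (with hypothetical numbers, not the study's incident data), the statistic is the mean of the pairwise differences divided by its standard error:

```python
import math
import statistics

def paired_t(x, y):
    """Dependent-measures t: t = mean(d) / (stdev(d) / sqrt(n)),
    with df = n - 1, where d_i = x_i - y_i for matched pairs."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical scale values assigned to the same eight incidents
# by superiors and by subordinates:
superior_values = [5.1, 4.8, 6.0, 3.9, 5.5, 4.2, 5.8, 4.9]
subordinate_values = [4.7, 4.9, 5.4, 3.6, 5.1, 4.0, 5.2, 4.6]
t, df = paired_t(superior_values, subordinate_values)
print(round(t, 3), df)  # positive t here indicates higher superior values
```

Because each incident (or ratee) supplies one matched pair of observations, the paired form removes between-item variance that an independent-samples test would leave in the error term.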


An Empirical Test

Four sequential areas of inquiry guided the research: two tests of experimental controls and two research hypotheses. First, because the subjects had essentially the same primary job tasks or duties, it was assumed that any two groups of raters would generate essentially the same set of job dimensions upon which to evaluate the ratees. Second, an a priori assumption was made that the two rater groups had equal opportunities to observe ratee performance. If both of these assumptions were confirmed, the experimental controls would appear to be effective. With such controls, differences found either in expectations toward performance and/or in actual ratings would thus seem to come from basic differences in role prescriptions across levels and not from differences in primary tasks or observation frequency (i.e., from the job environment).

Hypotheses. Hypothesized differences across hierarchical levels in performance expectations could be tested as raters from two levels indicated the degree of performance illustrated by a group of critical incidents. It was also hypothesized that superiors would rate the ratees lower than peers would, due to their roles as evaluators or critics of performance and dispensers of merit raises and other rewards (see, e.g., Klimoski & London, 1974; Thornton, 1968).

Method

Sample. The research was conducted in a medium-sized manufacturing company. The two roles used were entry-level manufacturing workers (n = 74) and their immediate superiors (n = 15), who worked closely with their subordinates in teams.

Procedure. In order to assess perceptions of job-task similarity between the two roles, small groups of members from each role were asked to generate or "brainstorm" a list of PA criteria reflecting their job duties and then to weight them by degree of importance. Second, to assess the assumed similarity in frequency of observation of ratee performance across roles, members of each role indicated how often they perceived each of a set of 183 critical incidents as actually occurring among ratees. Third, each member of the two roles indicated, using a Likert scale, the degree of performance (excellent to unacceptable) illustrated by each of the 183 incidents. Finally, randomly selected ratees (n = 31) were each evaluated by three to five peers and two to three superiors chosen at random from those who had worked on the same shift as the ratee.

Results

The superior group developed 14 PA criteria reflecting their job duties. Given the same instructions, the subordinate group independently developed a list of 13 dimensions coinciding with the superiors' list with one exception. Randomly selected groups of 24 subordinates and 12 superiors


TABLE 1
Mean Ranks for Job Dimensions

Job Dimension       Mean Superior Rank (n = 12)   Mean Subordinate Rank (n = 24)   t Value
Drying                    2.6                           2.5                         -.139
System flow               6.5                           6.8                          .492
Record keeping            8.6                           8.0                          .884
Grinding                  2.0                           2.1                          .126
Attendance                8.9                           8.7                          .259
Centrifugation            5.6                           5.1                          .636
Filtration                5.6                           5.5                          .135
Initiative               11.1                          11.0                          .137
Reaction control          8.8                           8.6                          .252
Safety                   12.6                          12.2                          .435
Dependability             8.8                           9.9                        -1.149
Distillations             6.6                           7.0                          .574
Housekeeping              S.6                          11.0                        -1.782
Communication            11.0                          11.5                          .579

All tests were two-tailed; critical t for n of 40 = 2.021.

then ranked each of the 14 job dimensions for importance as PA criteria. Table 1 shows mean ranks for each of the 14 dimensions, as well as t-tests for differences between mean ranks. As none of the t-tests were significant, the two groups were assumed to be in agreement on the weighting of the job dimensions.

A t-test for dependent measures (paired differences) was used to assess divergence in the two groups' perceptions of how frequently the 183 behavioral incidents actually occurred on the job. The result of this test indicated general agreement across rater groups on the frequency of observed behavior, t(182) = .270, ns. A t-test for dependent measures was again used to assess superior-subordinate differences in the scale values assigned to the behavioral incidents. The test indicated a difference between the groups, t(182) = 2.137, p < .025. Superiors were more lenient in that they assigned higher scale values to incidents indicative of subordinate performance than did the subordinates themselves (i.e., they felt the behaviors were illustrative of better performance than did their subordinates).

Mean ratings given each of the 31 ratees by the superior and peer raters at two time periods four months apart were used as data points in a two-factor, repeated measures analysis of variance (ANOVA). A significant rater role effect (F(1, 90) = 55.894, p < .001) and a significant time period effect (F(1, 90) = 16.152, p < .001) were found. The interaction effect was not significant (F(1, 90) = .353, ns). The table of means for these data (Table 2) indicates that, as predicted, superiors gave lower ratings at both time periods than peers.

TABLE 2
Mean Performance Ratings Across Raters and Time Periods

Time Period    Superiors    Peers
1              324.3        348.1
2              337.9        358.3

Time periods were separated by a four-month interval.

Discussion and Conclusion

Findings of this research indicate that (a) given a priori assumptions of similarities in job tasks, members of two different roles relative to ratees generally agree as to their identification of PA criteria and the weights of those criteria; (b) given a priori assumptions of similarity in frequency of observation of ratees, members of two roles generally agree as to their perceptions of the frequency of occurrence of specific ratee behaviors; (c) given equal job tasks and observation frequency, occupants of two different roles disagree as to their perceptions of the behaviors desired or expected for successful performance; and (d) given equal job tasks and observation frequency, occupants of two different roles disagree as to their actual ratings of ratees.

The divergent perceptions of expected performance held by occupants of two different roles, as well as their divergent actual ratings, both point to a fundamental difference in role prescriptions across levels which, if supported by further research, would seem to have important implications for the psychometric and operational aspects of the PA process. The divergence found here could in part explain the disappointing interrater reliability and convergent validity evidence found in psychometric studies employing members of two different organizational levels as raters (e.g., Lawler, 1967; Zedeck & Baker, 1972). That is, to the extent that there are fundamental differences in orientation across different roles relative to ratees, high interrater reliability coefficients cannot be expected.

The second implication of the findings of the present research concerns the practical or operational aspects of appraisal. In this regard, the specific nature of the differences found here between hierarchical levels is noteworthy. This study found superiors giving higher scale values to incidents than subordinates, contrary to the hypothesis.
However, in keeping with their role prescriptions as critics of performance, superiors reversed this lenient orientation when actually evaluating the ratees and gave lower ratings than did the peer raters at each of two time periods. This pattern of results has also been noted in past research (e.g., Klimoski & London, 1974; Zedeck et al., 1974). Because many unsupported suggestions are offered to PA system designers concerning the utility of using raters from more than one role in PA, differences between role perceptions should be explored within each organization. Assessment of groups' perceptions regarding desired performance, such as was done here, would enable decision makers to ascertain empirically whether or not members of two roles have enough of a common perspective regarding performance to use the same rating scale format and whether to anticipate agreement from them.

The present study, as well as other recent PA research (e.g., Grey & Kipnis, 1976; Hakel, 1974; Scott & Hamner, 1975), signals a marked shift from an almost total concentration on appraisal formats and their psychometric properties to the investigation of the various rater characteristics that influence the results and operation of PA systems. This recent research emphasis has advanced understanding of the PA process itself (i.e., performance recall, trait attribution, human judgment and decision making, and perceptual selection and bias), as well as of the outcomes of the process (i.e., the ratings themselves).

REFERENCES
1. Barrett, R. S. "Performance Suitability and Role Agreement: Two Factors Related to Attitudes," Personnel Psychology, Vol. 16 (1963), 345-367.
2. Bernardin, H. J., and K. M. Alvares. "The Effects of Organizational Level on Perceptions of Role Conflict Resolution Strategy," Organizational Behavior and Human Performance, Vol. 14 (1975), 1-9.
3. Biddle, B. J., and E. J. Thomas. Role Theory (New York: Wiley, 1966).
4. Borman, W. C. "The Rating of Individuals in Organizations: An Alternate Approach," Organizational Behavior and Human Performance, Vol. 12 (1974), 105-124.
5. Cummings, L. L. "A Field Experimental Study of the Effects of Two Performance Appraisal Systems," Personnel Psychology, Vol. 26 (1973), 489-502.
6. Cummings, L. L., and D. P. Schwab. Performance in Organizations (Glenview, Ill.: Scott Foresman, 1973).
7. Dickinson, T. L., and T. L. Tice. "A Multitrait-Multimethod Analysis of Scales Developed by Retranslation," Organizational Behavior and Human Performance, Vol. 8 (1973), 421-438.
8. Fogli, L., C. L. Hulin, and M. R. Blood. "Development of First-Level Behavioral Job Criteria," Journal of Applied Psychology, Vol. 55 (1971), 3-8.
9. Graen, G. "Role-Making Processes within Complex Organizations," in M. D. Dunnette (Ed.), Handbook of Industrial and Organizational Psychology (Chicago: Rand McNally, 1976), 1201-1246.
10. Grey, R. J., and D. Kipnis. "Untangling the Performance Appraisal Dilemma: The Influence of Perceived Organizational Context on Evaluative Processes," Journal of Applied Psychology, Vol. 61 (1976), 329-335.
11. Hakel, M. D. "Normative Personality Factors Recovered from Ratings of Personality Descriptors: The Beholder's Eye," Personnel Psychology, Vol. 27 (1974), 409-421.
12. Heneman, H. G. "Comparison of Self and Superior Ratings of Managerial Performance," Journal of Applied Psychology, Vol. 59 (1974), 638-642.
13. Jones, E. E., and J. W. Thibaut. "Interaction Goals as Bases of Inference in Interpersonal Perception," in R. Tagiuri and L. Petrullo (Eds.), Person Perception and Interpersonal Behavior (Stanford, Calif.: Stanford University Press, 1958), 151-178.
14. Katz, D., and R. L. Kahn. The Social Psychology of Organizations (New York: Wiley, 1966).
15. Kavanagh, M. J., A. C. MacKinney, and L. Wolins. "Issues in Managerial Performance: Multitrait-Multimethod Analyses of Ratings," Psychological Bulletin, Vol. 75 (1971), 34-49.
16. Kirchner, W. K. "Relationships between Supervisory and Subordinate Ratings for Technical Personnel," Journal of Industrial Psychology, Vol. 3 (1965), 57-60.
17. Klimoski, R. J., and M. London. "Role of the Rater in Performance Appraisal," Journal of Applied Psychology, Vol. 59 (1974), 445-451.
18. Landy, F. J., and R. M. Guion. "Development of Scales for the Measurement of Work Motivation," Organizational Behavior and Human Performance, Vol. 5 (1970), 93-103.
19. Lawler, E. E. "The Multitrait-Multirater Approach to Measuring Managerial Job Performance," Journal of Applied Psychology, Vol. 51 (1967), 369-381.


20. Miner, J. B. "Management Appraisal: A Capsule Review and Current References," in W. L. French and D. Hellriegel (Eds.), Personnel Management and Organization Development (Boston: Houghton Mifflin, 1971), 247-261.
21. Schneier, C. E., R. W. Beatty, and J. R. Beatty. "An Empirical Investigation of Perceptions of Rater Behavior Frequency and Ratee Behavior Change Using Behavioral Expectation Scales (BES)" (Unpublished paper, University of Maryland, 1976).
22. Scott, W. E., and W. C. Hamner. "The Influence of Variations in Performance Profiles on the Performance Evaluation Process: An Examination of the Validity of the Criterion," Organizational Behavior and Human Performance, Vol. 14 (1975), 360-370.
23. Thornton, G. C. "The Relationship Between Supervisory and Self-Appraisals of Executive Performance," Personnel Psychology, Vol. 21 (1968), 441-455.
24. Zedeck, S., and H. T. Baker. "Nursing Performance as Measured by Behavioral Expectation Scales: A Multitrait-Multirater Analysis," Organizational Behavior and Human Performance, Vol. 7 (1972), 457-466.
25. Zedeck, S., N. Imparato, M. Krausz, and T. Oleno. "Development of Behaviorally Anchored Rating Scales as a Function of Organizational Level," Journal of Applied Psychology, Vol. 59 (1974), 249-252.

Academy of Management Journal 1978, Vol. 21, No. 1, 135-140

INFLUENCE SOURCES OF PROJECT AND FUNCTIONAL MANAGERS IN MATRIX ORGANIZATIONS

EDWARD J. DUNNE, JR.
MICHAEL J. STAHL
LEONARD J. MELHART, JR.
Air Force Institute of Technology

A topic of interest to those who study and practice project management is the influence or authority structure in an organization which contains projects. Formal authority typically resides primarily with managers in functional areas (engineering, procurement, production, et cetera), but a project manager usually has responsibility for coordinating efforts across several functional areas. The influence exerted must therefore be based on more than formal authority.

French and Raven (1959) classify five different power bases as sources of influence: legitimate power, reward power, coercive power, expert power, and referent power. This typology has been examined in functional organizations. Ivancevich and Donnelly (1970) found that expert and referent power were positively associated with measures of organizational effectiveness. Bachman, Bowers, and Marcus (1968) found that legitimate power was important for complying with supervisors' requests and that expert power was positively associated with subordinate satisfaction and performance.

In project management organizational settings, Lucas (1973) and Hodgetts (1968) both interviewed project managers concerning sources of influence. Lucas found that project managers tend to discount formal authority and rate personal persuasiveness as the most important source of influence. Hodgetts found that the most important techniques project managers used to supplement authority were technical competence and persuasion. On the
