
The Journal of Emergency Medicine, Vol. 43, No. 4, pp. 720-727, 2012
Copyright © 2012 Elsevier Inc. Printed in the USA. All rights reserved
0736-4679/$ - see front matter

doi:10.1016/j.jemermed.2011.05.069

Education
REPORTER-INTERPRETER-MANAGER-EDUCATOR (RIME) DESCRIPTIVE RATINGS AS AN EVALUATION TOOL IN AN EMERGENCY MEDICINE CLERKSHIP
Douglas S. Ander, MD, Joshua Wallenstein, MD, Jerome L. Abramson, MD, PhD, Lorie Click, MPH, and Philip Shayne, MD
Department of Emergency Medicine, Emory University School of Medicine, Atlanta, Georgia
Reprint Address: Douglas S. Ander, MD, Department of Emergency Medicine, Emory University School of Medicine, 49 Jesse Hill Jr. Dr., Atlanta, GA 30303
Received: 27 October 2010; Final submission received: 5 January 2011; Accepted: 28 May 2011

Abstract

Background: Emergency Medicine (EM) clerkships traditionally assess students using numerical ratings of clinical performance. The descriptive ratings of the Reporter, Interpreter, Manager, and Educator (RIME) method have been shown to be valuable in other specialties. Objectives: We hypothesized that the RIME descriptive ratings would correlate with clinical performance and examination scores in an EM clerkship, indicating that the RIME ratings are a valid measure of performance. Methods: This was a prospective cohort study of an evaluation instrument for 4th-year medical students completing an EM rotation. The study received exempt Institutional Review Board status. EM faculty and residents completed shift evaluation forms that included both numerical and RIME ratings. Students completed a final examination. Mean scores for RIME and clinical evaluations were calculated. Linear regression models were used to determine whether RIME ratings predicted clinical evaluation scores or final examination scores. Results: Four hundred thirty-nine students who completed the EM clerkship were enrolled in the study. After excluding items with missing data, there were 2086 evaluation forms (based on 289 students) available for analysis. There was a clear positive relationship between RIME category and clinical evaluation score (r2 = 0.40, p < 0.01). RIME ratings correlated most strongly with patient management skills and least strongly with humanistic qualities. Only a very weak correlation was seen between RIME and the final examination. Conclusion: We found a positive association between RIME and clinical evaluation scores, suggesting that RIME is a valid clinical evaluation instrument. RIME descriptive ratings can be incorporated into EM evaluation instruments and provide useful data related to patient management skills. © 2012 Elsevier Inc.

Keywords: undergraduate medical education; evaluation

INTRODUCTION

Students on emergency medicine (EM) clerkships are evaluated using a variety of evaluation methods (1). Although clerkship directors use a variety of other evaluation instruments to assess clinical competencies, global assessments of live clinical performance by faculty and residents remain the predominant component of evaluations in most EM clerkships. EM course directors face unique challenges compared to other disciplines. In most other specialty clerkships, there is an ongoing relationship between the student and the teacher/evaluator ranging from weeks to months or longer. Students in EM may work with multiple faculty members, and a single faculty member may not work with a student for more than a single shift. At our institution, during a 4-week rotation students may work with 10 different faculty members and see as few as 4-6 patients per shift. Although this arrangement has some advantages for the student and the clerkship director, it poses unique challenges to the learner and the teacher/evaluator, primarily the delivery of reliable performance assessment and constructive feedback based on clinical interactions.


Despite their ease of use, clinical evaluation scores lack interobserver reliability and suffer from limited discrimination between evaluation domains (2-4). Additionally, clinical scores may not give the student descriptive anchors that adequately describe weaknesses.

First described in 1999, the Reporter-Interpreter-Manager-Educator (RIME) terminology provides a framework for assessing a student's performance in the clinical setting (5,6). The RIME approach to assessment was developed for Internal Medicine for use during formal student evaluation sessions, providing teachers with a simple framework for categorizing student progress that the student would accept as valid feedback (5,6). The reliability of RIME has been established across geographically distant sites, and when used in a system with evaluation sessions it has been found to correlate with students' performance on the National Board shelf examination and to detect deficiencies in professionalism. It has also been shown to forecast low scores for performance as an intern when used in a comprehensive system including evaluation sessions and committee review (7-9). One study comparing the RIME system to standard clerkship evaluation forms demonstrated that RIME could better detect and describe changes in a student's performance over time (10). Another study demonstrated that it helps students understand their performance during feedback sessions (11).

Although the advantages of RIME over traditional clinical evaluation have been shown, there are also barriers to using it in EM. RIME was originally designed to provide a standard vocabulary for describing the progress of trainees within formal evaluation sessions, and the published work on the reliability and validity of RIME relates to its use in such sessions. However, many clerkship directors in Internal Medicine utilize RIME without formal evaluation sessions with students (12). The RIME terminology has been shown to enhance the evaluation and feedback process, but it has never been used in the setting of an EM clerkship.

The objective of our study was to determine the extent to which RIME descriptive ratings in the EM setting correlate with clinical evaluation scores and performance on a multiple-choice final examination. We hypothesized that the RIME descriptive rating would correlate with these other evaluation methods, suggesting that RIME terminology is a valid method of assessment in an EM clerkship.

METHODS

Study Design

This was a prospective cohort study of an evaluation instrument for 4th-year medical students completing a rotation in EM. Because the evaluation instrument was an addition to the standard evaluation process, the study was considered exempt by the Institutional Review Board.

Study Setting and Population

This study was conducted at a university medical school with a required 4th-year EM clerkship. The clerkship utilizes five distinct training sites: a county hospital, a university medical center, a community hospital, and two pediatric hospitals. All evaluators were faculty or senior residents in the Department of Emergency Medicine. The subjects were 4th-year medical students enrolled in our clerkship between 2005 and 2007, including both students from our institution and visiting students from other medical schools.

Evaluation Instrument

The study evaluation form was developed based on a review of evaluation forms used in other programs and a review of the literature, and it underwent final review and approval by the Undergraduate Education Committee within our Department. The evaluation instrument contained several components. Part 1 used the RIME descriptive rating, which was based on individual shift interactions between the evaluator and the student; our definition of an Educator was somewhat different from what was described in the original papers. We used the RIME descriptive rating during individual shift encounters with students, during which they may see 4-6 patients with the evaluator (Figure 1). Part 2 of the form was a traditional global assessment of live performance using detailed clinical evaluation of 15 EM competencies grouped within the framework of the six Accreditation Council for Graduate Medical Education core competencies (Figure 1) (13).

Study Protocol

EM faculty and residents were trained on the proper use of the form and its components at the beginning of the academic year. They received a lecture describing the theory behind the evaluation instrument, how to use the form, and case examples. During orientation to the clerkship, students were asked to document their field of interest and their school affiliation. The students received instruction on the distribution of the form to the faculty and residents. Forms were completed using either a paper form or a Web page, and all data were compiled in an internally developed database. At the end of the rotation, the students completed a 71-question multiple-choice examination covering material from our didactic series, which was scored from 0 to 100% correct. This test was developed internally by the Clerkship Director, EM faculty, and the EM Education Committee.

Figure 1. Clinical evaluation form. The form requires written comments on the student's overall performance (strengths, areas needing improvement, specific examples, and a plan for improvement) and asks whether there are any professionalism concerns with the student (a "yes" answer requires specific comments). Part 1 asks the evaluator to check each RIME step the student has consistently reached, using the following descriptions:

Reporter level student: Performs acceptably in some areas of evaluation but clearly needs improvement in others.
Interpreter level student: Performs acceptably in most areas of evaluation. Obtains and reports basic information accurately; beginning to interpret; some attempt to actively manage patient care; solid personal/professional qualities.
Manager level student: Clearly well above average in most areas of evaluation. Proceeds consistently to interpreting data; solid ability to actively manage patient care.
Educator level student: Outstanding ratings in most major areas of evaluation. Open to new knowledge and skilled in identifying questions that can't be answered from textbooks. Consistently able to manage patient care. Performs at a level far superior to his or her level of training.

Part 2 is a rating grid covering 15 areas of evaluation grouped under the six ACGME core competencies: Patient Care (history taking, physical examination, problem solving/management plans, patient management skills, procedural skills, health promotion); Medical Knowledge (knowledge base); Practice-Based Learning (use of medical literature); Interpersonal and Communication Skills (humanistic qualities, works as part of a health care team, written note, presentation skills); Professionalism (work ethic, sensitivity); and Systems-Based Practice (cost-effective health care). Each area is rated on a five-level scale with cumulative descriptive anchors ranging from clearly deficient to outstanding, plus a "Not Observed" option; evaluators indicate the level at which the student performs consistently.

Data Analysis

Categorical variables, including year of evaluation, month of evaluation, type of evaluator, medical school, and RIME classification, were summarized as percentages. Continuous variables, the clinical evaluation score and the examination score, were summarized as means (± SD). For each of the 15 competencies, a student was graded on a scale of 0-4 points, with 0 representing the lowest level of competence and 4 the highest. Descriptive anchors were added to each level of each competency to improve the reliability of this portion of the evaluation form. Points from 9 of the 15 items were then summed to arrive at an overall clinical evaluation score, with possible scores ranging from 0 (lowest) to 36 (highest). We excluded procedural skills, health promotion, use of medical literature, medical note, sensitivity, and cost-effective health care because a predominance of evaluators documented that these items were not observed; missing data for these items ranged from 53% to 80%.

The next step in the analysis was to assess the extent to which RIME scores were associated with the other methods of evaluating students, namely the clinical evaluation score and the final examination score. These associations were assessed with linear regression models in which the RIME categories were the independent variables (entered as a series of dummy variables representing each category) and the clinical evaluation score or final examination score was the dependent variable (entered as a continuous variable). From the models, we calculated the mean level of each outcome variable at each RIME level. Additionally, as a measure of the association between RIME categories and the outcome variables, r2 values were derived from each model; these r2 values represent the amount of variation in each outcome explained by the RIME categories. We ran unadjusted models as well as models adjusted for potential predictors of the outcome, such as year of evaluation, evaluator status (faculty, resident, pediatrics), and school of the student. The data set included multiple evaluations per student; thus, the unit of analysis in the models was the evaluation. To correct for the fact that evaluations were correlated within the same student, p-values in our models were calculated with robust standard errors. All analyses were based on subjects who had complete data on all variables of interest. All data were analyzed with Stata Version 10 (Stata Corporation, College Station, TX).
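Although the analyses were run in Stata, the unadjusted model is straightforward to sketch in Python. The snippet below is a minimal illustration rather than the study's actual code: the data file and every column name (rime, student_id, and the nine item ratings) are hypothetical stand-ins, and the clustered covariance mirrors the paper's use of robust standard errors to account for multiple evaluations per student.

```python
# Minimal sketch of the paper's unadjusted analysis, assuming a
# hypothetical evaluations.csv with one row per shift evaluation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")

# Overall clinical evaluation score: sum of the 9 retained items,
# each rated 0-4, giving the paper's 0-36 scale. Column names are
# illustrative, not from the original data set.
items = [
    "patient_management", "problem_management", "knowledge_base",
    "history_taking", "presentation_skills", "physical_examination",
    "work_ethic", "team_work", "humanistic_qualities",
]
df["clinical_score"] = df[items].sum(axis=1)

# C(rime) expands the four RIME categories into dummy variables;
# standard errors are clustered by student because each student
# contributes multiple evaluation forms.
fit = smf.ols("clinical_score ~ C(rime)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]}
)
print(fit.summary())
print(f"r2 due to RIME categories: {fit.rsquared:.2f}")

# Model-based mean clinical evaluation score at each RIME level.
levels = pd.DataFrame({"rime": ["Reporter", "Interpreter", "Manager", "Educator"]})
print(fit.predict(levels))
```

An adjusted model would simply extend the formula, for example clinical_score ~ C(rime) + C(year) + C(evaluator_status) + C(school), paralleling the covariates described above.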

RESULTS

Four hundred thirty-nine students who completed the EM clerkship were enrolled in the study, and 4834 shift evaluation forms were submitted for these students. After excluding all items and covariates with missing data (portions of the 15 competencies), 2086 evaluation forms for 289 students were available for analysis. There was no apparent difference between the included and excluded shift evaluation forms by either type of evaluator or final student grade: the percentage of evaluations completed by each type of evaluator was very similar for included evaluations (26% by residents, 58% by faculty, and 16% by pediatric faculty) and excluded evaluations (23%, 56%, and 21%, respectively), and the final grade distribution was identical for the two groups (22% A, 74% B, and 4% C or lower).

Descriptive statistics are presented in Table 1. Evaluations occurred during the years 2005-2007, and the most common time of year for an evaluation was September through December. The majority of evaluators were EM (non-pediatric) faculty and residents, and two-thirds of the subjects were students from our institution.
Table 1. Descriptive Statistics for Emergency Medicine Clerkship Evaluations (n = 2086 Evaluations on 289 Students)

Year of evaluation, n (%): 2005, 1069 (51.2); 2006, 529 (25.4); 2007, 488 (23.4)
Calendar month of evaluation, n (%): January-April, 795 (38.1); May-August, 219 (10.5); September-December, 1072 (51.4)
Evaluator, n (%): resident, 538 (25.8); faculty, Department of Emergency Medicine, 1209 (58.0); pediatric faculty, Department of Emergency Medicine, 339 (16.2)
Student's school, n (%): Emory, 1403 (67.3); other, 683 (32.7)
RIME classification, n (%): Reporter, 86 (4.1); Interpreter, 502 (24.1); Manager, 1173 (56.2); Educator, 325 (15.6)
Clinical evaluation score, mean ± SD: 28.3 ± 5.1
Examination score, mean ± SD: 83.5 ± 6.7

RIME = Reporter-Interpreter-Manager-Educator method.



Table 2. Ability of RIME Categories to Explain Variance in Individual Items on the Clinical Evaluation Form

Individual Item from Clinical Evaluation Form    r2 Due to RIME Categories
Patient management                               .33
Problem management                               .33
Knowledge base                                   .30
History-taking                                   .28
Presentation skills                              .27
Physical examination                             .26
Work ethic                                       .21
Works as part of a team                          .20
Humanistic qualities                             .15


Note: all r2 values are significant at p < 0.001. RIME = Reporter-Interpreter-Manager-Educator method.
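The per-item r2 values reported in Table 2 follow the same modeling pattern, with one fit per evaluation item. A minimal sketch under the same hypothetical column names as the earlier snippet:

```python
# Hypothetical reproduction of Table 2: one OLS fit per evaluation
# item, recording the share of variance explained by RIME categories.
# File and column names are illustrative, not from the original study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")

items = [
    "patient_management", "problem_management", "knowledge_base",
    "history_taking", "presentation_skills", "physical_examination",
    "work_ethic", "team_work", "humanistic_qualities",
]
r2_by_item = {
    item: smf.ols(f"{item} ~ C(rime)", data=df).fit().rsquared
    for item in items
}
for item, r2 in sorted(r2_by_item.items(), key=lambda kv: -kv[1]):
    print(f"{item:22s} {r2:.2f}")
```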

The most common RIME descriptor used to classify the subjects was Manager (Table 1). Average clinical evaluation and final examination scores were 28.3 ± 5.1 and 83.5 ± 6.7, respectively, both toward the maximum of their scales. There was a clear positive relationship between RIME category and clinical evaluation score (r2 = 0.40, p < 0.01), and this result was essentially unchanged after adjustment for month of evaluation, evaluator, and school of the student. RIME descriptive ratings correlated most strongly with patient management skills and least strongly with humanistic qualities (Table 2); adjustment for other factors did not change these associations. There was only a weak association between the RIME descriptive rating and the final examination score.

DISCUSSION

RIME is increasingly being used by clerkship directors as part of the feedback and evaluation process. Published literature exists for its use in internal medicine and in obstetrics and gynecology, but there is no published literature studying its use in EM (14,15). The RIME methodology has proven advantageous over traditional numerical evaluations, and its use in EM clerkship evaluations could benefit both students and educators.

Our results show that RIME and traditional clinical evaluation scores are positively related, although the moderate r-squared between the two measures suggests that they may be measuring different concepts. RIME descriptive ratings were associated most strongly with technical competencies (patient and problem management, knowledge base, history-taking, and presentation skills) and less strongly, yet still reasonably, with professional competencies (work ethic, humanistic qualities, and team-working qualities). Based on these findings, we believe that RIME descriptive ratings are a valid evaluation modality in EM and may provide information that complements traditional clinical evaluations. We have also shown that when a RIME descriptive rating question is added to an end-of-shift evaluation, almost all faculty members complete both portions of the evaluation, suggesting that the addition of RIME will not detract from other portions of the evaluation. EM course directors can therefore gain useful predictive data through the addition of RIME descriptive ratings. Based on our study methodology, we are not able to state that RIME descriptive ratings are more valid than other forms of clinical evaluation in EM; future work could compare various evaluation tools against future performance in residency.

Limitations

Our methodology did not include a system for ongoing rater training and calibration. This may have affected the way raters used the RIME classification, although it is unusual for any clerkship or residency director to fully train and assess the ability of their faculty to use evaluation measures. The evaluation form included a description of each RIME classification as a guide, providing continuing education each time the form was completed. Faculty and residents were aware that the rating is not a grade but only an evaluation of the student's performance at one point in time. Grading takes into account several objective and subjective components, which is why we did not compare final grades to the RIME classification. Similarly, the reliability of the final examination is not known.

The form design, with both measurements on the same form, may have introduced bias. Because RIME differs from the typical methods of evaluation used during an emergency department shift, we believe that, despite appearing on the same form, the RIME measurement was not affected by the other scores. We compared the RIME score to the overall clinical score, but other measures may exist that are more reliable or valid assessments of clinical performance, such as observed patient assessments, objective structured clinical examinations, simulation, or future clinical performance. Given the lack of a gold standard, we compared RIME to the classic numerical rating of clinical performance, arguably the most common form of evaluation used in clinical clerkships. We are confident that this was a reasonable approach because our evaluation form has a sophisticated rating grid with a total of 15 categories, each with five descriptive anchors to assist the evaluator.
CONCLUSION

This is the first attempt by an EM clerkship to use a new evaluation instrument that has proven successful in other specialties. We found a positive association between RIME descriptive ratings and numerical clinical evaluation scores, indicating that RIME is a valid evaluation instrument in an EM clerkship that complements the classic evaluation form. The modest association suggests that RIME measures student characteristics that are not captured by our clinical evaluation form.

REFERENCES

1. Bandiera GW, Morrison LJ, Regehr G. Predictive validity of the global assessment form used in a final-year undergraduate rotation in emergency medicine. Acad Emerg Med 2002;9:889-95.
2. Noel GL, Herbers JE, Caplow MP, Cooper GS, Pangaro LN, Harvey J. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med 1992;117:757-65.
3. Ryan JG, Madel FS, Sama A, Ward ME. Reliability of faculty clinical evaluations of non-emergency medicine residents during emergency department rotations. Acad Emerg Med 1996;3:1124-30.
4. LaMantia J, Rennie W, Risucci DA, et al. Interobserver variability among faculty in evaluations of residents' clinical skills. Acad Emerg Med 1999;6:38-44.
5. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med 1999;74:1203-7.
6. Hemmer PA, Pangaro L. Using formal evaluation sessions for case-based faculty development during clinical clerkships. Acad Med 2000;75:1216-21.
7. Griffith CH 3rd, Wilson JF. The association of student examination performance with faculty and resident ratings using a modified RIME process. J Gen Intern Med 2008;23:1020-3.
8. Hemmer PA, Hawkins R, Jackson JL, Pangaro L. Assessing how well three evaluation methods detect deficiencies in medical students' professionalism in two settings of an internal medicine clerkship. Acad Med 2000;75:167-73.
9. Durning SJ, Pangaro LN, Lawrence LL, Waechter D, McManigle J, Jackson JL. The feasibility, reliability, and validity of a program director's (supervisor's) evaluation form for medical school graduates. Acad Med 2005;80:964-8.
10. Hemmer PA, Pangaro L. Can a descriptive evaluation system detect student growth during a clerkship? Using descriptive evaluation to detect student growth. Proceedings from the Annual 2000 Meeting of the Clerkship Directors of Internal Medicine. Teach Learn Med 2001;13:199-205.
11. DeWitt DE, Carline D, Paauw DS, Pangaro L. A pilot study of a "RIME" framework-based tool for giving feedback in a multispecialty longitudinal clerkship. Med Educ 2008;42:1205-9.
12. Hemmer PA, Papp KK, Mechaber AJ, Durning SJ. Evaluation, grading, and use of the RIME vocabulary on internal medicine clerkships: results of a national survey and comparison to other clinical clerkships. Teach Learn Med 2008;20:118-26.
13. Accreditation Council for Graduate Medical Education (ACGME). Common program requirements: general competencies. Available at: http://www.acgme.org/outcome/comp/GeneralCompetenciesStandards21307.pdf. Accessed April 21, 2010.
14. Battistone MJ, Pendeleton B, Milne C, et al. Global descriptive evaluations are more responsive than global numeric ratings in detecting students' progress during the inpatient portion of an internal medicine clerkship. Acad Med 2001;76:S105-7.
15. Ogburn T, Espey E. The R-I-M-E method for evaluation of medical students on an obstetrics and gynecology clerkship. Am J Obstet Gynecol 2003;189:666-9.


ARTICLE SUMMARY

1. Why is this topic important?
This is the first attempt by an emergency medicine (EM) clerkship to use a new evaluation instrument that has proven successful in other specialties.

2. What does this study attempt to show?
We hypothesized that the Reporter-Interpreter-Manager-Educator (RIME) descriptive ratings would correlate with clinical performance and examination scores in an EM clerkship, indicating that the RIME ratings are a valid measure of performance.

3. What are the key findings?
We noted a positive association between RIME and clinical evaluation scores, suggesting that RIME is a valid clinical evaluation instrument. RIME descriptive ratings can be incorporated into EM evaluation instruments and provide useful data related to patient management skills.
