
PHYSICIAN PROFILING AND CLINICAL PATHWAYS:

COMBINING THE TOOLS TO CHANGE PHYSICIAN RESOURCE

UTILIZATION

by

Earl Glendon Greenia

A Dissertation Presented to the


FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PUBLIC ADMINISTRATION)

December 2004

Copyright 2004 Earl Glendon Greenia


Table of Contents

List of Tables v

Abstract vii

Chapter I. Formulation and Definition of the Problem


Introduction 1
An Overview of Quality in Health Care 3
Need for the Study 5
Purpose of the Study 8
Definitions 9

Chapter II. Overview of Hospital, Intervention and Study Design


Overview of Study Hospital 11
Overview of the Intervention 13
Physician Profiling 13
Benchmarking 16
Clinical Pathways 17
All-Patient Refined Diagnosis Related Groups 17
Profile Development in the Study Organization 18
Pathway Development in the Study Organization 20
Intervention Dissemination 22
Overview of the Study Design 23
Selection of Diagnostic Groups 23
Experimental Group 24
Control Group 25
Comparability between Groups 26
Study Population 27
Summary 27

Chapter III. Literature Review and Hypothesis Development


Introduction 28
Changing Physician Behavior 29
Social Learning Theory 30
Introduction to Innovation Diffusion 33
Feedback 34
Clinical Audits 34
Physician Profiling 36
Benchmarking 39
Communication 43
Clinical Pathways as Communication 43

Profiles and Pathways as Innovation 47
Relative Advantage 50
Profiles and Clinical Outcomes 50
Pathways and Clinical Outcomes 51
Relative Advantage 52
Time and the Rate of Adoption 52
Physician Leaders 54
Social Identity 54
Organizational Citizenship Behavior 55
Complexity 58
Summary 59

Chapter IV. Methodology


Introduction 60
Review of Previous Methods 60
Statistical Design 64
Unit of Analysis 65
Data Elements 65
Assumptions 67
Data Preparation 68
Data Analysis 69
Hypothesis 1a 69
Hypothesis 1b 69
Hypothesis 2 69
Hypothesis 3 70
Hypothesis 4 70
Hypothesis 5 71
Hypothesis 6 71
Multiple Regression Analysis Model 72
Summary 73

Chapter V. Findings
Descriptive Analysis 74
Multiple Regression Analysis 77
Hypothesis 1a 81
Hypothesis 1b 82
Hypothesis 2 82
Hypothesis 3 83
Hypothesis 4 85
Hypothesis 5 85
Hypothesis 6 86
Summary 87

Chapter VI. Discussion
General Limitations 88
Use of Profiles and Pathways 89
Rate of Adoption 91
Cultural Integration and Physician Leadership 91
Complexity 93
Future Research 93

Bibliography 95

Appendices
Appendix 1, Text of cover letter that accompanied the profile 103
Appendix 2, Text of User Guide 104
Appendix 3, Sample Profile Report 105

List of Tables

2-1. Experimental APRDRG Group 24

2-2. Control APRDRG Group 25

2-3. Size of Control and Experimental Group 26

5-1. Experimental APRDRGs, Pre and post intervention, Length of Stay & Total Charges 74

5-2. Control APRDRGs, Pre and post intervention, Length of Stay and Total Charge 75

5-3. Readmissions and Complications, Experimental and Control Groups 75

5-4. Control Group Examined for Spill-Over Effect 76

5-5. Correlation Table 77

5-6. Average Length of Stay as Dependent Variable, Experimental Group 78

5-7. Average Total Charge as Dependent Variable, Experimental Group 79

5-8. Average Length of Stay as Dependent Variable, Control Group 80

5-9. Average Total Charge as Dependent Variable, Control Group 81

5-10. Experimental APRDRGs, Pre and post intervention, ANOVA Tests 82
5-11. Control APRDRGs, Pre and post intervention, ANOVA Tests 82

5-12. Experimental APRDRGs, Chi-Square Test, Complications and Readmissions 83

5-13. Control APRDRGs, Chi-Square Test, Complications and Readmissions 83

5-14. Experimental Group, Average Length of Stay by Quarter 84

5-15. Experimental Group, Average Total Charge by Quarter 84

5-16. Control Group, Average Length of Stay by Quarter 85

5-17. Control Group, Average Total Charge by Quarter 85

5-18. Average Length of Stay, Opinion leaders v Non-Opinion leaders, 1995 v 1996 86

5-19. Average Total Charge, Opinion leaders v Non-Opinion leaders, 1995 v 1996 86

5-20. Summary of Hypotheses and Findings 87

Abstract

Published studies on the use of clinical pathways and physician profiling to

change physician behavior have demonstrated varying impact. Researchers have

suggested combining various approaches and tools, but have not evaluated

combination interventions. This study contributes to the literature by applying

various theories to evaluate physician profiling, benchmarking, and pathway

dissemination in a community hospital.

The population for the study included physicians who cared for patients

within targeted diagnostic groups (APRDRGs) during calendar year 1996, with the

prior year used as the baseline. The experimental group consists of 10 APRDRGs.

To ensure consistent comparison, only physicians who provided care in both 1995

and 1996 were included in the analysis. In 1995, there were 256 physicians in the

experimental group caring for 3,944 patients. In 1996, these same physicians

provided care to 3,178 patients. The control group consists of 10 APRDRGs. In

1995, there were 246 physicians in the control group and 1,377 patients. In 1996,

there were 213 physicians and 1,018 patients.

One-way analysis of variance, Student’s t-tests, and chi-square tests were used to

examine differences in means for resource utilization and clinical outcomes in both

the pre- and post-intervention periods, comparing physicians who received the

intervention with those who did not, and physician leaders with non-leaders.

Regression analysis was used to examine the effects of cultural integration and pathway complexity.

The results suggest that the combined dissemination of physician profiles

and clinical pathways may change physician behavior. Specifically, overall length

of stay and total charges declined for physicians when provided the intervention.

There were no significant changes in readmission and complication rates between

the pre and post intervention periods.

The hypothesis that resource utilization patterns would be lower for

physician leaders than non-leaders was not supported. Nor was the hypothesis that

resource utilization would be lower for those physicians who were more culturally

integrated into the organization versus those less integrated.

The hypothesis that the less complex the clinical pathway, the greater the

reduction in resource utilization patterns was supported. Support was also provided

for the hypothesis that resource utilization patterns would decline over time.

The results also suggested that providing profiles and guidelines for a

specific set of diagnoses and procedures may not have a beneficial spillover effect

on diagnoses and procedures that differ in clinical nature.

Chapter I

Formulation and Definition of the Problem

Introduction

Over the last two decades the health care industry has been radically

transformed by the federal government’s reaction to increasing health costs that

have exceeded inflation and threatened its capacity to provide continued coverage for

the indigent and elderly. One of the most dramatic changes occurred in 1983 when

the Health Care Financing Administration replaced Medicare’s traditional

fee-for-service reimbursement structure with a prospective payment system. This

challenged the belief, long propagated by physicians and generally accepted by

others, that permitting the doctor to serve exclusively as the agent of the patient best

protects the patient's interests.

In replacing retrospective reimbursement with prospective payment, the

Federal government, in consultation with the medical profession, created Diagnosis

Related Groups (DRGs). This system assigns patients to mutually exclusive groups

based on clinical factors such as medical diagnosis, operative procedure, co-

morbidities and complications. Under the retrospective payment system, hospital

payment was based on actual costs; under DRGs, hospitals are paid a fixed amount

regardless of cost. Thus, the financial incentive for hospitals changed from

providing more care to providing less care.

With this shift, cost control became one of the most important management

strategies that differentiated successful from unsuccessful hospitals. Beginning in the

early 1990s, hospital executives implemented a variety of strategies to address

escalating expenses, and recognized that reducing length of stay was paramount in

containing costs (Cleverley and Harvey 1992).

It is the practice of physicians, through ordering tests and treatments, that

largely determines the financial success of the hospital in a managed care environment.

Physician decisions directly and indirectly influence the cost of care. Thus, one

way to reduce the cost of care is to influence the physician’s decision-making

process.

Methods to influence physician behavior and reduce length of stay have

included utilization review, benchmarking, physician profiling, and the use of clinical

pathways. Experts have recommended combining these methods, since traditional

strategies, such as continuing medical education conferences, have had little direct

impact on changing professional practice (Davis, et al., 1995).

The managed care environment continues to pressure physicians and hospitals

to reduce the cost of care without negatively affecting clinical outcomes. One way to

meet this challenge is to reduce unnecessary care or services provided to patients

during their hospital stay. Many institutions have implemented clinical pathway

programs designed to enhance physician awareness of best practices, with the goal of

reducing unnecessary treatment and ultimately costs. Similarly, physician profiles

have been used to make physicians aware of how their practice patterns impact cost by

comparing their performance to their colleagues. The use of feedback and profiling is

based on the observation that physicians usually know little about their aggregate

resource consumption patterns and even less about those of their peers. The rationale for

providing feedback and profiles is based on the assumption that physicians have a

strong professional motivation to conform to generally accepted practices and

provide care in a manner similar to peers.

An Overview of Quality in Health Care

Despite uncertainty about how to define and measure clinical quality,

interest in quality management and outcomes remains keen. The increasing focus

on quality stems from recognizing that value is only achieved by balancing quality

with cost. Measuring, monitoring, and improving outcomes have broad

implications for hospitals in managed care markets. To better understand the

research, a brief overview of quality management follows.

Donabedian (1980) argued that quality could be evaluated based on

structure, process, and outcomes. Structure encompasses physical factors, such as

buildings; and professional and institutional factors, like the regulatory and

financing environments in which care is delivered. Process refers to the actions that

health-care providers take to deliver care, such as performing examinations,

ordering tests, and prescribing medications. Outcomes are the end result of the

process interventions; i.e., the effect on the patient's health.

Much of the current focus is on exploring process and outcome measures.

There are advantages to using process measures instead of outcome measures for

performance evaluation purposes. It is easier for healthcare providers to accept

responsibility for their actions in providing care rather than for their patients'

outcomes, because there are numerous uncontrollable factors that affect outcomes.

Process measures are also useful in evaluating the quality of care for chronic

conditions for which the final outcome may take years to determine, such as

congestive heart failure or pulmonary disease. Thus, it is convenient to concentrate

on process measures rather than outcomes measures for performance measurement.

However, there are several clinical outcomes measures that are relatively easy to

obtain, such as readmission rates, nosocomial infections, and surgical

complications.

Quality management is a structured, systematic process for creating

organization-wide participation in planning and implementing continuous

improvement. The science of quality management is a diverse collection of

concepts and tools developed in the fields of statistics, engineering, operations

research, management science, market research and psychology. Quality

management concentrates on changing complex systems and processes in order to

continually improve organizational services and outputs. There are four dimensions

required for a successful program: 1) the cultural dimension, 2) the technical

dimension, 3) the strategic dimension, and 4) the structural dimension (Donabedian

1980). The cultural dimension refers to the underlying beliefs, values, norms, and

behaviors of the organization that support continuous quality improvement (CQI)

efforts. The technical dimension refers to the extent to which employees have been

trained in CQI tools and group decision-making processes that support improvement

efforts. A structured problem-solving approach that incorporates statistical methods

to diagnose problems and measure progress is essential. The strategic dimension

refers to the extent to which the organization’s improvement efforts are focused on

key priorities, with emphasis on the link between the improvement efforts and the

organization’s fundamental business objectives. Lastly, the structural dimension

refers to inter-organizational entities, such as top leadership, project teams, task

forces, work groups, and reporting mechanisms. This dimension integrates the

cultural, technical, and strategic dimensions.

Need for the Study

Healthcare organizations have suffered a steady decline in operating margins

in recent years while facing increased competition and pressure to provide higher

levels of customer service, quality of care, and innovation in delivery. The ability

to rapidly find, evaluate, and implement change that will lead to strategic

improvement is critical.

More than half of the physicians in the United States are subjects of either

clinical or economic profiling (Emmons and Wozniak, 1994). Presenting such peer-

comparison information feedback to physicians attempts to stimulate consensus on

treatment alternatives and allows them to make better-informed decisions about

resource inputs. Even though the results of before-and-after studies on profiling vary,

profiles are widely used as an information feedback mechanism. Unfortunately, most

studies on profiling have serious methodological limitations that restrict the strength of

their conclusions (Epstein 1991).

Further, there have been few investigations that analyze the difference in the

effectiveness of profiling under different circumstances. Epstein (1991) argues the

need to identify factors that determine the effectiveness of different interventions,

given the complexity of changing physician behavior. The Physician Payment Review

Commission (1992) found that most profiling studies were limited to the use of a

specific treatment or service, such as laboratory tests or pharmaceutical agents, instead

of examining all services associated with a clinical encounter. The issue of high costs

is closely associated with the provision of multiple services that may or may not be

related. In their meta-analysis of randomized trials, Balas, et al. (1996) discovered

that while some randomized clinical trials of information feedback have been

successful in changing practice patterns, several other trials indicated inconclusive or

non-significant results. The randomized, controlled trial literature suggests that

profiling can produce a modest, but statistically significant effect on changing

physician behavior (Kim, et al., 1999).

Clinical pathways and physician profiling have become popular tools for

changing physician utilization patterns. To date, the published studies demonstrate

varying impact on ability to improve clinical care. Spoeri and Ullman (1997) argue

that the need for profiling will continue for two reasons: there will be continued

pressure to reduce healthcare costs, and reluctance to micromanage physician

decisions about clinical resource use. It is clear that further study is required on the

effects of modifying important characteristics, such as the content, source, timing,

recipient, and format of feedback.

While researchers have suggested combining various approaches, they have

not been able to empirically ascertain the best mix of complementary interventions

(Thomson, et al., 1999). A meta-analysis performed by Bero (1998) suggests that

passive dissemination of information and small-dose education are generally

ineffective, but guideline dissemination is effective. He found disparate results for

any single tool and concluded that the use of multiple tools is more effective.

Thompson (1997) argues that performance measurement is most useful when

used as a formative tool as part of a broad set of quality-improvement activities.

However, none of the existing research has examined the impact of a combined

program consisting of profiling, benchmarking, and clinical pathways. Goldfield

(1999) believes significant medical leadership and support is critical in gaining

physician acceptance. Here too, no studies have examined whether physician

participation in the development of such programs impacts the effectiveness of the

intervention.

Lastly, little of the existing research examines physician profiling or the use

of clinical guidelines from any major theoretical perspective. This research draws

from social learning, diffusion, social identity and organizational citizenship

behavior theories to develop a framework to review these interventions.

Purpose of the Study

This study contributes to the healthcare management literature by evaluating a

comprehensive intervention implemented at a 350-bed community hospital in southern

California. A pragmatic goal of this research is to assist hospital administrators with

implementing similar programs. Additionally, this study attempts to address some of

the weaknesses of previous research identified in the previous section. Specifically,

this study examines:

1. All services (inputs) associated with the hospital encounter, as measured by

total charges.

2. Regular dissemination of profiles and pathways over the long-term. (As will

be demonstrated in chapter three, most of the existing studies focus on

relatively short periods of time).

3. Changes in both clinical outcomes measures and resource utilization metrics.

4. Some important characteristics of profiling and pathways, such as the content,

source, timing, and format.

5. The role of physician leaders in developing and disseminating profiles and

pathways.

6. The impact of a combined program consisting of profiling, benchmarking, and

clinical pathways.

7. Profiling and clinical pathways from different theoretical perspectives.

This empirical study seeks to address the following research questions:

1. Do resource utilization patterns (as measured by length of stay and total

charges) differ between physicians who receive profiles and clinical

pathways versus those who do not receive profiles and clinical pathways?

2. Is there a difference in outcomes (as measured by readmission and

complication rates) between physicians who receive profiles and clinical

pathways versus those who do not receive profiles and clinical pathways?

3. Do resource utilization patterns differ between those physicians culturally

integrated in the organizational culture versus those physicians who are

not as culturally integrated?

4. Do resource utilization patterns differ between physician leaders who

receive profiles and clinical pathways versus the non-leader physicians

who receive profiles and clinical pathways?

5. Is there an adoption rate for physician use of profiles and clinical

pathways? (As measured by changes in resource utilization over time).

6. Does the complexity of the clinical pathway impact acceptance and use of

the tool, as measured by resource utilization?

Definitions

Key terms defined for the purposes of this study:

1. Benchmarking: the comparison of a particular process or outcome against

an identified best practice.

2. Clinical Guideline: systematically developed statements regarding

preferred clinical management strategies used to educate or assist

physician decision-making under specific clinical circumstances.

3. Clinical Pathway: disease or procedure-specific operational guidelines

that provide recommendations for delivery of clinical care, displayed by

day of hospitalization in a modified Gantt chart format.

4. Feedback: the provision of clinical and administrative data to physicians

about their own practice and outcomes.

5. Information Sharing: the distribution of clinical pathways to physicians.

6. Physician Leader: physicians who, at any point during the study, held leadership roles (elected medical committee member, elected department chair, appointed medical director), actively participated in the profile or pathway development process, or were identified as influential by the organization.

7. Physician Profile: physician-specific reports with patient-level and

procedure-specific detail, outlining precisely how physicians vary from

their peers in the way they use hospital resources to provide care, and

select outcome measures.

Chapter II

Overview of the Hospital, Intervention, and Study Design

Overview of Study Hospital

The study organization is a 350-bed community hospital located in southern

California. It is a full-service general acute care hospital with an emergency room.

Although not affiliated with a medical school, it has a small family practice residency

program. The organization discharges approximately 11,000 patients from the

inpatient setting in any given year. The medical staff consists of approximately 750

physicians; of these, about 200 admit over 90% of the patients. The hospital has

existed for over eighty years and is one of two in the city; there is fierce competition

between the two organizations.

There is a high rate of managed care penetration, with approximately

seventy percent of all patients in a managed care program of some sort. Further, the

hospital and a large affiliated medical group (an independent-practice association)

were early participants in the Medicare capitation/risk-sharing agreements. In this

arrangement, both the hospital and the medical group benefit financially when costs

are held below the payments received. Over time, this partnership has fostered a

shared vision between the hospital and medical group to aggressively manage the

cost of caring for nearly 13,000 capitated members.

The organization enjoys a long and successful history of implementing

CQI programs, and uses an interdisciplinary approach when implementing

improvement projects. The quality management department consists of a director

who is a registered nurse, two quality assurance specialists (both registered nurses),

and a master’s prepared decision support analyst. These individuals provide

technical and facilitation support for all quality projects. The Quality Outcomes

Committee, a medical staff committee with ex-officio members from the executive

management team, identifies and prioritizes opportunities and sanctions the

initiation of all projects.

Davis, et al. (1995) found that presenting variance analyses, along with

length of stay and charge data, to demonstrate the degree to which

resource utilization can be standardized can positively impact the bottom line. Over

the past ten years, reduction of unnecessary variation has slowly gained acceptance as

a technique to reduce length of stay and hospital charges, while maintaining quality,

and offers considerable advantages in the managed care environment.

In response to declining revenues and increasing costs, hospital

administration asked the Quality Management department to develop and

implement a physician-profiling and clinical pathway program. The project was

named the “Best Practices Initiative.” Senior administrative and medical leadership

believed that the program of profiles and pathways was compatible with the long-

standing acceptance of continuous quality improvement of the organization. The

project was given status as a strategic priority and monthly status reports were

provided at several medical staff department meetings where peer-review occurred

(Internal Medicine, Family Practice, Surgery, Obstetrics & Gynecology, and

Pediatrics), as well as committees that dealt with broad functional issues (Quality

Outcomes, Utilization Management, Pharmacy & Therapeutics, Medical Executive,

and the Governing Board).

The project co-directors (the director of the quality management department

and the master’s prepared decision support analyst) reviewed the relevant literature

to determine critical success factors for implementing such a program. They

selected the organization’s top-10 (in terms of volume) diagnosis groups for

inclusion in the program.

Overview of the Intervention

Physician Profiling

Medical practice profiling has gained prominence in recent years as insurance

companies, managed care organizations, and government agencies have used and

promoted this method of analyzing resource utilization (Brand, et al., 1995). Physician

profiling focuses on patterns of care rather than specific clinical decisions; the data

helps identify and characterize differences in practice style to which individual

physicians or hospital staffs can respond. Profiling is not based on rigid rules; it can

accommodate legitimate exceptions in which the appropriateness of clinical decisions

is judged separately (Welch, et al., 1994). Further, profiling can play an important role

in performance assessment, utilization review and quality improvement (Lasker, et al.,

1992).

A typical profiling report examines both resource measures, such as length of

stay and ancillary charges, as well as outcomes measures, like readmission, mortality,

and surgical complication rates for patients treated for a specific illness during a fixed

time frame, usually one year. The primary goal of profiling is to make physicians

aware of how their practice impacts cost by comparing their performance to their

colleagues.

Kongstvedt (1996) states that the most important use of profiles is producing

feedback to assist the physicians with understanding and modifying their practice

style. He summarizes key functions of profiling:

1. Health plans can use profiles and other information to make decisions

about including or excluding physicians from their network.

2. Medical groups and health plans can use the profiles to allocate bonuses

or risk-pool incentive funds.

3. Profiles may be used to provide intangible rewards, like exempting

physicians with favorable profiles from utilization review.

4. Profiles can be used to compare, or benchmark, physicians.

5. Profiling can be used to identify physicians with low-cost, high-quality

outcomes, and disseminate these practices, in the form of guidelines or

pathways, throughout the organization.

Of these key uses, the hospital’s administrative and medical leadership

committed to using profiles to compare and benchmark physicians and to identify

physicians with low-cost, high-quality outcomes. The goal was to study, develop

and disseminate practice guidelines throughout the organization. The leadership

group also agreed to not use resource utilization data in the credentialing and

reappointment process; that is, physicians did not need to fear that they would be

removed from the staff if their practice patterns were unfavorable when compared to

their peers.

Kongstvedt (1996) offers several suggestions for designing profiles, criteria he

believes are critical in obtaining physician acceptance. These are summarized below:

1. Feedback must be consistent and understandable.

2. Providing regular and accurate data is vital to changing behavior.

3. Frequent and regular contact will help create an environment for positive

change.

4. Data must encompass an adequate time period.

5. Reports should be no more than one or two pages in length.

6. Graphics should be used to convey large amounts of data.

Further, Kassirer (1994) theorized that feedback is likely to be more

successful when it is individualized, clinically specific, close in time to the

behavior, targeted to the correct physician, and when there is an agreed practice

norm.

Benchmarking

Benchmarking is the comparison of a particular work process metric or

outcome internally or against other organizations such as the top competitor, functional

leader, or even to an unrelated industry. The purpose is to identify best practices or a

“gold standard,” with the goal of setting competitive performance measure levels to

surpass.

Kongstvedt (1996) states that profiles are of limited utility unless the results are

compared with some type of standard. The most common way of comparing results is

to provide data for the individual physician in comparison to one or more of the

following:

1. Hospital average results – this is a simple average for all practitioners

within the organization, and is the least sophisticated approach.

2. Specialty or peer group – this compares the practitioner within their own

specialty.

3. Peer-group, adjusted for severity – this is the most complicated approach,

but as described earlier, the most meaningful, and the method most likely

to be accepted by the physicians.
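
To make these three comparison levels concrete, the following is a minimal sketch in Python with pandas; the DataFrame `discharges` and its column names are hypothetical illustrations, not the study's actual data structures:

```python
import pandas as pd

# Hypothetical discharge-level extract: one row per patient discharge.
# Assumed columns: physician_id, specialty, aprdrg, severity, length_of_stay
discharges = pd.read_csv("discharges.csv")

# 1. Hospital average: one simple mean across all practitioners.
hospital_avg = discharges["length_of_stay"].mean()

# 2. Specialty (peer-group) average: the mean within each specialty.
specialty_avg = discharges.groupby("specialty")["length_of_stay"].mean()

# 3. Severity-adjusted peer-group average: the mean within each
#    specialty / APRDRG / severity cell, so a physician is compared only
#    with peers caring for patients of similar complexity.
adjusted_avg = (
    discharges.groupby(["specialty", "aprdrg", "severity"])["length_of_stay"].mean()
)
```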

For clinical functions, there are many potential, ready-made networks of

people with similar problems and interests (Camp and Tweet 1994). Hospitals and

physicians could compare themselves to similar institutions or providers, competitors,

or the best in the industry on a severity-adjusted DRG-by-DRG basis.

Clinical Pathways

Clinical pathways are an extension of the critical path method; they are

operational versions of guidelines that attempt to explicitly define and codify

diagnostic and treatment processes. They are planning tools that specify the use and

timing of procedures in relation to the patient's recovery; most have tables of

treatments and medications, by day, displayed in a Gantt chart format.

Once low-cost, high-quality practices are identified, organizations can develop

clinical pathways based on these practices, disseminate the pathways to other

providers (Bernstein 1998), and benchmark all providers against best practices. Many

believe that adherence to such a pathway can reduce variation in clinical management

and improve clinical outcomes while reducing the average length of stay and

associated costs.

All-Patient Refined Diagnosis Related Groups

It is important that the data used in profiles be severity adjusted for medical

and non-medical factors known to affect clinical performance, and that sufficient

numbers of events be measured to ensure that differences are not due to chance

alone (Physician Payment Review Commission, 1992; Orav, et al., 1996). Salem-

Schatz, et al. (1994) caution that failure to adjust for case mix in physician practice

profiles may lead to overestimates of variation and misidentification of outliers; if

unadjusted practice profiles are used for decisions about education, sanctions, or

employment, physicians may be subject to inequitable decisions and actions.

Further, doctors who believe they are providing high-quality care are unlikely to

accept evidence to the contrary, unless the severity of illness is considered.

Fortunately, there are a number of case-mix adjustment techniques that permit the

comparison of severity and outcomes by factoring co-morbidities, age, and pre-

existing conditions.

The study site used the all-patient refined diagnosis related groups (APRDRG)

system developed by the 3M Corporation. The methodology is similar to the DRG

system used by the Medicare program, with some significant differences; most

notably, the APRDRG system calculates a patient illness severity score. Thus, the

APRDRG methodology provides a sophisticated tool for comparing risk-adjusted

consumption of resources. It uses ICD-9 codes of primary and secondary diagnoses,

the interaction of secondary diagnoses, co-morbidities, age, and non-operating room

procedures to calculate a severity value. The method assigns patients to one of four

discrete complexity of illness values: 1 (Minor), 2 (Moderate), 3 (Major), and 4

(Extreme). A high complexity of illness is primarily determined by the interaction of

multiple diseases. This tool, used nationally, reduces noise due to patient factors and

affords better between-physician comparisons.

Profile Development in the Study Organization

The project co-directors initially met with each elected medical department

chairperson and medical executive committee member to explain the rationale for

the program and to seek their advice on methods to implement the program. Next,

the project co-directors drafted a profile format. Charge and clinical data were

extracted from the hospital’s medical record and patient billing systems into an

Excel spreadsheet for manipulation and report creation. The profiles used simple

averages (arithmetic mean) to compare the physician’s resource utilization and

outcomes (observed) against case-mix and severity-adjusted averages of all other

physicians (expected). Because outliers can obscure the relationships and create noise,

the organization removed all length of stay outliers from the database used to generate

the profiles. An outlier was defined as a discharge where the length of stay was

greater than the average length of stay plus three standard deviations for the particular

APRDRG; this is similar to the method used by the Medicare program. Extreme cases

(complexity of illness of 4) were also excluded from the profile because a high

complexity of illness is primarily determined by the interaction of multiple diseases.

Nearly all of the length of stay outliers had a complexity of illness of 4.
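
A minimal sketch of this profile calculation follows (Python with pandas); the DataFrame and column names are hypothetical stand-ins for the Excel extract described above, and the expected value is approximated by the severity-adjusted cell mean rather than the study's exact computation:

```python
import pandas as pd

# Assumed columns: physician_id, aprdrg, severity, length_of_stay, total_charge
discharges = pd.read_csv("discharges_1995.csv")

# Exclude extreme cases (complexity of illness = 4).
profiled = discharges[discharges["severity"] < 4].copy()

# Exclude length-of-stay outliers: stays beyond the APRDRG mean plus
# three standard deviations, similar to the Medicare method.
los_stats = profiled.groupby("aprdrg")["length_of_stay"].agg(["mean", "std"])
cutoff = profiled["aprdrg"].map(los_stats["mean"] + 3 * los_stats["std"])
profiled = profiled[profiled["length_of_stay"] <= cutoff]

# Observed: each physician's simple average within the APRDRG.
observed = profiled.groupby(["aprdrg", "physician_id"])["length_of_stay"].mean()

# Expected: a severity-adjusted average, approximated here as the mean of
# the APRDRG/severity cell, weighted by the physician's own case mix.
cell_mean = profiled.groupby(["aprdrg", "severity"])["length_of_stay"].transform("mean")
profiled = profiled.assign(expected_los=cell_mean)
expected = profiled.groupby(["aprdrg", "physician_id"])["expected_los"].mean()
```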

Numerous measures were considered for the profile by a small team of

administrators and key physician leaders. The goal was to cover the broad spectrum

of hospital services for which the attending physician was responsible. Several

iterations were proposed to key executives, elected physician leaders, medical

directors and other respected members of the medical staff. After demonstrating the

utility (format) and accuracy (content) of the report, the profile was approved by the

medical executive committee.

The final product contained graphical depictions of ancillary resource

utilization that were severity adjusted, as well as outcomes measures, such as rates of

infections, complications, readmissions, and death. The profile was disease or

procedure-specific, and detailed physician-specific resource utilization and clinical

outcomes. The specific measures that will be reviewed in this study are:

1. Average length of stay,

2. Average total charge,

3. Readmission rate, and

4. Surgical complications rate.

Pathway Development in the Study Organization

Each of the ten APRDRGs was treated as a separate performance improvement

project, and also identified by hospital administration as a strategic initiative. For each

APRDRG, a multi-disciplinary team was appointed to develop a clinical pathway.

Teams were given the charge of balancing the benefits of standardization with the

physicians’ prerogative to make decisions tailored to individual patient care needs.

Each team had at least two physicians: there was at least one “best practice” physician

and one physician leader (department chair, medical director, or officer of the medical

staff executive committee). The project co-directors served as facilitators for each

team.

The teams began the process by reviewing literature relevant to pathway

development, current scientific literature for the particular disease or procedure, and

pathways developed at other hospitals and by recognized professional societies. The

teams discovered that many existing pathways were complicated, often several pages

long, and detailed nearly every aspect of care. Physicians on the teams expressed

concern that long, complicated tools may be ignored and suggested that more

simpler tools be considered. They also believed that if the pathway was not “home

grown,” acceptance by their colleagues was unlikely.

The teams developed pathways based on the practices of those physicians with

low-cost, high-quality outcomes (best practice physicians). This required copious

review of medical records, and was time intensive. The tool was based on the care

provided to patients with a complexity of illness of 2 and 3 (approximately 85% of the

patients). Those with a complexity of 1 were excluded because variability in

treatment was much less pronounced than among those with a severity of 2 or 3. Similarly,

the cases with a complexity of 4 were excluded because the patients often present with

unique combinations of co-morbidities that would prohibit the development of a

clinical pathway.

Here too, there were several iterations of the pathway. The final product was

subject to review and approval by the appropriate medical staff committees. In the

end, each team developed a relatively simple pathway that highlighted only the most

critical aspects of care. It used a grid format: the columns were the days of treatment,

and the rows the key aspects of care (such as a medication, respiratory treatment, or

diagnostic test). An “X” was placed at the intersection, denoting the day that the key

aspect should occur. The final product was unique; the pathway was not like any of

those developed by other organizations.
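
As an illustration of the grid format only, the sketch below prints a hypothetical pathway; the aspects of care and day assignments are invented for this example and are not drawn from the study's pathways:

```python
# Rows are key aspects of care, columns are days of hospitalization;
# an "X" marks the day on which the aspect should occur.
pathway = {
    "IV antibiotic":          ["X", "X", ""],
    "Chest radiograph":       ["X", "",  ""],
    "Switch to oral therapy": ["",  "",  "X"],
}

header = ["Aspect of care", "Day 1", "Day 2", "Day 3"]
print(f"{header[0]:<24}" + "".join(f"{d:^8}" for d in header[1:]))
for aspect, days in pathway.items():
    print(f"{aspect:<24}" + "".join(f"{d:^8}" for d in days))
```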

Intervention Dissemination

Every physician who took care of any patient falling into one of the

treatment APRDRGs received a two-page report (by mail) profiling their practice

patterns. To facilitate physician use and understanding of the report, a one-page

“user’s guide” was also included. Accompanying each profile was the clinical

pathway based on the collaborative efforts of the multidisciplinary team for each

particular APRDRG. Because both the profile and pathway were specific to the

APRDRG, it was possible that a physician could receive several reports.

To ensure that the medical staff understood the program, a cover letter

signed by the chief executive officer and chief medical officer explained the

purpose of the program. Specifically, the letter commented, “If we are to continue

serving our community, we must use our resources as effectively and efficiently as

we can. If we do this, the hospital will reduce the cost of care, while improving

quality. We believe that development and implementation of such things as best

practice guidelines, clinical pathways, and suggestions you may see when

comparing your information to your peers, play an important role in dealing with

these challenges.”

The initial report covered a 12-month period; it was mailed in January 1996,

and covered calendar year 1995 (the pre-intervention period). The study co-

directors believed if regular dissemination of the reports communicated a relative

advantage (reduced costs and improved outcomes), acceptance would increase over

time. Thus, another set of reports was distributed six months later (sent out in July

1996, covering January-June 1996); again in October (covering January-September

1996), and again in January 1997 (covering January-December). This provided

physicians with the ability to regularly monitor their practice patterns, and

reinforced the intervention. It is important to note that the cover letter that

accompanied the July mailing reported that the program resulted in a cost

avoidance of one million dollars. A sample of the intervention packet can be found

in the appendix.

Overview of the Study Design

Selection of Diagnostic Groups

Consistent with the basic principles of quality management and statistical

process control, the project directors wanted to be able to demonstrate whether the

program was an effective method to change physician practice patterns. If the

program was successful, it would be expanded to cover a larger number of

diagnoses. Thus experimental and control groups were established.

In selecting the APRDRGs for the control and experimental groups, the

following conditions were established:

a) There were at least 50 cases in the baseline year (1995),

b) The mix represented a variety of specialties, e.g., cardiology, surgery,

pediatrics,

c) There were ten APRDRGs assigned to each group, and

d) To minimize halo or spillover effects, APRDRGs selected for the control

group were clinically different from the experimental group based on a

higher level grouping, known as Major Diagnostic Category (MDC).
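
A minimal sketch of how these selection conditions could be applied programmatically follows (Python with pandas); the summary table and column names are hypothetical, and in practice conditions (b) through (d) were applied through the project directors' judgment rather than by code:

```python
import pandas as pd

# Hypothetical APRDRG-level summary for the baseline year (1995).
# Assumed columns: aprdrg, mdc, cases_1995
summary = pd.read_csv("aprdrg_summary_1995.csv")

# Condition (a): at least 50 cases in the baseline year.
eligible = summary[summary["cases_1995"] >= 50]

# Conditions (c) and (d), roughly: ten high-volume APRDRGs form the
# experimental group, and ten of the next-highest-volume APRDRGs drawn
# from other Major Diagnostic Categories form the control group.
experimental = eligible.nlargest(10, "cases_1995")
remaining = eligible[~eligible["aprdrg"].isin(experimental["aprdrg"])]
control = (
    remaining[~remaining["mdc"].isin(experimental["mdc"])]
    .nlargest(10, "cases_1995")
)
```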

Experimental Group

Ten APRDRGs are included in the experimental group. To ensure

consistent comparison, only those physicians who provided care in both 1995 and

1996 were included in the study. This exclusion eliminated only 43 cases. In 1995,

there were 256 physicians in the experimental group caring for 3,944 patients. In

1996, these same 256 physicians provided care to 3,178 patients. The APRDRGs

selected for study and the corresponding MDC are listed in table 2-1.

Table 2-1. Experimental APRDRG Group


APRDRG   Description                               Major Diagnostic Category
14       Cerebrovascular Disorder, Excluding TIA   Nervous System
88       Chronic Obstructive Pulmonary Disease     Respiratory System
89       Simple Pneumonia & Pleurisy               Respiratory System
96       Bronchitis & Asthma                       Respiratory System
127      Heart Failure & Shock                     Circulatory System
209      Major Joint & Limb Procedure              Musculoskeletal System & Connective Tissue
358      Uterine & Adnexa Procedures               Female Reproductive System
370      Cesarean Delivery                         Pregnancy, Childbirth & Puerperium
372      Vaginal Delivery                          Pregnancy, Childbirth & Puerperium
757      Back & Neck Procedures                    Musculoskeletal System & Connective Tissue

Control Group

An equal number of APRDRGs with similar costs and lengths of stay define

the control group. The project directors were concerned with two competing issues:

halo effect and volume. To ensure sufficient volume, the APRDRGs selected for

the control group were those ranking in volume just below the experimental group.

However, to minimize possible spillover, the control group APRDRGs were

selected from different MDCs. Three high-volume APRDRGs were removed and

replaced with the next three from the list. Thus, the experimental group represented

the organization’s highest-volume APRDRGs, and the control group the next

highest in volume that were different in clinical nature. The control group

APRDRGs and the corresponding MDC are listed in table 2-2.

Table 2-2. Control APRDRG Group


APRDRG   Description                             Major Diagnostic Category
63       Ear, Nose, Mouth & Throat Procedures    Ear, Nose, Mouth & Throat
174      GI Hemorrhage & Perforation             Digestive System
182      Gastroenteritis & Abdominal Pain        Digestive System
188      Digestive System Diagnoses              Digestive System
277      Cellulitis                              Skin, Subcutaneous Tissue & Breast
296      Nutritional & Metabolic Disorders       Endocrine, Nutritional & Metabolic
320      Kidney & Urinary Tract Infections       Kidney & Urinary Tract
323      Urinary Stones                          Kidney & Urinary Tract
397      Coagulation Disorders                   Blood, Blood Forming Organs, Immunology
787      Laparoscopic Cholecystectomy            Hepatobiliary System & Pancreas

In 1995, there were 246 physicians in the control group caring for 1,377

patients. In 1996, there were 213 physicians in the control group caring for 1,018

patients. Between the two years, there were 248 different physicians. A limitation

of the study is the inability to fully control for possible halo or spillover effects

since 148 of the 246 physicians received reports on APRDRGs in the experimental

group, although they did not receive a report on those APRDRGs in the control

group. The size of these groups is summarized in table 2-3.

Table 2-3. Size of Control and Experimental Groups


                                   1995     1996
Control Group        Physicians    246      213
                     Patients      1,377    1,018
Experimental Group   Physicians    256      256
                     Patients      3,944    3,178

Comparability between Groups

An apparent weakness of the selection of the DRGs for each group is

volume; there are nearly three times as many patients in the experimental group

versus the control group. This difference could not be corrected; the experimental

group represented the organization’s highest volume DRGs and the control group

was the next highest in volume. Given the significant difference in volume, a power

analysis was conducted to ensure that the number of discharges in each group was

large enough to detect a significant difference at an alpha level of 0.05. The power

was determined to be 0.87; generally a power greater than 0.80 is considered to be

good, and the concern for sufficient numbers in each group was satisfied. Further,

the limitation is offset, given that there are nearly an equal number of physicians in

each group. Lastly, the baseline (1995) resource utilization statistics are

comparable, and the differences are not statistically significant: the average length of stay for patients

in the experimental group was 3.40 days versus 3.38 for the control group; average

total charges were $10,911 (experimental) and $11,937 (control).
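
The dissertation does not report the exact procedure or assumed effect size behind the 0.87 figure, so the following is only an illustrative sketch of a two-sample power calculation using a normal approximation, with invented inputs:

```python
from scipy.stats import norm

# 1996 discharge counts in the experimental and control groups.
n_exp, n_ctl = 3178, 1018

# Illustrative assumptions (not reported in the study): common standard
# deviation of length of stay and the difference to be detected.
sd = 3.0        # days
delta = 0.33    # days
alpha = 0.05

se = sd * (1 / n_exp + 1 / n_ctl) ** 0.5
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(abs(delta) / se - z_crit)
print(f"approximate power = {power:.2f}")
```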

Study Population

The population for the study included those attending physicians who

provided care to patients within the targeted APRDRGs during the study period,

January 1, 1996 through December 31, 1996. The prior calendar year served as the

baseline.

All attending physicians practicing in the hospital had the potential to be

included in the study, either as part of the experimental group or the control group.

Hospital-based physicians (pathologists, radiologists, emergency medicine

physicians, and anesthesiologists) are excluded from the study since they do not

typically prescribe patient care (i.e., write physician orders).

Summary

The purpose of this chapter was to provide an overview of the organization to better

understand the context of the intervention. Key elements of the intervention were

summarized to clarify elements of the study design. Other elements of the study

design, including methodology and data analysis, will be presented later in this study

once a theoretical framework is established.

Chapter III

Literature Review and Hypothesis Development

Introduction

Reducing inappropriate variation in resource utilization is a recognized

strategy to control costs and improve quality. Information sharing and information

feedback are common and relatively inexpensive interventions for changing

resource utilization (Avorn et al., 1992, Balas et al., 1996). One approach is to

share peer-comparison profiles with physicians, assist them with interpretation,

provide benchmarks, and disseminate clinical guidelines or pathways.

However, the results of before-and-after studies on profiling vary, despite

the frequent use of this information feedback technique (Balas et al., 1996).

Provider uncertainty and differing opinions about the value or efficacy of

procedures have been cited as the primary cause of variation in utilization.

Rogers (1995) and others detail several enabling factors that, when aligned

with the goal of encouraging innovation diffusion, can significantly increase the

chance for successful evaluation, adoption, diffusion, and sustainability. The

successful adoption of innovations often depends on the network of interpersonal

relationships within a system or an organization. The ability within an organization

to share ideas, observe trials of new ideas, and be influenced by the behavior and

beliefs of trusted individuals all influence successful adoption and diffusion.

This chapter reviews the literature to assess the effectiveness of clinical

pathway dissemination, benchmarking, and physician profiling as feedback and

information tools to change behavior. An exhaustive review of health administration

and medical journals was conducted; major topics included: physician profiling,

utilization review, quality management, clinical pathways, and benchmarking. This

chapter also reviews the institutional school of organization theory and diffusion of

innovation theories to provide a robust theoretical perspective. The literature and

research reviewed in this chapter were selected based on theoretical framework,

content, and methodology.

Changing Physician Behavior

The first tenet on which the practice of medicine is built is the sanctity of the

relationship between the patient and the physician, and the physician’s ethical duty

and professional commitment to act in the patient’s best interests. Physicians are

motivated by their formal education, clinical experiences, personal beliefs and

values, economic incentives and influences in their working environment.

Medical practice is characterized by a high degree of uncertainty; cause and

effect relationships are not always clear. This uncertainty arises because the

physician cannot be sure that he or she knows everything about the patient that is

relevant to their diagnosis and treatment. The physician’s preference for diagnostic

certainty may incline them to use more, not fewer, tests. Additionally, when faced

with a patient with a particular diagnosis, the physician often has several options to

choose from. Physicians must make implicit judgments based on their knowledge,

training, and past experience. These judgments vary widely and are the primary

source of practice variation.

Social Learning Theory

Social learning theory (Bandura 1969, 1971) emphasizes the importance of

observing and modeling the behaviors, attitudes, and reactions of others. Much

behavior is learned observationally through modeling: from observing others one

forms an idea of how new behaviors are performed, and on later occasions this coded

information serves as a guide for action.

Several special features of a physician's background make changing their

behavior a complex process. A physician's background, ethics, and beliefs strongly

shape their opinions and influence their practice behaviors. Individual physicians

clearly differ in their clinical practice styles as a function of their individual nature,

medical training, and clinical experience. Increased understanding of the etiology of

the disease, cause-and-effect relationships, and new technologies all serve to render a

physician’s initial training obsolete. Over the course of their careers, most physicians

will modify their practice styles many times.

Social learning theory can help us understand these modifications. The theory

encompasses attention, memory and motivation and provides a framework to explain

human behavior in terms of continuous reciprocal interaction between cognitive,

behavioral, and environmental influences. The component processes underlying

observational learning include attention, retention, motor reproduction, and motivation.

Attention includes modeled events, such as distinctiveness, affective valence,

complexity, prevalence, functional value; and observer characteristics, like sensory

capacities, arousal level, perceptual set, and past reinforcement. Retention refers to

symbolic coding, cognitive organization, symbolic rehearsal, and motor rehearsal.

Motor reproduction includes physical capabilities, self-observation of reproduction,

and accuracy of feedback. Motivation refers to external and self-reinforcement.

Physician behavior is constantly changed during medical school and residency

training through formal and informal exposure to guidelines by program leaders and

department chiefs who serve as thought leaders. Residents may cite or hear cited

position statements and/or guidelines by physician societies to entrench practice

behavior norms. Also during residency training, mentors, supervisors, and peers seek

to mold behavior. Repetitive assessment of values, attitudes, and skills is a part of this

initial training (Cassel et al., 1997; Holmboe and Hawkins, 1998).

After medical school, there are numerous educational opportunities vying

for the practicing physician’s attention. Physicians regularly receive

advertisements for continuing medical education courses, often combined with

vacation features. In addition, written, audio, or video education courses to

complete at home, by mail, or on the Internet are offered, in hopes of capturing

physicians' limited time for interventions to improve their performance.

Researchers have found evidence that physicians contemplating or adopting

behavior change attend conferences to validate and test the reliability of their

learning and behavior, either that of new information and innovations, or that of

what they are already doing in practice (Putnam and Campbell 1989). Passive

education strategies embodied in continuing medical education conferences have

been found to be ineffective (Davis, et al., 1995); research also suggests that printed

materials are ineffective (Freemantle, et al., 1999).

There are three principles from social learning theory that are relevant to this

study:

1. The highest level of observational learning is achieved by organizing and

rehearsing the modeled behavior symbolically and then enacting it overtly.

Coding modeled behavior into words, labels or images results in better

retention than simple observation.

2. Individuals are more likely to adopt a modeled behavior if it results in

outcomes they value.

3. Individuals are more likely to adopt a modeled behavior if the model is

similar to the observer and has admired status and the behavior has

functional value. Further, Rogers (1995) suggests that most individuals

evaluate an innovation, not on the basis of scientific research by experts,

but through the subjective evaluations of peers, and especially opinion

leaders, who have adopted the innovation.

Most professionals today complain of an overload of information, and

physicians are not immune from this overload. With a glut of new ideas flowing

across their desks, it may be useful to understand how physicians select new ideas

to experiment with. Rogers’ extensive work on innovation diffusion can provide a

framework.

Introduction to Innovation Diffusion

Diffusionism refers to the point of view in anthropology that explains social

change in a given society as a result of the introduction of innovations from another

society (Rogers, 1995). Since the 1960s, the diffusion model has been applied in a

wide variety of disciplines such as education, public health, communication,

marketing, geography, general sociology, and economics.

Diffusion of Innovation theory provides a framework for understanding the

process of social communication, adaptation, and change within a given social

system. The innovation-decision process refers to the cognitive process in which an

individual or group passes from initial awareness knowledge of the innovation, to

forming an interest in the innovation, to a decision to adopt or reject, to

implementation of or experimentation with the new idea, and finally to confirmation or

adoption of the innovation into lifestyle. An individual seeks information at various

stages in order to decrease uncertainty about an innovation's expected

consequences.

The diffusion model suggests that the most important single indicator of

effectiveness is the rate of adoption of an innovation. Rogers proposes five

attributes that determine the rate of adoption: relative advantage, compatibility,

complexity, trialability, and observability. There are four constructs in his

framework: characteristics of the innovation, communication, time, and social

system. Several of these attributes and concepts are discussed in more detail in the

remainder of this chapter.

Feedback

This section examines literature and research that evaluates the effectiveness of

using feedback techniques (clinical audits and physician profiling), and information

sharing (clinical guidelines and pathways) to change physician use of hospital

resources.

Clinical Audits

Audit and feedback, which stem from behavioral and learning theories, are

approaches that seek to modify physician performance through external stimuli.

Behavioral and affective theories (Andersen, 1974, 1995) including social cognitive

theory (Bandura 1969, 1971) and the health belief model (Rosenstock, 1974;

Maiman and Becker, 1974) suggest that an individual's behavior change is governed

by his or her goals and perceptions, which are in turn affected by internal and

external forces that may be malleable. These theories hypothesize that feedback of

performance or behavior norms, or compliance reminders can change physician

behavior. Balas et al (1996) concluded that peer feedback has a statistically

significant although small effect on utilization. Cochrane (1999) found that audit

and feedback can sometimes be effective, in particular with prescribing medications

and ordering diagnostic tests; however, effects appear to be small to moderate. He

cautions against relying solely on this approach, and argues that complementary

interventions can enhance effectiveness.

Historically, hospitals have used clinical audits as both quality assurance and

utilization review tools to characterize care through the systematic review of a series

of patient experiences. Information is usually obtained by reviewing medical

records for documentation of specific clinical practices. Such audits examine issues

of quality surrounding clinical management of minor acute problems or preventive

health practices, chronic disease management and the use of specialty consultations.

While clinical audits have been widely used to assess performance, the

evidence on their efficacy in modifying physician behavior is conflicting. To date,

there has been no formal synthesis of studies on the use of audits to affect clinical

performance. Many of the studies were not well controlled and most did not

include a strategy for randomizing the physicians who were given feedback.

Rather, most were pre and post evaluation designs, based on interventions

conducted at a single site or with a small number of practices.

A study at one hospital demonstrated significant improvement in the

utilization of preventive health processes when those processes were audited, and no

improvement in those processes that were not monitored (Holmboe et al, 1998).

Two delimited studies, examining the quality of pap smears, demonstrated that

performance of both residents and faculty physicians substantively improved after

they received feedback from clinical audits (Curtis, et al., 1993; Norton, et al.,

1997).

The Ambulatory Care Medical Audit Demonstration Project (Palmer and

Hargraves 1996) is the largest formal study of the use of audit information in the

United States. The project was designed as a randomized controlled clinical trial of

the use of quality-improvement techniques to improve clinical performance in

primary care. Although audit information was only one component in this

multidimensional intervention, the study demonstrated that it is possible to improve

quality through audit information feedback.

Other reviews suggested that auditing as feedback has only a small effect on

overall resource utilization (Balas, et al., 1996) but a significant effect on

prescribing drugs and ordering diagnostic tests (Thomson, et al., 1999). This may

be explained by the fact that drugs and tests frequently change, thus doctors are

predisposed to scanning for ideas. While researchers suggested combining

feedback with other approaches, they have not found evidence pointing to the

superiority of any complementary interventions (Thomson, et al., 1999). Additional

study is required on the effects of modifying important characteristics, such as the

content, source, timing, recipient, and format of feedback.

Physician Profiling

Over time, profiles can be utilized to communicate to physicians the relative

advantages of clinical pathway adoption. Drawing from Rogers (1995), physician

profiling is more likely to be accepted if it provides a relative advantage (i.e.,

improves quality); is consistent with existing values (e.g., not used for economic

credentialing); is easy to understand; and produces observable results. Further, given

the volume of managed care patients and the Medicare risk-sharing agreement,

reduction of costs is a clear relative advantage.

Studies of physician profiling as a tool for changing physician behavior

present mixed results. A meta-analysis of randomized trials of profiling found only

12 eligible trials; many of the studies under evaluation had notable design flaws

(Balas, et al., 1996). The analysis found that profiling had a statistically significant

positive effect on utilization. The randomized, controlled trial literature suggests

that profiling can produce a modest, but statistically significant effect on changing

physician behavior (Kim, et al., 1999).

Concerns for small sample sizes, absence of risk adjustment, and reliability of

data collection methods along with other methodological concerns (Balas, et al., 1996)

have resulted in mixed opinions regarding physician profiling as a tool for improving

quality of care, and may account for the mixed results seen in previous studies. In light of pressures

for healthcare reform and skepticism regarding physicians' decision making, it is

unlikely that methodological concerns will dissuade regulators and managers from

expanding scrutiny of physician practice (Massanari 1994).

A study by Hofer et al. (1999) examined the usefulness of physician

profiling for patients with diabetes, one of the most prevalent conditions in clinical

practice. The authors conducted a study of approximately 3,600 patients with type

II diabetes, under the care of 232 different physicians. They were unable to reliably

detect any true differences in care among the physicians, as measured by office visit

and hospitalization rates. These utilization rates are rather coarse proxies for

measuring care processes; unfortunately, the article did not describe the assessment

tool in sufficient detail to determine if more sophisticated measures were collected.

The authors highlighted the power problem with their study: a physician would need

to have over 100 diabetic patients for the statistical analysis to achieve an 80%

reliability rate; however, over 90% of primary care physicians in the study had less

than 60 patients with diabetes (Hofer, et al., 1999).

An experimental-control group study was conducted in a large community

hospital to determine the effect of a physician education program on hospital length

of stay and total patient charges (Johnson et al., 1993). The intervention consisted

of a one-time exposure of physicians to clinical and financial information about

their individual practice patterns for the treatment of pneumonia patients. The study

concluded that providing physicians with specific information about their practice

behavior resulted in decreased charges and length of stay. The improvement in

resource utilization was observed for two years following the provision of practice

specific data to physicians in the experimental group.

Johnson and Martin (1996) concluded that physician profiles are effective in

reducing hospital resource consumption in elective total hip replacement. Over a

seven-week period, orthopedic surgeons in the intervention group were given

graphical charts profiling their specific length of stay and average total charges and

that of their peers. Length of stay declined from 13.7 days to 9.9 days; charges

were reduced from $22,103 to $18,607; and the variance for each dropped by one-

half or more.

Benchmarking

In 1984, Winickoff, et al., published their study investigating physician

compliance with colorectal cancer screening standards. The standard required a

digital examination and occult blood stool test at annual check-ups for patients aged

40 and older. Three intervention strategies to improve compliance were

implemented during a three and one-half year period: educational meetings,

retrospective feedback of group compliance rates, and retrospective feedback of

individual compliance rates compared to peers. During the first six-month period,

physicians receiving feedback improved their compliance rates from 66.0% to 79.9%. Behavior changes were found to

continue up to twelve months after the intervention.

A study on information-sharing as a tool for modifying physician practice

was conducted by Marton, Tul and Sox (1985) which compared two interventions

by assigning 56 physicians into four groups: a control group; a feedback group,

which received information about their use of laboratory tests; a manual group,

which received an educational manual addressing cost-effective laboratory

utilization; and a manual-plus-feedback group, which received both interventions.

After the introduction of the interventions, physician test use was monitored for

seven and one-half months. The study compared mean laboratory charges and

mean number of tests ordered per patient visit per physician for each of the study

groups both before and after the intervention. The study concluded that these

simple techniques could modify physician use of the laboratory, but did not suggest

that one intervention was superior to another.

Berwick and Coltin (1986) conducted a study of physician feedback in a

health maintenance organization. In a crossover design controlled clinical trial,

three interventions were studied on the use of thirteen common blood tests among

thirty-five internists within three ambulatory care centers. Overall use fell by 14.2%

in a 16-week period during which physicians received confidential feedback on

their individual rates of use compared with peers (cost feedback). Eleven of 12 tests

showed some decrease. Similar feedback on rates of abnormal test results (yield

feedback) and a program of test-specific education failed to show a consistent

effect. Variability in rates of test use among physicians, as measured by the

coefficient of variation, fell by 8.3% with cost feedback, by 1.3% with yield

feedback, and by 2.3% with education, but these changes were inconsistent across

tests. This may suggest that either the tests or the diagnosis have characteristics that

make the determination of appropriate use more difficult.

In 1989, Pugh, et al., published their study of a controlled trial to determine

the effect of daily feedback about inpatient charges on physician knowledge and

behavior. The study examined two medical wards in an academic medical center to

determine the effect of providing daily charge feedback information on charges.

There was a significant reduction in mean total charges (17%), length of stay (18%),

room charges (18%) and diagnostic testing (20%) in the sub group. The authors

concluded that charge feedback alone is effective in decreasing resource utilization

in a teaching hospital.

Tierney, Miller and McDonald (1990) studied the effect of informing

physicians of the charges for outpatient diagnostics in a primary medical care

practice. All physicians in the study ordered tests from computer workstations. For

the intervention group (half of the physicians), charges for the test being ordered

and total charges for tests for that patient were displayed on the computer. The

control group did not receive messages about charges. The authors found that the

intervention was effective in significantly reducing the number and cost of tests

ordered; however, they noted that the effects did not persist after the intervention

was discontinued.

Frazier, et al. (1991) conducted a prospective controlled trial in an internal

medicine teaching clinic to determine whether an educational program using a drug

cost manual could assist physicians in reducing their patients’ out-of-pocket

expenses for prescription medications. Thirty-one interns received a manual of drug

prices annotated with prescribing advice, two feedback reports, and a weekly

prescribing reminder. The control group of twenty interns concurrently participated

in a manual-based cholesterol management education program. In addition,

feedback reports were generated from the carbon copies of the prescription written

by physicians, and were only distributed to the intervention group. Each report

contained the physician’s own data with averages for all physicians in the

intervention group for comparison. It was found that the intervention group

physicians prescribed less expensive drugs within certain drug groups.

Berkey (1994) examined a collaborative benchmarking approach developed by

SunHealth Alliance, in which more than 120 hospitals participated in 15 projects. One

clinical project, involving four hospitals, was focused on reducing the length of stay

and mortality rates for pneumonia patients. Each hospital formed internal task forces,

who reviewed comparative data, analyzed their hospitals' care processes, determined

opportunities for improvement, and chose best practices for developing a clinical

pathway.

Similarly, such sharing of data among hospitals hastened the evolution of

continuous improvement at Voluntary Hospitals of America/Pennsylvania to a focus

on learning from the best. Banaszak (1993) examined two DRGs, appendectomies and

cesarean section deliveries, and found that comparative outcome data showed

significant variation. A study of the benchmarked hospitals showed characteristics,

specific to those institutions, which resulted in reduced resource consumption and

positive clinical outcomes. This quantification of best practices was a catalyst for the

organization to implement a clinical benchmarking project, with the goal to

standardize routine care, reduce variation, and improve financial performance.

Communication

Concepts from social learning, innovation, social influence, and power

theories suggest that participatory guidance, where physicians are given the

opportunity to develop norms and strategies for change, will lead to change.

Rogers (1995) defines communication as the process by which participants create

and share information with one another in order to reach a mutual understanding. A

communication channel is the means by which messages get from one individual to

another. Thus, a clinical pathway can also be viewed as a communication vehicle or

channel.

Rogers (1995) argues that mass media channels are more effective in

creating knowledge of innovations, whereas interpersonal channels are more

effective in forming and changing attitudes toward a new idea, and thus in

influencing the decision to adopt or reject a new idea. It seems reasonable to

suggest that the organization’s dissemination of profiles and pathways can be

construed as a mass media channel.

Clinical Pathways as Communication

Pathways are intended to change behavior by providing definitive

information on best practices from authoritative sources to well-trained, interested,

logical practitioners. Drawing from Rogers (1995), a clinical pathway is more

likely to be accepted if it provides a relative advantage, is consistent with existing

values, is easy to understand, and produces clear results. Relative advantages of

clinical pathways might include improved efficiency, such as decreased length of

stay, and improved effectiveness, such as better clinical outcomes.

Weingarten, et al. (1994) evaluated the effects of providing physicians a

practice guideline that recommends consideration of early hospital discharge for

low-risk patients with chest pain. During six intervention periods, physicians

received a structured message posted on patients' charts the day after admission that

conveyed risk information and the guideline recommendation. Use of the practice

guideline recommendation with concurrent reminders was associated with a 50% to

69% increase in guideline compliance and a decrease in length of stay from 3.54 to

2.63 days. The intervention was associated with a total (direct and indirect) cost

reduction of $1,397 per patient.

At another institution, uncertainty regarding the optimal evaluation of

suspected deep vein thrombosis resulted in wide variations in practice (Pearson et al,

1995). To address variation in practice while maximizing the efficiency and quality of

care, the institution developed a critical pathway guideline for the emergency

department evaluation of patients suspected of having the condition. A

multidisciplinary team reviewed current practice, benchmarked it against other

institutions, and developed the pathway. In its final form, the pathway balanced the

benefits of standardization with the prerogatives of physicians to make decisions

tailored to individual patients.

Computerized clinical outcomes measurement systems are often routinely

available to help physicians and administrators assess resource utilization as well as

improve the quality of care. At another institution (Krivenko and Chodroff, 1994), a

physician subcommittee focused on the best outcomes rather than the poorest to

determine the variations in processes of care that might have led to either superior or

inferior clinical outcomes. They learned that each hospital must develop its own

approach to common clinical conditions. These approaches become standardized in

the form of institutional attitudes, beliefs, policies, and procedures – physician

involvement at all stages was critical (Krivenko and Chodroff, 1994).

In reviewing the literature, several cardiac surgery success stories were found.

Andersson (1993) found success with coronary artery bypass graft (CABG) patients at

Scripps Memorial Hospital. They developed clinical pathways for four DRGs in

cardiovascular surgery in order to stabilize those clinical processes, collect data on

them, and make improvements. The result was a 20 to 30 percent decrease in length of

stay and a similar reduction in charges. Barnes, et al. (1994) reviewed the clinical

processes and outcomes at Borgess Medical Center, where they analyzed and

streamlined the processes of caring for a CABG patient. The team used comparative

data, specialty and peer review organization guidelines, medical records, charge data,

and relevant literature to drive the process. One year after the pathways were

implemented, average total charges per patient decreased from $35,700 to $32,700;

length of stay decreased from 11.1 to 9.7 days. At the Medical Center Hospital of

Vermont, the combination of pathways and algorithms for CABG patients resulted in a

reduction of 2.5 days for total length of stay (including 1 day in intensive care), for a

mean cost savings of $3,500. None of these studies included the pathway in the

publication, so it is not possible to determine the similarities or differences or to

suggest any relationship between the design and the effectiveness.

Bernard, et al. (1995) examined the use of a feedback system to direct and

monitor physician and hospital practice on general medicine services of an 880-bed

university hospital. For the over 2,000 admissions on both a control service and the

intervention service, the mean length of stay decreased when compared with historic

norms. There also was a trend for the intervention service to have fewer LOS outliers

than expected. Ancillary service use decreased by 17% on both control and

intervention services. Other internal medicine services experienced a 29% increase in

ancillary service use. A major weakness of the study was that it did not incorporate

severity measures into the analysis. Overall, the study suggests that both direct and

indirect interventions can produce temporary change.

Kramolowsky, et al. (1995) determined that physician awareness of hospital

costs for radical retropubic prostatectomy impacted physician practice. They reviewed

256 consecutive prostatectomies performed by fourteen urologists during a four-year

period at a community hospital. Following two years of data collection, the physicians

were provided cost information and factors that may decrease charges. Significant

decreases were noted for charges, length of stay, need for intensive care, and operating

time.

Faced with the closing of its service, the Orthopaedics Department at Mt.

Sinai Medical Center (New York), developed clinical pathways to ensure appropriate

utilization. The service realized a 40% savings in materials, and reduced length of stay

by five to six days (Ferdinand 1994). Bristol Regional Medical Center, facing the

challenge of managed care organizations, instituted this process and achieved

significant cost savings, largely because of the working partnership between the

administration and its medical staff. In simple pneumonia, major benchmark or "best

practice" variations were incorporated into new clinical pathways, leading to decreased

resource use (Clare et al., 1995).

Bero (1998) found disparate results for any single tool and concluded that

the use of multiple tools may be more effective. However, the literature search did

not find any studies that evaluated the combined use of physician profiles and

pathway dissemination.

Profiles and Pathways as Innovation

Rogers (1995) defines an innovation as an idea, practice, or object that is

perceived as new. The term innovation does not necessarily refer to the creation of

new ideas or products but to the introduction of previously unknown ways of

providing care and services that may be an improvement over existing methods. In

the study organization, neither physician profiles nor clinical pathways were

previously employed, and thus, it can be reasonably argued, they represent an

innovation.

Based on learning theory and innovation diffusion theory, it seems

reasonable to expect physicians exposed to profiling or pathways to behave

differently (i.e., modify their practice patterns) than those who have not been

exposed. However, efforts to implement guidelines to change individual physician

behavior have frequently failed.

Research suggests that simple provision of information, even in the form of

guidelines, is insufficient. A meta-analysis performed by Bero (1998) suggests that

passive dissemination of information and small-dose education are generally

ineffective; but that guideline dissemination is effective. Grimshaw and Russell

(1993) concluded that explicit guidelines improve clinical practice when introduced in

the context of rigorous evaluations; however, the magnitude of the effect varies

considerably. In general, physicians do not like to be told how to practice medicine.

The likelihood of adopting pathways can be influenced by several factors: the

scientific rigor of the guidelines used to develop the pathways, characteristics of the

health-care professional (e.g., specialty and number of years in practice),

characteristics of the practice setting (e.g., association with academic medical center

or urban vs. rural location), incentives, regulation, and patient factors (Taylor-

Vaisey, 1997).

There is little existing research that examines the cultural dimension.

According to Donabedian (1980), the cultural dimension refers to the underlying

beliefs, values, norms, and behaviors of the organization that support continuous

quality improvement (CQI) efforts. Rogers (1995) believes another cultural factor

that influences acceptance is compatibility, or the degree to which the innovation is

consistent with existing values and past experiences of adopters. He cautions that

an idea that is incompatible with existing values and beliefs may not be adopted as

rapidly as one that is compatible. The diffusion process can be delayed. The

adoption of an incompatible innovation often requires the adoption of a new value

system before accepting the innovation.

As suggested by learning theory, clinical pathways can be seen as

representing a coded version of modeled behavior. An extension of diffusion theory

suggests that for pathways to be accepted then, they must be consistent with existing

values and balance the benefits of standardization with the prerogatives of

physicians to make decisions tailored to individual patient care needs. In the study

organization, the pathways were developed by physicians practicing within the

hospital and not imported from some other organization, so it seems reasonable to

expect that they are consistent with existing values and more likely to be accepted.

Acceptance implies that there are more efficient and effective practices to

treat patients within specified illness or diagnostic groups. This research expects

that there may be varying levels of acceptance occurring among the participants

within their voluntary attitudes and behaviors. Kerr and Hiltz (1982) and Hiltz and

Johnson (1989) found that usage is a measure of acceptance, but usage alone is not a

sufficient indicator of success. Operationally, this study defines acceptance as an

observable decline in resource utilization. Therefore,

Hypothesis 1a. Resource utilization patterns will decline when physicians are

provided profiles and clinical pathways.

Hypothesis 1b. Resource utilization patterns will not decline when physicians

are not provided profiles and clinical pathways.

Relative Advantage

Most healthcare organizations have been using critical pathways for some time

in an attempt to standardize practice and improve clinical outcomes (Coffey, et al.,

1995). Proponents of guidelines and pathways argue that the use of these tools

contributes to enhanced outcomes. This section reports on the clinical outcomes for

the literature and research evaluating the effectiveness of using feedback techniques

(clinical audits and physician profiling), and information sharing (clinical guidelines

and pathways) that was examined in the previous section. As with resource

utilization, studies examining the change in clinical outcomes have illustrated mixed

results.

Profiling and Clinical Outcomes

Some studies reported a favorable change in outcomes. The Bernard, et al. (1995)

examination of the use of a feedback system to direct and monitor physician and

hospital practice on general medicine services found that the intervention service

experienced significantly fewer preventable deaths (21% versus 3%, p=0.04). A major

weakness of the study was that it did not incorporate severity measures into the

analysis. The Kramolowsky, et al. (1995) study of physician awareness of hospital costs

for radical retropubic prostatectomy demonstrated a significant decrease in the

complication rate.

In a few studies there was no change in outcomes. The Balas, et al. (1996) meta-

analysis of 12 eligible randomized trials of profiling found there was no significant

improvement in clinical outcomes. The Pugh, et al. (1989) study of a controlled

trial to determine the effect of daily feedback about inpatient charges on physician

knowledge and behavior found no change in either mortality or readmission rates

within 30 days. This is not surprising, considering that the focus of the profile was

financial, not clinical. The study on the treatment of pneumonia patients in which

physicians were given clinical and financial information about their individual

practice patterns (Johnson et al., 1993) reported “no compromise” in outcomes as

measured by mortality, readmission rates, and infections or other complications.

Bernard, et al. (1995) also reported no differences in readmission, mortality rates, and

patient satisfaction.

Pathways and Clinical Outcomes

The Barnes, et al. (1994) review of implementing CABG pathways found no

change in outcomes; the mortality rate held constant at 2.7%. Conversely, at the

Medical Center Hospital of Vermont, where pathways and algorithms were combined for

CABG patients, readmission and mortality rates decreased (Schriefer 1994). The chest

pain guideline study (Weingarten, et al. 1994) reported no significant difference in

the complication rate in the post-intervention period. Similarly, the pneumonia

guideline study at Bristol Regional Medical Center reported no change in the quality of

care, as measured by readmission rates (Clare et al., 1995). Again, none of these

studies included the pathway in the publication, so it is not possible to determine the

similarities or differences or to suggest any relationship between the design and the

effectiveness.

Relative Advantage

Rogers (1995) defines relative advantage as the degree to which an

innovation is perceived as better than the idea it supersedes. This advantage may be

measured in economic terms, social prestige, convenience, and satisfaction. This

principle of diffusion theory suggests that individuals are more likely to adopt a

modeled behavior if it results in outcomes they value. Thus, it seems reasonable to

assume that physicians will accept and implement clinical pathways that can

improve patient outcomes; therefore,

Hypothesis 2. There will be an improvement in clinical outcomes when

physicians are provided profiles (that include clinical outcomes measures) and

clinical pathways.

Time and the Rate of Adoption

The time dimension is involved in diffusion in three ways: the innovation-

decision process, innovativeness of the adopters, and the rate of adoption. Rogers

(1995) defines the rate of adoption as the relative speed with which the innovation is

adopted by the social system. The new idea or innovation typically moves slowly

through the social system when it is first introduced. Then, as the number of

adopters increases, the diffusion of the new idea moves at a faster rate. The rate of

adoption is usually measured as the number of members of the system that adopt the

innovation in a given time period.

Some innovations spread faster than others. The explanation for this

phenomenon lies in the complex interaction of characteristics of the idea itself and

the presence of various enabling factors in the environment. Identifying innovations

for testing through examination of the characteristics of the innovation itself,

coupled with the support and presence of various enablers, would create greater

opportunity for successful deployment and diffusion. Adoption is often the result of

increasing network pressures from peers, and intervention strategies that help

potential adopters overcome barriers, therefore it seems reasonable to expect

physician acceptance and use of the tools to increase over time.

Rogers (1995) defines observability as the visibility of the results; the easier

it is for individuals to see the results of an innovation, the more likely they are to adopt it. Visibility

stimulates peer discussion of a new idea; i.e., friends of an adopter often request

information about it. Over time, profiles that benchmark performance can be

utilized to communicate relative advantages of clinical pathway adoption to

physicians. Further, since the innovation was disseminated several times in one

year, and the Best Practices program was a regular agenda item for several medical

staff department meetings, it seems likely that observability was favorably enhanced

over time. Thus, it seems reasonable to suggest that, over time, a physician reluctant

to adopt the innovation may become more accepting if he sees that the data for his

peers has produced favorable results (e.g. a decline in length of stay and an

improvement in clinical outcomes); therefore,

Hypothesis 3. Resource utilization patterns will decline over time for

physicians who are provided profiles and clinical pathways.

Physician Leaders

This section examines key ideas from social identity and organizational

citizenship behavior theories to examine the performance of physician leaders in the

study.

Social Identity

Social identity theory (Tajfel and Turner, 1979) involves three central ideas

relevant to this study:

1. Categorization: The assignment of objects to categories in order to better understand the social

environment. It permits the definition of appropriate behavior

by reference to the norms of the group.

2. Social identification: An individual’s belief that he belongs to a

defined group. Group membership is not abstract to the individual; it is

a real, true and vital part of the person.

3. Social comparison (Festinger 1954): A positive self-concept is a part of

normal psychological functioning. An extension of this concept is that

individuals evaluate themselves by comparing themselves to other group

members. Usually, people compare their group with other groups in

ways that reflect positively on themselves.

Organizational Citizenship Behavior

Organizations have been defined as systems of formal positions and roles

(Blau & Scott, 1962) in which participants conform to the expectations of their

positions. The term “organizational citizenship behavior” has been used to describe

organizationally beneficial behavior of workers that is not prescribed but occurs freely

to help others achieve the task at hand (Bateman & Organ, 1983). This willingness of

participants to exert effort beyond their formal obligations has been recognized as an

essential component of effective organizational performance. The practice of

medicine is a complex activity that requires professional judgments and cannot fully

be prescribed by clinical pathways. Thus, organizational citizenship behavior theory

can provide useful insights in understanding physician acceptance and use of clinical

pathways.

Generalized compliance is a basic dimension of organizational citizenship

behavior (Smith, Organ, and Near, 1983) relevant to this study. Generalized

compliance refers to the impersonal conscientiousness of doing things “right and

proper” for their own sake. In defining organizational citizenship behavior, Organ

(1988) highlights some specific categories of discretionary behavior and explains how

each helps to improve efficiency in the organization; two are relevant to this study:

1. Conscientiousness (e.g., efficient use of time and going beyond minimum

expectations) enhances the efficiency of both an individual and the group.

2. Civic Virtue (e.g., serving on committees and voluntarily attending

functions) promotes the interests of the organization.

Borman and Motowidlo (1993) have proposed that individuals contribute to

organizational effectiveness by doing things that are not necessarily their main task

functions but are important because they shape the organizational and social context

that supports task activities. In general, citizenship behaviors contribute to

organizational performance because these behaviors provide an effective means of

managing the interdependencies between members of a work unit and, as a result,

increase the collective outcomes achieved (Organ, 1988; 1990, 1997; Smith, Organ, &

Near, 1983). Organizational citizenship also reduces the need for an organization to

commit scarce resources to maintenance functions, thus freeing up more resources for

goal-related activities.

In the study hospital, many of the highest-volume admitters belong to the

same medical group. This medical group has also partnered with the hospital in a

managed care risk-sharing agreement to provide care to a large population of

Medicare recipients. The group has a well-developed utilization management

system and for several years, the physician leaders of the group have worked closely

with hospital administration. Based on learning theory and innovation diffusion

theory, it seems reasonable to expect physicians that are more culturally integrated

in the organization to adopt clinical pathways and respond to profiles more rapidly

than those physicians who are not so ingrained; therefore:

Hypothesis 4. Resource utilization patterns for those physicians culturally

integrated in the organizational culture will decline more than for those

physicians who are less culturally integrated when both physicians are

provided profiles and clinical pathways.

Further, drawing from these perspectives, it seems reasonable to categorize the

physicians who agreed to and helped support the program as physician leaders. As

described earlier, these physicians were closely involved in developing and approving

the clinical pathways and physician profiles. Further, as leaders, they are more likely

to identify with the organization and its priorities. The theories suggest that the

physician leaders would feel some obligation to demonstrate good citizenship, and “set

the example” for their colleagues, by using profile information and following clinical

pathway recommendations. Therefore,

Hypothesis 5. Resource utilization patterns will be lower for physician leaders

than for physicians who are not leaders.

Complexity

Rogers (1995) defines complexity as the level of difficulty that may be

encountered when trying to understand and use the innovation. Rogers believes that

complexity influences adoption; innovations that are difficult to understand or use

will not be as readily accepted as those perceived as easy to understand or use.

Similarly, adoption of a difficult innovation may require the adopters to develop

new skills and understandings.

Rogers believes that ideas that can be tested on a small scale will generally

be adopted more quickly. He refers to this as trialability, or the degree to which an

innovation may be experimented with on a limited basis. An innovation that can be

tested represents less uncertainty to the individual who is considering it for

adoption, who can learn by doing. It could be argued that innovations that are less

complex are more likely to be tested.

As described in chapter two, the organization designed clinical pathways

that were less complex than those they reviewed from other organizations.

However, some pathways were more (or less) complex than others; therefore,

Hypothesis 6. The less complex the clinical pathway, the greater the chance

of accepting and using the pathway, and thus a greater change (reduction) in

resource utilization patterns.

Summary

The literature indicates that a primary reason for profiling and pathway

dissemination is to assist with continuous quality improvement efforts and reduce

costs associated with unnecessary practice variation. The review demonstrates that

there is varied evidence on the effectiveness of both tools and suggests the need for

additional study. Profiling and pathways will have limited utility if physician

behavior does not change as a result. Physicians wish to compare favorably with

their peers, thus showing them how they rank against their colleagues and providing

information about the best practice may be an effective change strategy.

The successful adoption of innovations often depends on the network of

interpersonal relationships within a system or an organization. The ability within an

organization to share ideas, observe trials of new ideas, and be influenced by the

behavior and beliefs of trusted individuals all influence successful adoption and

diffusion. There are several dimensions and attributes that may have a great

influence on adoption: communication, relative advantage, time, social systems,

opinion leaders, and complexity.

Chapter IV

Methodology

Introduction

This chapter first reviews the methods used by other researchers to assess

the effectiveness of feedback and information sharing interventions. Then, it

summarizes the methods and procedures used to collect, tabulate, and analyze the

research data for this study.

Review of Previous Methods

This section summarizes the methods used in the studies that were reviewed

in the previous chapter. Much of the research in the affective domain used

controlled trials to determine efficacy. However, there is inconsistency in methods

to evaluate the use of profiles and pathways.

The Winickoff, et al. (1984) study investigating physician compliance with

colorectal cancer screening standards employed a pre and post T-test design to

evaluate the efficacy of educational meetings and retrospective feedback of the group

compliance rate. The intervention that used retrospective feedback of individual

compliance rates compared to peers was evaluated with the chi-square test to compare

performance between groups across periods. Similarly, the 1989 Pugh, et al., controlled trial

using daily feedback about inpatient charges employed T-tests to test for significant

change in physician behavior.

The Marton, Tul and Sox (1985) study on information-sharing as a tool for

modifying physician test-ordering practices utilized two-way analysis of variance

tests to compare group means, and comparisons using both the Kruskal-Wallis test

on ranks and Student T-test were also used. Paired comparisons were also made for

each group before and after the intervention.

The Berwick and Coltin (1986) study of physician feedback in a health

maintenance organization employed a crossover design controlled clinical trial.

Three interventions were studied on the use of thirteen common blood tests among

thirty-five internists within three ambulatory care centers. The blood tests were

divided into three groups that were balanced for type of test and utilization rates.

Three interventions were developed for use in a modified Greco-Latin square design

with crossover of interventions, test groups, and ambulatory care center. In the

Test-Specific Education intervention, two consecutive weekly departmental

meetings were devoted to the discussion of appropriate use of tests in each of the

three blood test groups. In the Peer Comparison Feedback on Cost of Test Use

intervention, physicians received individual reports comparing their specific

utilization rates against their colleagues for each of the tests within a particular test

group. In the Peer Comparison Feedback on Yield on Test intervention, individual

physicians received reports that ranked their abnormal test result rates for each test

within the particular test group. In the crossover design, each test group was

subjected to each of the three interventions in a different center. Rates of test use,

as measured by tests per 1000 encounters, and variation, as measured by coefficient

of variation, among physicians within centers were measured during baseline and

intervention periods. The effects of the intervention were measured by studying

rate changes during intervention periods, compared with preceding nonintervention

periods. Intervention effects on the change were analyzed using analysis of

variance and Kruskal-Wallis techniques.

Tierney, Miller and McDonald (1990) studied the effect of informing

physicians of the charges for outpatient diagnostics in a primary medical care

practice. All 121 physicians in the study ordered tests from computer workstations.

For the intervention group (half of the physicians), charges for the test being

ordered and total charges for tests for that patient were displayed on the computer.

The control group did not receive messages about charges. A questionnaire to

determine the physicians’ knowledge of test charges was administered once prior to

the intervention and six months after the intervention. For each physician, the mean

charges of tests ordered and the mean charges for tests per patient visit were

calculated for each study period. When comparing the mean values for the

intervention and control groups in the pre-intervention period, a weighted analysis

of variance was used; for comparisons within the intervention period, a weighted

analysis of covariance, with each physician’s pre-intervention mean entered as a

covariate, was used. To determine the accuracy of the physician’s estimate of the

test charges, the absolute value of the percent deviation of each physician’s estimate

for each test was calculated. This score was used to compare the knowledge of test

charges in the intervention and control groups at baseline, and the degree of

improvement after the intervention.

The experimental-control group study (Johnson et al., 1993) examining the

effect of a physician education program on hospital length of stay and total patient

charges consisted of a one-time exposure of physicians to clinical and financial

information about their individual practice patterns for the treatment of pneumonia

patients. Analysis of variance and T-tests were used to compare the intervention

and control groups and to test for significant differences.

Weingarten, et al. (1994) evaluated the effects of providing physicians a

practice guideline recommending consideration of early hospital discharge for low-

risk patients with chest pain. During six intervention periods, physicians received a

structured message posted on patients' charts the day after admission that conveyed

risk information and the guideline recommendation. Because patients usually

receive care from many physicians, this study did not use individual physicians as

the unit of analysis, but rather the aggregate practice for this specific diagnosis.

Complication rates were compared using a chi-square test or Fisher exact test.

Continuous data for the study groups were compared using the Student T-test, the

Wilcoxon rank-sum test, or both when the data were notably distributed in a non-

normal pattern. An adjusted analysis comparing the two study groups with respect

to total costs and length of stay was done using a stepwise regression procedure.

In the Johnson and Martin (1996) study, orthopedic surgeons were presented

with verbal and written physician-specific materials over a seven-week period. X-

bar and R charts (control charts) were constructed to monitor effects of the

educational program on overall resource utilization, as measured by length of stay

and average total charges. These charts were shared with each surgeon in the

intervention group along with data profiling their specific practice and that of their

peers. Two sample T-tests were used to test for statistical differences in mean

length of stay and total charges. The studies by Johnson and Martin (1996),

Weingarten, et al. (1994), and Johnson et al. (1993) relate most closely to this

empirical study.

Statistical Design

This study uses a quasi-experimental design since physicians were not

randomly assigned to either the experimental or control groups. To determine if

there were any changes in practice patterns attributable to the profile intervention,

the data on select profile measures were subjected to statistical analysis. For

purposes of the study, improvement was defined as:

1. A decrease in the average length of stay from 1995 to 1996,

2. A decrease in the average total charge from 1995 to 1996 (note: there

was no change in the hospital’s pricing structure between 1994 and 1996;

thus, adjusting for such changes was not required),

3. A decrease in the readmission rate, and

4. A decrease in the complication rate.

Unit of Analysis

To provide for a robust analysis of the data, there are several levels of

analysis: at the physician level, the DRG level, and over time. Similar to the

Weingarten (1994) study, DRGs are used as one unit of analysis since the profile

reports for each physician were APRDRG-specific, and every physician providing

care to a patient within the target APRDRGs received a report – even if it was only

one patient. This was a conscious decision, given the difficulties of exclusion. For

example, if one physician received the intervention on a specific APRDRG (e.g.,

Pneumonia), and another did not, it would be difficult to prevent the sharing of the

profile or guideline between physicians. Controlling for spillover was problematic.

To best limit spillover, the study would need to be conducted at two different sites

where communication does not occur on a routine basis, or perhaps segment the

physicians into groups that do not routinely communicate in a professional context.

Given the small size of the organization, and the fact that many of the high-volume

physicians belonged to the same medical group, these options were not feasible.

Data Elements

Patient-level data elements used in this study are:

1. Record Number: Unique number used to identify each inpatient admission.

2. Primary Physician Identification Number (MDID): The identification

number for the physician primarily responsible for the majority of the

patient’s care (usually the attending physician or primary surgeon).

3. All-Payer Refined Diagnosis Related Group (APRDRG): Diagnostic

category (illness or operative procedure) for which the patient is being

treated.

4. Experimental DRG (EXP): A value of one is assigned when the patient is in

an APRDRG that is in the experimental group.

5. Control DRG (CON): A value of one is assigned when the patient is in an

APRDRG that is in the control group.

6. Complexity of Illness (COI): Severity of illness scale discussed previously.

7. Complexity of Pathway (PWComp): A measure of the complexity of the

pathway. The number, ranging from 20 to 50, represents the number of

critical elements (treatments, medications, diagnostic tests) specified by the

pathway.

8. Length of Stay (LOS): Number of days the patient was in the hospital.

9. Total Charges (TOTCHG): Charge data, obtained from the hospital's

financial database, were based on actual prices charged for services rendered

during the hospital stay. Charges for all services were aggregated into total

charges.

10. Readmission (READM): A discrete variable, where a value of 1 is assigned

if the patient is readmitted to the hospital for the same diagnosis within

thirty days of the discharge date.

11. Complication (COMP): A discrete variable, where a value of 1 is assigned if

the patient’s medical record data identifies a complication that occurred

while the patient was in the hospital.

12. Physician Leader (MDLEAD): A value of 1 is assigned if the physician

primarily responsible for the majority of the patient’s care: a) held

leadership roles (elected medical committee member, elected department

chair, appointed medical director) during the study, b) actively participated

in the profile or pathway development process, or c) was identified as

influential by the organization. There were 16 physicians identified as

leaders.

13. IPA (IPA): A value of 1 is assigned if the physician was a member of the

IPA medical group closely affiliated with the hospital.

Assumptions

The basic assumptions regarding this study are:

1. Hospital billing data (patient charges) are a reliable source of

information to estimate physicians’ resource utilization of services and

to monitor practice patterns.

2. Physicians understand the use of patient charges, which is the statistic

used to measure their utilization of services on the profile.

3. Physicians understand the use of clinical pathways.

4. Physicians understand the use of severity adjustments, which were

applied to their profiles to adjust for the severity of illness relative to

their caseload.

Data Preparation

The APRDRG data were examined at three levels of aggregation:

1. Ten targeted APRDRGs where physicians received a profile and

pathway (experimental group),

2. Ten targeted APRDRGs where physicians received a profile and

pathway (experimental group), and were considered to be leaders.

3. Ten non-targeted APRDRGs for which no physician received a

profile or pathways (control group).

For each group, a pre-intervention measure (1995) and a post-intervention

measure (1996) were taken. For each measurement period, and each group as a whole,

utilization and outcomes rates were calculated for the four measures described earlier.

The aggregate data was analyzed with SPSS.
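
For illustration only, the aggregation described above can be expressed as the following sketch, written in Python with the pandas library rather than the SPSS package actually used in the study; the records and APRDRG codes shown are hypothetical placeholders, not study data.

```python
# Illustrative sketch only (pandas); the study itself used SPSS.  Column
# names follow the data elements defined above; the rows and APRDRG codes
# are hypothetical placeholders, not actual study data.
import pandas as pd

records = pd.DataFrame([
    {"MDID": 101, "APRDRG": 139, "EXP": 1, "YEAR": 1995, "LOS": 5, "TOTCHG": 16200, "READM": 0, "COMP": 0},
    {"MDID": 101, "APRDRG": 139, "EXP": 1, "YEAR": 1996, "LOS": 4, "TOTCHG": 13900, "READM": 0, "COMP": 1},
    {"MDID": 205, "APRDRG": 225, "EXP": 0, "YEAR": 1995, "LOS": 3, "TOTCHG": 11800, "READM": 1, "COMP": 0},
    {"MDID": 205, "APRDRG": 225, "EXP": 0, "YEAR": 1996, "LOS": 3, "TOTCHG": 12100, "READM": 0, "COMP": 0},
])

# Aggregate the four study measures by group (EXP = 1 experimental,
# EXP = 0 control) and by measurement period (1995 vs. 1996).
summary = records.groupby(["EXP", "YEAR"]).agg(
    avg_los=("LOS", "mean"),
    avg_totchg=("TOTCHG", "mean"),
    readm_rate=("READM", "mean"),
    comp_rate=("COMP", "mean"),
    cases=("LOS", "size"),
)
print(summary)
```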

Data Analysis

Hypothesis 1a

Resource utilization patterns, as measured by length of stay and total charges,

for those APRDRGs where physicians receive profiles and clinical pathways will

decline following the intervention. One-way analysis of variance (ANOVA) was

performed to determine any significant differences for mean length of stay and mean

total charge.
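
For illustration, this test can be expressed as the following minimal sketch, written in Python with the scipy library rather than SPSS; the values are hypothetical placeholders, and the same call would be repeated with total charges in place of length of stay.

```python
# Minimal sketch of the one-way ANOVA for Hypothesis 1a: pre- versus
# post-intervention length of stay for the experimental APRDRGs.
# The two lists are illustrative placeholders, not study data.
from scipy import stats

pre_los = [5, 4, 6, 3, 5, 7, 4]    # 1995 experimental-group stays (days)
post_los = [4, 3, 4, 3, 5, 4, 3]   # 1996 experimental-group stays (days)

f_stat, p_value = stats.f_oneway(pre_los, post_los)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```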

Hypothesis 1b

Resource utilization patterns, as measured by length of stay and total charges,

for those APRDRGs where physicians do not receive profiles or clinical pathways will

not significantly decline following the intervention. One-way analysis of variance was

used to determine any significant differences for mean length of stay and mean total

charge.

Hypothesis 2

There will be a significant improvement in outcomes, as measured by

readmission and complication rates, for the intervention APRDRGs when

comparing the baseline period (1995) to the post-intervention period (1996). The

chi-square test was performed to determine any significant differences between

APRDRG-COI for readmission and complication rates.
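
For illustration, the following simplified sketch (Python, scipy) applies the test to a single two-by-two table built from the aggregate experimental-group readmission counts reported in chapter five (48 of 3,944 pre-intervention cases and 30 of 3,178 post-intervention cases); the study itself applied the test within APRDRG-COI strata.

```python
# Simplified sketch of the chi-square test for Hypothesis 2, applied to the
# aggregate experimental-group readmission counts rather than to the
# APRDRG-COI strata used in the actual analysis.
from scipy.stats import chi2_contingency

table = [
    [48, 3944 - 48],   # 1995: readmitted, not readmitted
    [30, 3178 - 30],   # 1996: readmitted, not readmitted
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```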

Hypothesis 3

In terms of rate of adoption, Rogers theorizes that innovation goes through a

period of slow, gradual growth before experiencing a period of relatively dramatic

and rapid growth. Since none of the existing studies have reviewed the effect of

profiles or pathways over time, this study explores this issue. The project directors

anticipated that resource utilization would decline over time. Thus, given the one-

year time frame of this study, it seems reasonable to expect the measures to

consistently decline in each of the four quarters of the study year. The Student T-

test was employed to determine any significant differences between variations for

mean length of stay and mean total charge when comparing each quarter’s average

against the baseline for both the experimental and control groups.
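
For illustration, the quarterly comparison can be expressed as the following minimal sketch (Python, scipy), in which each 1996 quarter is tested against the 1995 baseline; the values are hypothetical placeholders, not study data.

```python
# Minimal sketch of the quarterly comparison for Hypothesis 3: each 1996
# quarter's lengths of stay are tested against the 1995 baseline with a
# two-sample T-test.  The lists are illustrative placeholders.
from scipy import stats

baseline_los = [5, 4, 6, 3, 5, 4, 7, 5]   # 1995 experimental-group stays (days)
quarters_1996 = {
    "Q1": [4, 5, 4, 3, 5],
    "Q2": [4, 3, 4, 4, 3],
    "Q3": [3, 4, 3, 3, 4],
    "Q4": [3, 3, 4, 2, 3],
}

for quarter, stays in quarters_1996.items():
    t_stat, p_value = stats.ttest_ind(baseline_los, stays)
    print(f"{quarter}: t = {t_stat:.2f}, p = {p_value:.4f}")
```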

Hypothesis 4

Resource utilization, as measured by length of stay and total charges, for

those APRDRGs for “more culturally integrated” physicians who are exposed to the

innovation will decline significantly more than for those “less culturally integrated”

physicians who are exposed to the innovation. It seems reasonable to propose that

those physicians who spend more time in the hospital are more likely to be more

culturally integrated. The number of discharges was used as a proxy for time in the

hospital; i.e., physicians with a greater caseload spend more time in the hospital.

Multiple regression analysis was employed to test this proposition, and also used to

test hypotheses 5 and 6. The models are discussed later in this chapter.

Hypothesis 5

Following the intervention, resource utilization patterns, as measured by length

of stay and total charge will be significantly lower for physician leaders who receive

profiles and clinical pathways versus non- leader physicians who also receive profiles

and clinical pathways. The Student T-test was employed to determine any significant

differences between APRDRG-COI variations for mean length of stay and mean total

charge. The multiple regression analysis model described below was also used to

examine this proposition.

Hypothesis 6

Rogers believes that complexity influences adoption; innovations that are

difficult to understand or use will not be as readily accepted as those perceived as easy

to understand or use. As described in chapter two, the organization designed clinical

pathways that were less complex than those they reviewed from other organizations.

However, it is likely that some of the pathways were perceived as more (or less)

complex than others. As described in chapter two, the pathways developed in the

study organization were presented in a grid format, with the columns representing the

day of treatment, and the rows containing the critical aspect of care. An “X” was

placed at the intersection, denoting the day that the care should occur. It seems

reasonable to propose that the number of critical elements on the grid (which ranged

from 20 to 50) can be used as a proxy for the complexity of the pathway.

Multiple Regression Model

The models propose that several characteristics are determinants of average

length of stay and average total charge. These determinants are: complexity of

illness, average complexity of the pathway, status as a leader, membership in the

medical group, and number of patients.

It is expected that the coefficients for complexity of illness and complexity of the pathway will

be positive; that is, as these increase, so does the average length of stay or

average total charge. Conversely, the coefficients for status as a leader, membership in the medical group,

and number of patients are expected to be negative.

Model 1: AVGLOS = Fx (Avg COI, PWComp, MDLEAD, IPA, NPat)

Model 2: AVGTOTCHG = Fx (Avg COI, PWComp, MDLEAD, IPA, NPat)

Two similar models are examined for the control group. Since there is no

intervention, pathway complexity was removed from these models. Comparing the control-group models with models 1 and 2 enhances the analysis of complexity of illness and pathway complexity.

Model 3: AVGLOS = Fx (Avg COI, MDLEAD, IPA, NPat)

Model 4: AVGTOTCHG = Fx (Avg COI, MDLEAD, IPA, NPat)

Where Avg COI, PWComp, and NPat are continuous variables; MDLEAD and IPA are discrete variables.
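For illustration, Model 1 could be estimated as follows. The sketch uses Python and statsmodels rather than the SPSS procedures actually employed, and the column names simply mirror the variable abbreviations above; both the analysis file and its layout are assumptions.

    # Minimal sketch of Model 1 (ordinary least squares), not the original SPSS run.
    # The physician-level analysis file and its column names are assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_model_1(physicians: pd.DataFrame):
        # MDLEAD and IPA are 0/1 indicators; AvgCOI, PWComp, and NPat are continuous.
        model = smf.ols("AVGLOS ~ AvgCOI + PWComp + C(MDLEAD) + C(IPA) + NPat",
                        data=physicians)
        result = model.fit()
        print(result.summary())  # coefficients, t statistics, adjusted R-squared
        return result

    # Model 2 substitutes AVGTOTCHG for AVGLOS; Models 3 and 4 drop PWComp.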

Summary

This chapter presented the research methodology and procedures, including

issues related to the study population, design, instrumentation, and statistical analysis.

A quasi-experimental method applying Student T-test, chi-square test, analysis of

variance, and multiple regression analysis was used to determine the impact of

profiling and clinical pathways in modifying physician resource utilization within the

hospital setting. The next chapter contains the analyses and findings.

Chapter V

Findings

Descriptive Analysis

Statistical analysis was performed using SPSS, version 8.0. Table 5-1

provides descriptive statistics in terms of volume, standard deviation, and the

change in average length of stay and average total charge between the pre and post-

intervention periods. Overall, there was a reduction in the mean length of stay in

the experimental group from 4.43 days to 3.69 days. Changes in mean total charge

per case were pronounced; overall, there was a reduction in the mean from $10,911

to $9,215.

Table 5-1. Experimental APRDRGs, Pre and post intervention, Length of Stay &
Total Charges

            Pre-Intervention (1995), N=3,944        Post-Intervention (1996), N=3,178       Change
            Mean     Min     Max       Std Dev      Mean     Min     Max      Std Dev       Mean      Std Dev    Mean %
LOS         4.43     1       48        2.73         3.69     1       22       2.28          -0.74     -1.68      -16.60%
Charge      15,739   2,557   143,815   7,909        13,274   1,010   63,123   5,850         -2,466    -5,686     -15.70%

As hypothesized and illustrated in table 5-2, the change in the control group

was much less pronounced. In fact, the overall mean increased for both average length

of stay and average total charge.

Table 5-2. Control APRDRGs, Pre and post intervention, Length of Stay and Total
Charge

            Pre-Intervention (1995), N=1,337        Post-Intervention (1996), N=1,018       Change
            Mean     Min     Max       Std Dev      Mean     Min     Max      Std Dev       Mean      Std Dev    Mean %
LOS         3.46     1       18        2.10         3.64     1       18       2.03          0.19      -0.07      5.49%
Charge      12,563   1,075   105,161   12,584       12,584   1,541   58,672   5,894         21        -1,316     0.17%

Table 5-3 summarizes the number of readmissions and complications for both

the control group and the experimental group for the pre-intervention period (1995) as

well as the post-intervention period (1996). The data demonstrate a reduction in the

absolute numbers for each category. This finding is explored further later in this

chapter.

Table 5-3. Readmissions and Complications, Experimental and Control Groups


Readmissions Complications
YEAR Control Experimental Control Experimental
1995 21 48 9 36
1996 19 30 6 25
Change -2 -18 -3 -11

In table 5-4, the analysis of the control group is further refined to examine

the descriptive statistics for those APRDRGs where the physicians received any

report for an APRDRG in the experimental group. The data were divided into two

groups: those who received the treatment and those who did not. Length of stay

increased from 1995 to 1996 for both groups. Average charge also increased for the

group that received the treatment. Interestingly, the average charge for the group

that received no reports declined, albeit very slightly (by $264, from $17,091 to $16,827, or 1.5%). At the aggregate level these results suggest that spill-over or contamination effects were minimized.

Table 5-4. Control Group examined for Spill-Over Effect


Received Report for
Exp DRG No Report TOTAL
1995 1996 1995 1996 1995 1996
Cases 1,089 854 248 164 1,337 1,018
MDs 161 148 85 65 246 213
Avg LOS 3.28 3.48 3.81 3.97 3.38 3.56
Std Dev 2.36 2.35 2.93 3.00 2.48 2.47
Avg Charge 10,764 11,185 17,091 16,827 11,937 12,094
Std Dev 7,848 7,024 13,537 11,275 9,491 8,128
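For readers who wish to see how the comparison in table 5-4 could be tabulated, the following minimal sketch groups the control-group discharges by whether the attending physician received a report for an experimental APRDRG; the flag and column names are assumptions made for illustration.

    # Illustrative sketch of the spill-over tabulation in table 5-4. The column names
    # (received_exp_report, year, los, total_charge) are assumptions.
    import pandas as pd

    def spillover_summary(control: pd.DataFrame) -> pd.DataFrame:
        return (control
                .groupby(["received_exp_report", "year"])
                .agg(cases=("los", "size"),
                     avg_los=("los", "mean"),
                     avg_charge=("total_charge", "mean")))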

Correlation values between the variables are presented in table 5-5. Six

values were of significant magnitude (above 0.35). The correlation between

complexity of illness and length of stay was 0.3979. Similarly, the correlation

between complexity of illness and total charge was 0.4001. These relationships are

expected, since patients with more severe illness tend to spend more time in the

hospital and receive more care. The correlation between length of stay and total

charges was 0.825. Such a relationship is expected, since as length of stay

increases, so does the total charge. Two variables were strongly correlated with

pathway complexity: length of stay (0.4870) and total charge (0.6091). Here too,

such a relationship was predictable. Lastly, the correlation of 0.3811 between IPA

Member and MD Leader was not surprising, given the strong relationship between

the hospital and the IPA.

Table 5-5. Correlation Table

Illness Re- Compli- Length Total MD IPA Pathway


Complex admission cation of Stay Charge Leader Member Complex
Illness
Complexity 1.0000
Readmission 0.0472 1.0000
Complication 0.0605 -0.0101 1.0000

Length of Stay 0.3979 0.0312 0.0478 1.0000


Total Charge 0.4001 0.0327 0.1127 0.8254 1.0000
MD Leader -0.0195 -0.0165 0.0371 -0.0178 0.0322 1.0000
IPA Member -0.1097 -0.0429 0.0095 -0.1180 -0.1326 0.3811 1.0000
Pathway
Complexity 0.3344 0.1065 0.0518 0.4870 0.6091 0.0177 -0.3003 1.0000
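A matrix such as table 5-5 can be generated directly from the discharge-level file; the sketch below is illustrative only, and the column names are assumptions rather than the study's actual field names.

    # Illustrative sketch: Pearson correlation matrix in the spirit of table 5-5.
    # Column names are assumed.
    import pandas as pd

    VARIABLES = ["illness_complexity", "readmission", "complication",
                 "length_of_stay", "total_charge",
                 "md_leader", "ipa_member", "pathway_complexity"]

    def correlation_table(discharges: pd.DataFrame) -> pd.DataFrame:
        return discharges[VARIABLES].corr(method="pearson").round(4)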

Multiple Regression Analysis

For convenience, the overall results of the multiple regression analyses are presented first, and

then further discussed under the relevant hypotheses. Diagnostics were performed

on the statistical output, and are summarized below.

a) Absence of Heteroskedasticity was satisfied; the plot of residuals revealed

two tight groups with many observations falling within +/- 5, with some

outliers.

b) Normal Distribution of Error Term appeared to be satisfactory. The actual

vs. expected results in the normal probability plot were close together.

However, a stem-leaf plot indicated some skewing toward the right.

c) Absence of Outliers appeared to be largely satisfied; only three outliers were identified.

d) Absence of Multicollinearity appeared to be satisfactory; the standard errors

are not substantial.

e) Linearity was not satisfied; there was an expected relationship between

complexity of illness and length of stay.

As table 5-6 illustrates, the R-squared value reveals that slightly more than 32% of the variation in average length of stay is explained by average

complexity of illness, average complexity of pathway, status as a leader,

membership in the medical group, and number of patients in the experimental

group. The model suggests that only complexity of illness and complexity of

pathway are significant determinants (.05 level). It was interesting to note that

status as a physician leader increases length of stay, while membership in the IPA

decreases length of stay.

Table 5-6. Average Length of Stay as Dependent Variable, Experimental Group


Regression Statistics
Multiple R 0.5690
R Square 0.3238
Adjusted R Square 0.3103
Standard Error 1.6401
Observations 256

Signif
ANOVA df SS MS F F
Regression 5 321.9877 64.3975 23.9412 0.0000
Residual 250 672.4546 2.6898
Total 255 994.4423

Coefficients Standard Error t Stat P-value


Intercept -1.5232 0.5365 -2.8390 0.0049
Number of Discharges -0.0011 0.0054 -0.2012 0.8407
Average Complexity
of Illness 1.0499 0.2403 4.3698 0.0000
Average Complexity
of Pathway 0.0798 0.0150 5.3279 0.0000
IPA Membership -0.2962 0.4711 -0.6289 0.5300
Leader Status 0.2826 0.2383 1.1858 0.2368

Similarly, table 5-7 shows that the R-squared value reveals that slightly more than 31% of the variation in average total charge is explained by

average complexity of illness, average complexity of pathway, status as a leader,

membership in the medical group, and number of patients in the experimental

group. Again, this model suggests that only complexity of illness and complexity of

pathway are significant determinants (.05 level). In this model, physician leader

status decreases average total charge, while membership in the IPA increases the

amount.

Table 5-7. Average Total Charge as Dependent Variable, Experimental Group


Regression Statistics
Multiple R 0.5599
R Square 0.3135
Adjusted R Square 0.2998
Standard Error 5419.9098
Observations 256

Signif
ANOVA df SS MS F F
Regression 5 3353372429 670674486 22.8311 0.0000
Residual 250 7343855600 2937522
Total 255 10697228029

Coefficients Standard Error t Stat P-value


Intercept -3845.5578 1773.0246 -2.1689 0.0310
Number of Discharges -6.2052 17.9456 -0.3458 0.7298
Average Complexity
of Illness 2696.8600 794.0211 3.3965 0.0008
Average Complexity
of Pathway 297.2891 49.4756 6.0088 0.0000
IPA Membership 797.1239 1556.7021 0.5121 0.6091
Leader Status -858.7364 787.4801 -1.0905 0.2765

The results of the regression analysis for length of stay in the control group

are presented in table 5-8. It is important to note that the adjusted R-Square value

for both the control and experimental groups is similar: 0.3086 and 0.3103,

respectively. The only variable of note in this regression was complexity of illness.

It is interesting to compare the difference in the coefficient for COI between the two

groups: 1.7138 in the control group, vs. 1.0499 for the experimental group.

Table 5-8. Average Length of Stay as Dependent Variable, Control Group


Regression Statistics
Multiple R 0.5671
R Square 0.3216
Adjusted R Square 0.3086
Standard Error 1.5095
Observations 213

ANOVA
Signifi-
df SS MS F cance F
Regression 4 224.6994 56.1749 24.6539 0.0000
Residual 208 473.9352 2.2785
Total 212 698.6347

Coefficients Standard Error t Stat P-value


Intercept 0.0769 0.3443 0.2233 0.8235
Number of Discharges 0.0244 0.0246 0.9929 0.3219
Average Complexity
of Illness 1.7138 0.1817 9.4311 0.0000
IPA Membership -0.3652 0.4240 -0.8615 0.3900
Leader Status -0.3060 0.6234 -0.4908 0.6241

The results of the regression analysis for average total charge in the control

group are presented in table 5-9. It is important to note that the adjusted R-Square

value for both the control and experimental groups is similar: 0.2809 and 0.2998,

respectively. Again, the only significant variable was complexity of illness. It is interesting to compare the difference in the coefficient for COI between the two groups: 5,246 in the control group, vs. 2,696 for the experimental group.

Table 5-9. Average Total Charge as Dependent Variable, Control Group


Regression Statistics
Multiple R 0.5427
R Square 0.2945
Adjusted R Square 0.2809
Standard Error 4987.5533
Observations 213

ANOVA
Signif-
df SS MS F icance F
Regression 4 2159853967 539963492 21.7065 0.0000
Residual 208 5174143088 24875688
Total 212 7333997055

Coefficients Standard Error t Stat P-value


Intercept 1173.0059 1137.5997 1.0311 0.3037
Number of Discharges 100.5218 81.2298 1.2375 0.2173
Average Complexity
of Illness 5246.8479 600.4229 8.7386 0.0000
IPA Membership -1102.8282 1400.8476 -0.7873 0.4320
Leader Status -1360.5677 2059.8489 -0.6605 0.5097

Hypothesis 1a
To determine the statistical differences among the experimental APRDRGs

for length of stay and total charge, the means between the pre and post intervention

periods were compared using one-way analysis of variance (ANOVA). Prior to

conducting the analysis, box plots were generated to determine if there was

anything unusual about the distribution; this revealed some outliers and a few

extreme cases. Assumptions for one-way ANOVA were upheld. ANOVA

procedures are reasonably robust to departures from normality, and data

transformations were not necessary. The significance level is based on actual F

values and degrees of freedom. The results in table 5-10 suggest support for the hypothesis, given the statistically significant decreases in length of stay and charges.

Table 5-10. Experimental APRDRGs, Pre and post intervention, ANOVA Tests
Mean Length of Stay Mean Total Charge
Pre Post F-Value P-Value Pre Post F-Value P-Value
(1995) (1996) (1995) (1996)
3.40 2.93 2.074 0.040* 10,911 9,215 1.870 0.035*
*Statistically significant (p<=.05)
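The pre/post comparison summarized in table 5-10 amounts to a one-way ANOVA with two groups; a minimal sketch of such a test (with assumed column names, and in Python rather than the SPSS procedures actually used) is shown below.

    # Minimal sketch of the pre/post one-way ANOVA. Column names are assumptions.
    import pandas as pd
    from scipy import stats

    def pre_post_anova(discharges: pd.DataFrame, measure: str) -> dict:
        pre = discharges.loc[discharges["period"] == "pre", measure]    # 1995 cases
        post = discharges.loc[discharges["period"] == "post", measure]  # 1996 cases
        f, p = stats.f_oneway(pre, post)
        return {"pre_mean": pre.mean(), "post_mean": post.mean(), "F": f, "p": p}

    # Example: pre_post_anova(experimental, "los") and pre_post_anova(experimental, "total_charge")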

Hypothesis 1b

To determine the statistical differences among the control APRDRGs for

length of stay and total charge within each APRDRG-COI, a comparison between the

pre and post intervention periods using one-way analysis of variance (ANOVA) was

conducted. As shown in table 5-11, the hypothesis was supported; there was not a

significant decline in the APRDRGs where physicians did not receive profiles or

pathways.

Table 5-11. Control APRDRGs, Pre and post intervention, ANOVA Tests
Mean Length of Stay Mean Total Charge
Pre Post F-Value P-Value Pre Post F-Value P-Value
(1995) (1996) (1995) (1996)
3.38 3.56 0.410 0.681 11,937 12,094 -1.541 0.126

Hypothesis 2

In the experimental group, there were slight decreases in both the

complication and readmission rates. To determine if these changes were significant,

chi-square tests were used. None of the changes were statistically significant, as illustrated in table 5-12.

Table 5-12. Experimental APRDRGs, Chi-Square Test, Complications and


Readmissions
Pre Post Change ChiSq P Value
(1995) (1996)
Readmission 0.0122 0.0091 -0.0030 0.097 0.756
Complication 0.0157 0.0067 -0.0090 0.382 0.536

In the control group, there was a slight increase in the readmission rate and a slight decrease in the complication rate. Chi-square tests found that these changes were not statistically significant; see table 5-13:

Table 5-13. Control APRDRGs, Chi-Square Test, Complications and Readmissions


Pre Post Change ChiSq P Value
(1995) (1996)
Readmission 0.0094 0.0187 0.0093 0.167 0.683
Complication 0.0079 0.0059 -0.0020 0.027 0.870

Thus, no evidence was provided for the hypothesis. That is to say, there was

no support for the contention that the use of clinical pathways and physician profiles

leads to significant improvement in outcomes.
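The rate comparisons in tables 5-12 and 5-13 are, in effect, 2x2 tests of proportions. The sketch below shows one way such a chi-square test can be computed from raw counts; the counts in the example are placeholders, not the study's figures.

    # Illustrative 2x2 chi-square test of a readmission (or complication) rate,
    # pre versus post intervention. The example counts are placeholders.
    from scipy.stats import chi2_contingency

    def rate_change_test(pre_events, pre_total, post_events, post_total):
        table = [[pre_events, pre_total - pre_events],
                 [post_events, post_total - post_events]]
        chi2, p, dof, expected = chi2_contingency(table)
        return chi2, p

    chi2, p = rate_change_test(pre_events=12, pre_total=1000,
                               post_events=9, post_total=900)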

Hypothesis 3

To explore whether there were changes over time, average length of stay and total charge for the four calendar quarters of 1996 for both the experimental and the control groups were compared against the baseline. The Student T-test was employed to examine the change. As tables 5-14 and 5-15 illustrate, there were significant decreases for the experimental group during the last two quarters of 1996, thus offering limited support for the hypothesis that an innovation goes through a period of gradual growth followed by relatively dramatic and rapid growth.

Table 5-14. Experimental Group, Average Length of Stay by Quarter


N ALOS Change + T-Statistic p-value
1996 Q1 784 3.56 0.16 0.77 0.44
1996 Q2 788 2.93 -0.47 2.01 0.09
1996 Q3 800 2.67 -0.73 2.34 0.031*
1996 Q4 806 2.59 -0.80 2.36 0.029*
+ Versus baseline (1995) average of 3.40 * Statistically significant (p<.05)

Table 5-15. Experimental Group, Average Total Charge by Quarter


Average
N Charge Change + T-Statistic p-value
1996 Q1 784 10,312 -599 0.81 0.43
1996 Q2 788 9,147 -1,764 2.12 0.06
1996 Q3 800 8,910 -2,001 2.32 0.035*
1996 Q4 806 8,516 -2,394 2.69 0.027*
+ Versus baseline (1995) average of $10,911 * Statistically significant (p<.05)

Figure 5-1 graphically illustrates the change in average length of stay and
average total charge over time.

Figure 5-1. Experimental Group, Average Length of Stay and Charge by Quarter.
[Two panels: average length of stay (in days) and average total charge (in dollars),
each plotted for 1996 Q1 through 1996 Q4. The plotted values correspond to tables
5-14 and 5-15; the original graphic does not reproduce in text form.]

Further, as tables 5-16 and 5-17 illustrate, resource utilization in the control

group did not significantly change.

Table 5-16. Control Group, Average Length of Stay by Quarter


N ALOS Change + T-Statistic p-value
1996 Q1 257 3.74 0.36 1.75 0.09
1996 Q2 250 3.47 0.09 0.76 0.47
1996 Q3 252 3.32 -0.06 0.71 0.49
1996 Q4 259 3.71 0.33 2.03 0.08
+ Versus baseline (1995) average of 3.38

Table 5-17. Control Group, Average Total Charge by Quarter


Average
N Charge Change + T-Statistic p-value
1996 Q1 257 $12,558 $621 1.57 0.09
1996 Q2 250 11,897 -39 0.67 0.46
1996 Q3 252 11,652 -285 0.72 0.48
1996 Q4 259 12,253 316 2.01 0.08
+ Versus baseline (1995) average of $11,937

Hypothesis 4

As previously illustrated in the multiple regression analysis output in tables 5-6 and 5-7, there was no support for the hypothesis that utilization for those physicians more culturally integrated in the organization will decline more than for those less culturally integrated.

Hypothesis 5

To determine if there was a statistical difference between physician leaders and

non-leaders, the Student T-test was used to analyze differences in length of stay and

total charge. As tables 5-18 and 5-19 show, this hypothesis was not supported.

Further, the results of the multiple regression analysis did not extend support for this

proposition.

Table 5-18. Average Length of Stay, Leaders v. Non-Leaders, 1995 v. 1996

Non Leader Leader Difference P-Value


1995 3.40 3.37 -0.02 0.89
1996 3.14 2.90 -0.24 0.12
Change -0.26 -0.47
P-Value 0.16 0.08

Table 5-19. Average Total Charge, Leaders v. Non-Leaders, 1995 v. 1996

Non Leader Leader Difference P-Value


1995 10,987 11,865 878 0.48
1996 9,798 10,329 532 0.45
Change -1,189 -1,535
P-Value 0.06 0.07

Hypothesis 6

The multiple regression analysis models (see tables 5-6 and 5-7 above) suggest that complexity of pathway is a significant determinant of both length of stay and total charges. This offers support for the contention that the less complex the clinical pathway, the greater the chance of physicians accepting and using the pathway, and thus the greater the change (reduction) in resource utilization patterns.

Summary

The findings of the data analysis are summarized in table 5-20.

Table 5-20. Summary of Hypotheses and Findings


Proposition Finding
1a Resource utilization patterns will decline when physicians Supported
are provided profiles and clinical pathways.
1b Resource utilization patterns will not decline when Supported
physicians are not provided profiles and clinical pathways.
2 There will be a significant improvement in outcomes Not Supported
between the pre and post intervention periods.
3 Resource utilization patterns will decline over time (each Supported
quarter).
4 Resource utilization for physicians culturally integrated in Not Supported
the organizational culture will decline more than for those
physicians less culturally integrated.
5 Resource utilization patterns will be lower for physician Not Supported
leaders than for physicians who are not leaders.
6 The less complex the clinical pathway, the greater the Supported
reduction in resource utilization patterns.

Chapter VI

Discussion

Several researchers have suggested combining various approaches and

multiple tools to change physician behavior. However, they have not found

evidence identifying the best mix of complementary interventions. None of the

existing research has examined the impact of a comprehensive program consisting of

profiling, benchmarking, and clinical pathways. This study attempts to bridge this gap

in the current published research.

General Limitations

A case-study approach was utilized; thus, generalization and the ability to

replicate this study may be severely limited, given the distinct characteristics of any

organization. In particular:

1. The study organization had a long history of physician involvement in

quality improvement activities.

2. Many physicians had been sensitized to the importance of cost

containment given the organization’s early participation in the Medicare

risk-sharing program.

3. The organization’s shaky financial status may have impacted the

physicians’ adoption of the innovation.

There are several other limitations of this study that must be highlighted:

1. Physicians in the study may not have been exposed to the same patients

for the entire study period, since patients may change physicians at any

time.

2. Physicians take call for their partners, thus increasing the possibility of

study contamination.

3. Physicians may not have reviewed either their profiles or the pathways.

4. The intervention group was limited to ten diagnoses; a different number

or variety may have changed the results.

Use of Profiles and Pathways

Changing inappropriate utilization patterns has been touted as one way to

control costs and improve the quality of healthcare. In this study, the dissemination of a diagnosis-specific physician profile in concert with a diagnosis-specific clinical pathway appears to have been effective in reducing resource utilization, as measured by

length of stay and total charges. However, this study did not measure whether or not

pathways were actually used by physicians to assist their decision-making.

This study also suggests that resource utilization patterns, as measured by

length of stay and total charges, may not decline for those diagnostic groups where

physicians do not receive profiles or pathways. Previous studies that randomly

assigned physicians to participate in similar programs were unable to control for

spillover or unable to adequately explain the effects. This study attempted to

control for and examine spillover in two different ways: by examining the results

between the control and experimental APRDRG groups and within the control

group by comparing those physicians who received information on the patients in

the experimental group against those who received no such reports. Because the control and intervention APRDRGs differed in clinical nature, elaborate controls for spillover were not necessary. Thus, the organization did not need to deal with the thorny issue of designing methods to prevent physicians from discussing the intervention with their control-group colleagues, and it avoided the ethical dilemma of not sharing clinical pathways based on best practice. However, the absence of spillover cannot be definitively established, since the control group consisted of only ten APRDRGs; it is possible that other APRDRGs did experience spillover.

While the results demonstrated that the use of profiles and pathways did not

positively impact outcomes, it is important to note that efficiency was improved

without degrading clinical results. It is possible that better aggregate care may

result with prolonged measurement and monitoring processes that facilitate changes

in care design and delivery.

Lastly, this study does not isolate the separate effects of the two tools, since the pathway and the profile were tested together. Future research

may take a multiple methods approach comparing one intervention with two

interventions, with three interventions, etc. Overall, this study suggests that the

combined dissemination of both physician profiles and clinical pathways may be a

sufficient diffusion method to communicate the need for physicians to change their

resource utilization behaviors.

Rate of Adoption

The results of this study suggest that there is a rate of adoption for profiles and

clinical pathways, supporting Rogers' (1995) belief that an innovation typically moves slowly through the social system when it is first introduced. It is important to note that four quarters may not provide sufficient data to conclude that there is a rate of adoption. However, the data demonstrated that there was no significant change until the third and fourth quarters of 1996, suggesting that physicians may

have taken a “wait and see” approach.

Cultural Integration and Physician Leadership

Extension of various theories suggested that physicians more culturally integrated in the organization would adopt clinical pathways and respond to profiles more rapidly than those physicians less ingrained in the culture. However,

this hypothesis was not supported by the data. Perhaps volume was not a valid

proxy to measure organizational entrenchment.

The literature suggests that physician leaders are critical in ensuring the

success of both profiling and pathway (or guideline) development and dissemination

programs. An extension of diffusion theory suggests that physician participation in

the development of such programs impacts the effectiveness of the intervention, and

recommends including medical directors and respected members of the medical

staff in the development of both profile metrics and the clinical pathway. Lastly, an

extension of organizational citizenship behavior theory might suggest that the

physician leaders would feel some obligation to demonstrate good citizenship, and

would “set the example” by using profile information and following clinical pathway

recommendations.

While it is difficult to accurately assess the role that such medical staff leaders play, this study hypothesized that these leaders would have lower resource

consumption patterns than their non-leader colleagues when both groups receive

profiles and pathways. However, this hypothesis was not supported.

Although not statistically significant, it was interesting to note that physician

leader status decreased average total charge, while membership in the IPA increased

the amount. Conversely, physician leader status increased length of stay, while

membership in the IPA decreased length of stay.

It is possible the physician leaders were practicing medicine in a manner

suggested by the profiles before the dissemination of the profile. That is, they were familiar with and following current evidence-based guidelines that were used

to develop the pathways. Another plausible explanation is that the pathway was

developed based on their practice; i.e., they were the best practice physicians. At

the other extreme, it is possible that some physician leaders opted not to follow the

guideline.

A weakness of this study is that it did not directly assess the leaders' impact; instead, it attempted to measure the change in their behavior. However, the

supposition that their support is critical makes sense intuitively, and it is not likely

that the medical staff would have accepted the intervention without the leaders’

support.

Complexity

Rogers (1995) believes that complexity influences adoption; innovations that

are difficult to understand or use will not be as readily accepted as those perceived

as easy to understand or use. Although the measure of pathway complexity was

simplistic, the data offered support for the hypothesis that the less complex the

clinical pathway, the greater the chance of accepting and using the pathway.

Future Research

This dissertation has identified several questions and issues to be considered in

future research on physician profiling. In summary, future studies might include:

1. Whether pathways developed in one organization can be directly applied in

another.

2. Whether other quality models (e.g., Juran, Crosby, or hybrids) are more

effective in changing physician behavior.

3. Examination of both the format and content of profiles to determine how

different metrics and feedback methods impact physician behavior.

4. Examination of the characteristics of guideline adoption (i.e., characteristics

of the health-care professional, the practice setting, incentives, regulation,

and patient factors).

5. Examination of different intervention combinations and organizational

characteristics (such as type, size, ownership, teaching status, payer mix,

etc.) to identify the most effective mix of interventions given a specific

set of organizational characteristics.

6. Examination of traits and characteristics to measure the level of cultural

integration in the organization.

7. Examination of rate of adoption over a longer period to fully explore the

question.

8. Examination of use of pathways over a longer time frame to determine if

clinical outcomes remain the same, improve or degrade.

BIBLIOGRAPHY

Abbott J, Hronek C, Mirecki JK. The leap to automating clinical pathways. Journal
of Healthcare Resource Management 13(6): 8-16, 1995.

Andersen R. A behavioral model of families' use of health services. Chicago, IL:


University of Chicago, Center for Health Administration Studies, 1974.

Andersen R. Revisiting the behavioral model and access to medical care: does it
matter? Journal of Health and Social Behavior 36: 1-10, 1995.

Andersson S. Scrippshealth: Quality planning for clinical processes of care. Quality


Letter for Healthcare Leaders 5(5): 4, 1993.

Avorn J, Soumerai SB, et al. A randomized trial of a program to reduce the use of
psychoactive drugs in nursing homes. New England Journal of Medicine 327: 168-
73, 1992.

Balas EA, Boren SA, Brown GD, et al. Effect of physician profiling on utilization:
meta-analysis of randomized clinical trials. Journal of General Internal Medicine
11: 584-590, 1996.

Banaszak P. Clinical quality improvement in a multihospital system. American


Journal of Medical Quality 8(2): 56-60, 1993.

Bandura A. Principles of Behavior Modification. New York: Holt, Rinehart &
Winston, 1969.

Bandura A. Social Learning Theory. New York: General Learning Press, 1971.

Barnes RV, Lawton L, Briggs D. Clinical benchmarking improves clinical paths:


experience with coronary artery bypass. Joint Commission Journal on Quality
Improvement 20(5): 267-76, 1994.

Bateman TS and Organ DW. Job satisfaction and the good soldier: The relationship
between affect and employee citizenship. Academy of Management Journal 26,
587-595, 1983.

Berkey T. Benchmarking in health care: turning challenges into success. Joint


Commission Journal on Quality Improvement 20(5): 277-84, 1994.

Bernard AM, Hayward RA, Anderson JE, Rosevear JS. The integrated inpatient
management model: lessons for managed care. Medical Care 33(7): 663-75, 1995.

Bernstein, AB. Ready or not, here it comes: medical practices in the new
millennium. Seminars in Medical Practice 1(1): 2-6, 1998.

Bero LA. Closing the gap between research and practice: an overview of systematic
reviews of interventions to promote the implementation of research findings. British
Medical Journal 317: 465-468, 1998.

Berwick DM, Coltin KL. Feedback reduces test use in a health maintenance
organization. Journal of the American Medical Association 255: 1450-4, 1986.

Blau P and Scott, R. Formal organizations: A comparative approach. San


Francisco: Chandler, 1962.

Borman WC and Motowidlo SJ. (1993). Expanding the criterion domain to include
elements of contextual performance. In Schmitt N and Borman WC (Eds.),
Personality Selection (pp. 71 – 98). San Francisco: Jossey-Bass, 1993.

Brand DA, Quam L, Leatherman S. Medical-practice profiling: concepts and caveats.


Medical Care Research and Review 52 (2): 223-51, 1995.

Camp RC, Tweet AG. Benchmarking applied to health care. Joint Commission
Journal on Quality Improvement 20(5): 229-38, 1994.

Cassel C, Blank L, Braunstein G, et al. ABIM subcommittee on clinical competence
in women's health: what internists need to know; core competencies in women's
health. American Journal of Medicine 102: 507-512, 1997.

Clare M, Sargent D, Moxley R, Forthman T. Reducing healthcare delivery costs using


clinical paths. Journal of Healthcare Finance 21(3): 48-58, 1995.

Cleverley WO, Harvey RK. Critical strategies for successful rural hospitals.
Healthcare Management Review 17(1): 27-33, 1992.

Coffey RJ, Othman JE, Walters JI. Extending the application of critical path methods.
Quality Management in Healthcare 3(2): 14-29, 1995.

Curtis P, Skinner B, Varenholt J, et al. Papanicolaou smear quality assurance:


providing feedback to physicians. Journal of Family Practice 36: 309-312, 1993.

Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance:
a systematic review of the effect of continuing education. Journal of the American
Medical Association 274(9): 700-5, 1995.

Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic


review of theoretic concepts, practical experience and research evidence in the
adoption of clinical practice guidelines. Canadian Medical Association Journal 157:
408-416, 1997.

Donabedian A. Basic approaches to assessment: structure, process and outcome. In:


The definition of quality and approaches to its assessment: explorations in quality
assessment and monitoring. Ann Arbor, MI: Health Administration Press, 1980.

Ellrodt AG, Conner L, Riedinger M, Weingarten S. Measuring and improving


physician compliance with clinical practice guidelines. Annals of Internal Medicine
122(4): 277-82, 1995.

Emmons DW, Wozniak GD. Profiles and feedback: who measures physician
performance? In: Socioeconomic characteristics of medical practice. Chicago, IL:
American Medical Association, 1994.

Epstein AM. Changing physician behavior, increasing challenges for the 1990s.
Archives of Internal Medicine 151: 2147, 1991.

Ferdinand M. Reducing orthopedic implant costs: a physician-driven approach at


Mt Sinai Medical. Journal Healthcare Materiel Management 12(11): 20-25, 1994.

Festinger L. A theory of social comparison processes. Human Relations 7: 117-140,


1954.

Frazier LM, Brown JT, Divine GW, et al. Can physician education lower the cost of
prescription drugs? A prospective controlled trial. Annals of Internal Medicine 155:
116-21, 1991.

Freemantle N, Harvey EL, Wolf F, et al. Printed educational materials: effects on


professional practice and health care outcomes. Cochrane Review 3, 1999.

Goldfield N. Physician Profiling and Risk Adjustment, Second Edition.


Gaithersburg, Maryland: Aspen Publishers, Inc., 1999.

Goold SD, Hofer T, Zimmerman M, Hayward RA. Measuring physician attitudes


toward cost, uncertainty, malpractice, and utilization. Journal of General Internal
Medicine 9(10): 544-9, 1994.

Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a


systematic review of rigorous evaluations. Lancet 342: 1317-1322, 1993.

Hofer TP, Hayward RA, Greenfield S, et al. The unreliability of individual


physician report cards for assessing the costs and quality of care of a chronic
disease. Journal of the American Medical Association 281: 2098-2105, 1999.

Holmboe E, Scranton R, Sumption K, et al. Effect of medical record audit and
feedback on residents' compliance with preventive health care guidelines. Academic
Medicine 73: 901-903, 1998.

Holmboe ES, Hawkins RE. Methods for evaluating the clinical competence of
residents in internal medicine: a review. Annals of Internal Medicine 129: 42-48,
1998.

Johnson CC, Martin M, Epstein SM, Lee JD. The effect of a physician education
program on hospital length of stay & patient charges. Journal of the South Carolina
Medical Association 89(6): 293-301, 1993.

Johnson CC, Martin M. Effectiveness of a physician education program in reducing


consumption of hospital resources in elective total hip replacement. Southern
Medical Journal 89(3): 282-9, 1996.

Jones RA, Mullikin CW. Collaborative Care: Pathways to Quality Outcomes. Journal
for Healthcare Quality 16(4): 3, 1994.

Karuza J, Calkins E, Feather J, Hershey CO, Katz L. Enhancing physician adoption


of practice guidelines. Archives of Internal Medicine. 155(6): 625-32, 1995.

Kassirer JP. The use and abuse of practice profiles. New England Journal of
Medicine 330: 634-635, 1994.

Kim CS, Kristopaitis RJ, Stone E, et al. Physician education and report cards: do
they make the grade? Results from a randomized controlled trial. American Journal of
Medicine 107: 556-560, 1999.

Kongstvedt PR. Managed health care handbook 3rd edition. Gaithersburg, MD:
Aspen Publications, 1996.

Kramolowsky EV, Wood NL, Rollins KL, Glasheen WP. Impact of physician
awareness on hospital charges for radical retropubic prostatectomy. Journal of
Urology 154(1): 139-42, 1995.

Lasker RD, Shapiro DW, Tucker AM. Realizing the potential of practice pattern
profiling. Inquiry 29: 287-297, 1992.

Lawson RD. Implementing an integrated program of resource management. Journal


for Healthcare Quality 17(3): 17-30, 1995.

Maiman LA, Becker MH. The health belief model: origins and correlates in
psychological theory. Health Education Monograph 2: 336-353. 1974.

Marton KI, Tul V, Sox HC. Modifying test-ordering behavior in the outpatient
medical clinic: a controlled trial of two educational interventions. Archives of
Internal Medicine 145: 816-25, 1985.

Massanari RM. Profiling physician practice: a potential for misuse. Infection


Control and Hospital Epidemiology 15(6): 394-6, 1994.

Norton PF, Shaw PA, Murray MA. Quality improvement in family practice:
program for Pap smears. Canadian Family Physician 43: 503-508, 1997.

Orav EJ, Wright EA, Palmer RH, et al. Issues of variability and bias affecting
multisite measurement of quality of care. Medical Care 34: SS87-SS101, 1996.

Organ DW. Organizational citizenship behavior. Lexington, MA: D.C. Heath and
Co, 1988.

Organ DW. The motivational basis of organizational citizenship behavior. Research


in Organizational Behavior, 12, 43-72, 1990.

Organ DW. Organizational citizenship behavior: Its construct clean-up time.
Human Performance 10: 85-97, 1997.

Palmer RH, Hargraves JL. The ambulatory care medical audit demonstration
project: research design. Medical Care 34: S12-S28, 1996.

Pearson SD, Polak JL, Cartwright S, Mccabe-Hassan S. A critical pathway to
evaluate suspected deep vein thrombosis. Archives of Internal Medicine 155(16):
1773-8, 1995.

Physician Payment Review Commission. Conference on Profiling. Washington, DC:


Physician Payment Review Commission, Publication No. 92-2, 1992.

Pugh JA, Frazier LM, et al. Effect of a daily charge feedback on inpatient charges
and physician knowledge and behavior. Archives of Internal Medicine 149: 426-9,
1989.

Putnam RW, Campbell MD. Competence. In: Fox RD, Mazmanian PE, Putnam
RW, eds. Changing and learning in the lives of physicians. New York, NY: Praeger,
1989.

Rogers EM. Diffusion of innovations, 4th ed. New York, NY: Free Press, 1995.

Rosenstock IM. Historical origins of the health belief model. Health Education
Monograph 2: 328-335, 1974.

Salem-Schatz S, Moore G, Rucker M, Pearson SD. The case for case-mix


adjustment in practice profiling: when good apples look bad. Journal of the
American Medical Association 272(11): 871-4, 1994.

Schriefer J. The synergy of pathways and algorithms: two tools work better than
one. Joint Commission Journal on Quality Improvement 20(9): 485-99, 1994.

Scott WR. Institutions and organizations (second edition). Thousand Oaks CA:
Sage Publications, 2001.

Smith CA, Organ DW, and Near JP. Organizational citizenship behavior: Its nature
and antecedents. Journal of Applied Psychology 68, 653-663, 1983.

Soumerai SB, Avorn J. Principles of educational outreach (academic detailing) to
improve clinical decision making. Journal of the American Medical Association
263: 549-556. 1990.

Soumerai SB, et al: Effect of local medical opinion leaders on quality of care for
acute myocardial infarction. Journal of the American Medical Association
279:1358–1363, 1998.

Spoeri RK, Ullman R. Measuring and reporting managed care performance: lessons
learned and new initiatives. Annals of Internal Medicine 127(8): 726-32, 1997.

Tajfel H, Turner JC. An integrative theory of social conflict. In Austin W and


Worchel S (eds.), The social psychology of intergroup relations. Monterey, CA:
Brooks/Cole, 1979.

Thompson RS. Systems approach and the delivery of health services. Journal of the
American Medical Association 277: 668-671, 1997.

Thomson MA, Oxman AD, Davis DA, et al. Audit and feedback to improve health
professional practice and health care outcomes: Part II. Cochrane Review, 1999.

Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing
physicians of the charges for outpatient diagnostic tests. New England Journal of
Medicine 322(21): 1499-504, 1990.

Weingarten SR, Riedinger MS, Conner L, Lee TH. Practice guidelines and
reminders to reduce duration of hospital stay. Annals of Internal Medicine 120(4):
257-63, 1994.

Welch HG, Miller ME, Welch WP. Physician profiling: an analysis of inpatient
practice patterns in Florida and Oregon. New England Journal of Medicine 330(9):
607-12, 1994.

Winickoff RN, Coltin KL, et al. Improving physician performance through peer
comparison feedback. Medical Care 22(6): 527-34, 1984.
Appendix 1, Text of Cover Letter that Accompanied the Profile

Dear Colleague,

Attached is information regarding what we call "Best Practice" diagnoses. We have


targeted several all-payer refined diagnosis related groups (APR-DRGs) for best
practice initiatives because of their high cost or high volume. The goal of a best
practice initiative is to reduce variation in care, outcome, and cost by identifying
practice patterns with quality outcomes and appropriate resource utilization.
Statistical and qualitative analysis can be used to develop clinical paths which can
reduce variation.

The purpose of these reports is to enhance your awareness of these statistically


determined best practices, and to allow you to compare your practice patterns to
your peers in terms of outcomes and resource utilization. If we are to continue
serving our community, we must use our resources as effectively and efficiently as
we can. If we do this, the hospital will reduce the cost of care, while improving
quality. We believe that development and implementation of such things as best
practice guidelines, clinical pathways, and suggestions you may see when
comparing your information to your peers, play an important role in dealing with
these challenges.

Several committees composed of physicians and hospital personnel developed


clinical paths for identifying key success factors to deliver cost-conscious quality
care to our community. We believe that reducing the variation in the way we care
for similar patients could significantly improve outcomes, patient satisfaction and
reduce costs. This is not a matter of who's doing something right or better than a
colleague. It is simply looking at the difference and asking, "Can I do something
differently that enhances patient care by reducing unnecessary variability?"

We hope you find this information useful. We encourage you to discuss these
findings with your colleagues and to participate actively on a best practice
committee. These reports were produced by Decision Support Services and Quality
Management; if you have any questions regarding these reports, feel free to contact
<the Director of Quality Management> at <phone number>.

Sincerely,

<Name> <Name>, MD
President and Chief Executive Officer Vice President, Medical Affairs

Appendix 2, Text of User Guide

UNDERSTANDING YOUR REPORT

Each Report Contains:

1. Analysis by Attending Physician/Primary Surgeon


This report is alphabetically coded to preserve confidentiality. You will find your
code printed on a separate sheet. Information includes: Volume, Average Length of
Hospital Stay, Average Complexity of Illness, Discharge Status (Regular, Home
Health, SNF, and Expired), Readmission Rate (same patient within 30 days for the
same DRG), and average (mean) charges per case for these areas: Room and Board,
Surgery, Pharmacy, Laboratory, Radiology, Respiratory, Physical Medicine,
Supplies (SPD) and Other. At this time, charges are a proxy measure of cost; the
hospital is actively examining cost accounting systems for future implementation.

2. Hospital-Wide Outcomes
Examines all patients within the DRG by discharge status, readmission rate, and
admission source. Variables examined are length of stay, complexity of illness,
charges, patient age, and mortality rates.

3. Analysis by Complexity of Illness (COI)


Examines patients based on the complexity of their illness. Complexity of Illness is
an index of case complexity from 1 (minor) to 4 (extreme). Charge opportunity
illustrates the potential reduction in charges that could be realized if all discharges
with a total charge above the mean were instead equal to the mean. Standard deviation of length of stay and
charges is provided to illustrate the range of variance from the average in practice
patterns. The higher the standard deviation, the more variance.

4. A Graph of Ancillary Charges by Complexity of Illness


Provides a visual depiction of ancillary services utilization (pharmacy, laboratory,
radiology, respiratory, and physical therapy/occupational therapy).

Comparing Your Outcomes with Peers


Physicians are ranked by average charge per case in descending order; this is not a
best-to-worst ranking. When comparing your statistics to others, you should try to
find a physician with an average COI that is similar to yours. Look at everything --
You may find that your LOS is higher than a comparable colleague, but your
outcomes (i.e., death rate and readmission) are better.

Appendix 3, Sample Profile Report

Medical Center Diagnostic Group 148: Major Small & Large Bowel Procs
Clinical Activity Profile, Jan 95 thru Dec 98

[The sample report is a two-page profile for an illustrative physician ("Sample, Ima,"
number 1234, Surgery) treating Diagnostic Group 148. Page one presents the number of
cases by complexity of illness for the physician, the surgery department, and all
discharges; discharge status and death rates compared with peers; complexity-adjusted
comparisons of ancillary charges and length of stay (excluding CEC days); readmission
rates; special care unit (telemetry and intensive care) utilization; surgery and
invasive procedure indicators (average OR and PACU time, blood units transfused, and
infections and complications per 1,000 cases); and complexity-adjusted CEC length of
stay. Page two charts average length of stay and average ancillary charges by
complexity of illness for the physician, the department, and all discharges; notes the
HCFA maximum length of stay and the national average length of stay by complexity
level; and lists the physician's most frequent ICD-9 diagnoses and procedures. The
original tables and graphics do not reproduce in text form.]
