
The Gerontologist

Vol. 45, No. 6, 720-730

Copyright 2005 by The Gerontological Society of America

Do Trends in the Reporting of Quality Measures on the Nursing Home Compare Web Site Differ by Nursing Home Characteristics?
Jacqueline Zinn, PhD,1 William Spector, PhD,2 Lillian Hsieh, MHS,3
and Dana B. Mukamel, PhD3

Key Words: Nursing Home Compare, Medicare, Consumer information, Quality measures

Address correspondence to Jacqueline Zinn, PhD, Professor, Department of Risk, Insurance and Healthcare Management, Fox School of Business and Management, Temple University, 413 Ritter Annex, Philadelphia, PA 19122. E-mail: Jacqueline.Zinn@temple.edu

1 Department of Risk, Insurance and Healthcare Management, Temple University, Philadelphia, PA.
2 Agency for Healthcare Research and Quality, Rockville, MD.
3 Center for Health Policy Research, University of California, Irvine.

Purpose: This study examines the relationship between the first set of quality measures (QMs) published by the Centers for Medicare and Medicaid Services on the Nursing Home Compare Web site and five nursing home structural characteristics: ownership, chain affiliation, size, occupancy, and hospital-based versus freestanding status. Design and Methods: Using robust linear regressions, we examined the values of the QMs at first publication and their change over the first five reporting periods, in relation to facility characteristics. Results: There were significant baseline differences associated with these facility characteristics. Pain, physical restraints, and delirium exhibit a clear downward trend, with differences between the first QM reporting period and the fifth ranging from 12.7% to 46.0%. However, there were only minimal differences in trends associated with facility characteristics. This suggests that the relative position of facilities on these measures did not change much within this time period. The variation by facility type was larger for the short-stay QMs than for the long-stay measures. Implications: Those QMs that show an improvement exhibit it across all types of facilities, irrespective of initial quality levels. Although a number of alternatives may explain this positive trend, the trend itself suggests that report cards, to the extent that they are effective, are so for all facility types but only some QMs.

Nursing homes are an important component of the health care system serving older adults in the United States. About 3 million elderly and disabled Americans received care in our nation's nearly 17,000 Medicare- and Medicaid-certified nursing homes in 2001. Slightly more than half of these individuals were long-term nursing home residents with long-term-care needs, but nearly as many had shorter stays for rehabilitation care after an acute-care episode (Nursing Home Quality Initiative, 2002).

Despite federal legislation enacted over a decade ago to improve quality, public misgivings concerning nursing home care persist, making nursing home placement a difficult decision that many patients and families would rather avoid (Harrington & Carrillo, 1999; Institute of Medicine, 1986; U.S. General Accounting Office, 1999). Lack of public information about facility-specific nursing home quality has contributed to the uncertainty surrounding this decision. Making information available to the public on the performance of nursing facilities has been viewed as an important step in enabling consumers to better assess nursing facilities and to choose facilities based on information about quality (Mukamel & Spector, 2003).

In 1998, in an effort to assist consumers in evaluating the quality of nursing facilities, the Centers for Medicare and Medicaid Services (CMS) made available to consumers the Nursing Home Compare Web site (http://www.medicare.gov/NHCompare). This site provides consumers with information about Medicare- and Medicaid-certified nursing homes in the United States. It includes the quality-deficiency citations issued by state inspectors, staffing levels, and basic facility characteristics (Fermazin, Canady, Bauer, & Cooper, 2003).

In November 2002, the CMS launched the Nursing Home Quality Initiative, which introduced quality measures (QMs) to supplement the information about citations and staffing found on Nursing Home Compare. These QMs are intended to provide objective information that an individual can use as one indication of how well the nursing home manages various aspects of care provided to residents. The development of the Nursing Home Compare QMs began with the creation of the data set used to calculate the measures. The Minimum Data Set (MDS) was designed by a consortium of academic medical centers under contract with the Health Care Financing Administration (now the CMS) between 1989 and 1991 (Mor, 2004). Although the MDS was originally designed as a comprehensive clinical and functional assessment instrument to improve the care of nursing home residents, it has subsequently been applied to a number of policy functions, including case-mix reimbursement and quality monitoring (Mor).

In the early 1990s, under CMS's Nursing Home Case Mix and Quality Demonstration, the Center for Health Services Research and Analysis at the University of Wisconsin-Madison was charged with developing readily usable facility and quality indicators (QIs) based solely on computerized data from the MDS resident-assessment instrument (Zimmerman et al., 1995). Validation testing of this initial set of QIs demonstrated that the likelihood of quality problems increased with the QI scores, and that the QIs exhibited high levels of stability over two consecutive measurements, although stability varied with the characteristics of the QI (Karon, Sainfort, & Zimmerman, 1999). Subsequently, the CMS contracted with Abt Associates to review existing QIs to determine if they were suitable for public reporting. After cataloguing and evaluating 143 existing QIs, including those used by state surveyors, Abt recommended 39 risk-adjusted QIs to the CMS; 22 of these already existed, and 17 were newly developed. At about the same time, the CMS also contracted with the National Quality Forum to recommend a set of indicators to use in its planned six-state pilot and to develop a core set of indicators for national implementation in late 2002. The CMS initiated a six-state pilot in April 2002, in which facility-specific MDS-based QMs were promulgated for every Medicare- or Medicaid-certified facility in each state. In November 2002, the CMS rolled out a slightly modified set of indicators across the country, replicating the approach developed in the pilot (Mor, 2004).

The development of the QMs was not without controversy. Although there have been a number of studies pointing to the reliability and validity of the MDS data items used to construct the measures, there have been other studies that dispute these findings (Mor, 2004). On one hand, two studies of the reliability of MDS data conducted by the Office of the Inspector General (OIG) found significant discrepancies between OIG reviewer audits and facility records (OIG, 2001a, 2001b). On the other hand, a large field-reliability trial found that 85% of MDS data elements manifest adequate interrater reliability (Mor).

The team that developed the current MDS indicators reported an association between a nursing home's score on the pressure-ulcer-prevalence QM and pressure-ulcer-care processes as evidence of the validity of this QM (Morris, Moore, Jones, & Mor, 2002). However, several recent research studies, although small and focused in a single region, have called into question the accuracy of the QMs. For example, Bates-Jensen, Cadogan, and colleagues (2004) found no differences in nursing care in homes reporting high and low pressure-ulcer prevalence in 16 Southern California nursing homes. Chu, Schnelle, Cadogan, and Simmons (2004) found that 63% of residents with daily pain did not have moderate or serious pain reported on the MDS. In particular, residents with cognitive impairment were significantly less likely to have serious pain documented on the MDS. In addition, Cadogan, Schnelle, Yamamoto-Mitani, Cabrera, and Simmons (2004) found that facilities reporting higher prevalence of pain were more accurate than those reporting lower rates. Bates-Jensen, Schnelle, Alessi, Al-Samarai, and Levy-Storms (2004) observed high levels of residents being left in bed all day, at variance with what was reported in the MDS. Schnelle, Bates-Jensen, Chu, and Simmons (2004) noted inaccurate reporting (mostly underreporting) in these and other care-process areas. The potential for detection or ascertainment bias may be due to differences in the ability of clinical staff to detect patient status and symptoms, particularly in clinical areas that are harder to define or diagnose, such as pain, mood, or early-stage pressure ulcers (Mor et al., 2003). Lack of formal auditing to ensure the accuracy of MDS data (with the exception of states that use MDS data as a basis for payment) was cited as a validity issue in a U.S. General Accountability Office report (GAO, 2002a).

Although conflicting results preclude a definitive judgment regarding the accuracy of QM reporting, it appears that, when used under proper research conditions, the MDS can yield reliable, valid, and sensitive measures of clinical care processes, as well as patient outcomes in important clinical dimensions. However, it appears to have some shortcomings in real-world applications (Mor, 2004; Sangl, Saliba, Gifford, & Hittle, 2005). A recent article suggests that at least some of these shortcomings could be remedied. Arling, Kane, Lewis, and Mueller (2005), after critically evaluating QIs from the perspective of theory, measurement, and application, recommended a number of strategies to improve their comprehensiveness, reliability, validity, and relevance for quality assessment.

Risk-adjustment procedures also were a key area of dispute in QM development. For example, two of the three consultants hired by the National Quality Forum recommended against the use of facility-level risk adjustment at the time of publication, until the approach, along with other risk-adjustment methods, could be validated (GAO, 2002b). Consequently, the final recommendation of the GAO was that nationwide reporting be delayed until the validity of the indicators selected could be ascertained and the appropriateness of the risk-adjustment methodology was better understood.

Despite the GAO recommendation, the CMS moved forward with implementation in November 2002, citing the importance of public accountability and ongoing efforts to improve the QMs. The set of QMs included were based on the National Quality Forum recommendations and had only limited risk adjustment. The CMS advertised the introduction of the QMs in major daily newspapers to help raise awareness of the quality initiative throughout the country and promote consumer use. The Medicare Compare Web site has become one of the most popular sites on Medicare.com (National Quality Forum, 2004).

In this article, we provide an initial descriptive analysis of the first set of QMs that were published on the Medicare Compare Web site between November of 2002 and January 2004. Our objective is to offer a first look at how these reported measures vary across nursing homes by facility type and whether these measures changed over the first year of publication.

Description of Nursing Home QMs


The nursing home QMs are computed by the CMS from resident-assessment data (the MDS) that nursing homes routinely collect on all residents at specified intervals during their stay. The assessment records the resident's physical and clinical conditions and abilities, as well as life-care wishes. The methodology used in calculating the QMs is described in the Quality Measures for National Public Reporting: User's Manual (Abt Associates, 2002b). All of the measures are defined as rates of positive or negative health outcomes. Some have been adjusted to account, at least partially, for differences in case mix across nursing facilities. All QMs exclude residents who have missing MDS-assessment data elements needed for calculation.

The 10 measures chosen for public reporting on the Nursing Home Compare Web site from November 2002 to January 2004 covered two distinct resident populations: postacute residents, who enter a nursing home for a short stay following an acute-care hospitalization, and long-term-care residents, who enter a nursing home for a long period of time (typically until death) because they are no longer capable of living in the community. QMs were calculated for every facility in the United States. Some QMs are risk adjusted by using resident-level covariates, a facility admission profile, or both. In January 2004, the Nursing Home Compare Web site was updated with an enhanced set of 14 QMs (http://www.cms.hhs.gov/nhqi/quality). In this study we focus on the initial set of indicators that were published in the first five reporting periods.
For the chronic-care (long-stay) measures, calculations are based on any resident with a full or quarterly MDS survey in the target quarter beginning April 2002 through June 2002 (http://www.cms.hhs.gov/nhqi/quality). Because sample size affects the stability of facility scores, nursing homes with small resident numbers for a specific QM were excluded from measure calculations (Sangl et al., 2005). The facility had to have at least 30 cases within the QM category during the target time period to be included in this calculation. The six long-stay resident measures consisted of the percentage of residents in physical restraints, or with infections, pain, pressure sores, pressure sores as adjusted by the facility admission profile, or loss of ability in daily tasks:

1. The percentage of residents with loss of ability in daily tasks (Abt Associates, 2002a) measure reflects the percentage of residents who have unexpected, sudden, or rapid loss of ability to do one or more activities of daily living (ADLs). These ADLs include feeding oneself, moving from one chair to another, changing positions while in bed, and going to the bathroom alone.
2. The percentage of residents with infections (Abt Associates, 2002a) measure reflects the percentage of residents who, since being admitted to the facility, have acquired a new infection.
3. The percentage of residents with pain (Abt Associates, 2002a) measure reports the percentage of residents with moderate pain daily, or extreme pain at any frequency.
4. The percentage of residents with pressure sores measure is self-explanatory.
5. The percentage of residents with pressure sores as adjusted by the facility admission profile (Abt Associates, 2002a) measure is also self-explanatory.
6. The percentage of residents in physical restraints (Center for Health System Research and Analysis, 2003) measure reports the percentage of residents in the nursing home who are in physical restraints daily.

For postacute measures, calculations were based on any resident with a 14-day MDS report in the two consecutive target quarters beginning January 2002 through June 2002. The facility had to have at least 20 cases during the target time period to be included in this calculation (http://www.cms.hhs.gov/nhqi/quality). The four postacute (short-stay) measures included the percentage of postacute residents with improved or maintained walking ability, or with pain, delirium, or delirium as adjusted by the facility admission profile:

1. The percentage of postacute residents with delirium measure is self-explanatory.
2. The percentage of postacute residents with delirium as adjusted by the facility admission profile (Abt Associates, 2002a) measure is also self-explanatory.
3. The percentage of postacute residents whose walking ability has improved, or been maintained, since admission (Abt Associates, 2002a) measure is the percentage of postacute residents who walked as well or better on Day 14 as on Day 5 of their stay. This is the only reported measure in which higher percentages are better.
4. The percentage of postacute residents with pain (as already described under long-stay measures) measure is self-explanatory.

The definition, exclusions, resident-level covariates or facility-admission-profile adjustment, and source for each of this first set of QMs are summarized in Table 1.

Table 1. Definition of Quality Measures Reported on Nursing Home Compare, November 2002 through January 2004 (table columns: Quality Measure Definition; Exclusions(a); Resident-Level Covariates; FAP(b); Source, CHSRA or MEGA QI)

Chronic care:
% of residents with an unexpected loss of function in some ADLs
% of residents with infections
% of residents with pain
% of residents with pressure sores
% of residents with pressure sores (FAP adjusted)
% of residents in physical restraints

Postacute care:
% of short-stay residents with delirium
% of short-stay residents with delirium (FAP adjusted)
% of short-stay residents with moderate pain at least daily or horrible or excruciating pain at any frequency
% of short-stay residents who walk as well or better on day 14 as on day 5 of stay

Notes: FAP = facility admission profile; ADL = activity of daily living; CHSRA = Center for Health Systems Research and Analysis of the University of Wisconsin-Madison; MEGA QI = Mega quality indicator.
(a) Other than missing or inconsistent data, and/or failure to trigger the quality measure.
(b) FAP is measured at the facility level, and once calculated, it is the same for every resident in that facility. The FAP reflects the resident population admitted during a 12-month period and is intended to capture the admitting characteristics of individual facilities.

The CMS released the results of a preliminary analysis of changes in quality performance over the first year of Nursing Home Compare reporting, indicating changes in several measures, most notably declines in the rate of reported pain (MedicalNewsService.com, 2004). Our study expands on these preliminary findings by examining whether the reported QMs recorded for the first reporting period, and trends in these measures over the next four periods, differ by facility characteristics that are known from prior studies to be associated with quality of nursing home care. These characteristics include the following: (a) ownership status (Aaronson, Zinn, & Rosko, 1994; Angelelli, Mor, Intrator, Feng, & Zinn, 2003; Arling, Nordquist, & Capitman, 1987; Cohen & Dubay, 1990; Harrington, Woolhandler, Mullan, Carrillo, & Himmelstein, 2001; Harrington, Zimmerman, Karon, Robinson, & Beutel, 2000; Mor et al., 2003; Spector, Seldon, & Cohen, 1998), (b) chain affiliation (Greene & Monahan, 1981; Harrington et al., 2000, 2001), (c) size (Banaszak-Holl, Zinn, & Mor, 1995; Davis, 1991; Greene & Monahan), (d) occupancy rate (Greene & Monahan; Hughes, Lapane, & Mor, 2000), and (e) hospital-based versus freestanding status (Shaughnessy, Schlenker, & Kramer, 1990).

Methods
Source of Data
We downloaded data from the Nursing Home Compare Web site beginning with the first reporting period, which was published in November 2002. Every 3 months, the CMS updates the data on the Web site. The last data download for this analysis was January 2004. In our data set, we have quality measures for five time periods, with each QM based on MDS data spanning the previous 3 months. Because of the lag time associated with MDS data collection and submission to Nursing Home Compare, the five time periods represent data from April 2002 to June 2002, July 2002 to September 2002, October 2002 to December 2002, January 2003 to March 2003, and April 2003 to June 2003.
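To make the panel structure concrete, the sketch below assembles five such downloads into a long-format data set with a reporting-period index t = 0, ..., 4, which is the form used by the trend models described under Statistical Analyses. This is a sketch only: the file names, and the assumption that each download is a CSV with one row per facility and one column per QM, are hypothetical rather than the actual Nursing Home Compare download format.

```python
import pandas as pd

# Hypothetical file names for the five quarterly downloads, in publication order
# (November 2002 through January 2004); the real download format may differ.
downloads = ["nhc_period0.csv", "nhc_period1.csv", "nhc_period2.csv",
             "nhc_period3.csv", "nhc_period4.csv"]

frames = []
for t, path in enumerate(downloads):
    df = pd.read_csv(path)   # assumed: one row per facility, one column per QM (in percent)
    df["t"] = t              # reporting period 0-4; underlying MDS data lag the posting date
    frames.append(df)

# Long-format panel: one row per facility per reporting period.
panel = pd.concat(frames, ignore_index=True)
```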

Variable Measurement

We measured chain affiliation and hospital-based versus freestanding status as dichotomous variables (0, 1). We trichotomized occupancy rate as high if the facility was in the upper quartile of the occupancy distribution (95% occupancy or higher), medium if between the first and third quartiles (78%-94% occupancy), and low if in the bottom quartile of the occupancy distribution (less than 78% occupancy). We classified nursing home ownership status as for-profit, nonprofit, and public (governmentally operated). We trichotomized size as small (fewer than 60 beds, the bottom quartile of the size distribution), medium (between 60 and 124 beds, between the first and third quartiles), and large (more than 125 beds, in the upper quartile of the size distribution).
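As an illustration of these cutpoints, the sketch below derives the occupancy and size categories with pandas. The column names `occupancy_rate` (in percent) and `beds` are hypothetical placeholders, not variable names from the study's data set.

```python
import pandas as pd

def categorize(df: pd.DataFrame) -> pd.DataFrame:
    """Trichotomize occupancy and size using the quartile-based cutpoints in the text."""
    out = df.copy()
    # Occupancy: low = under 78%, medium = 78-94%, high = 95% or higher.
    out["occupancy_cat"] = pd.cut(out["occupancy_rate"],
                                  bins=[0, 78, 95, float("inf")],
                                  labels=["low", "medium", "high"],
                                  right=False)
    # Size: small = fewer than 60 beds, medium = 60-124 beds, large = 125 beds or more.
    out["size_cat"] = pd.cut(out["beds"],
                             bins=[0, 60, 125, float("inf")],
                             labels=["small", "medium", "large"],
                             right=False)
    # Ownership, chain affiliation, and hospital-based status are simple categorical
    # or 0-1 indicators and need no binning.
    return out
```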

Statistical Analyses
We performed robust linear regression analyses to determine if the score at baseline and the pattern of change in these measures over the first five reporting periods were related to nursing home characteristics. We estimated separate models for each QM and for each facility characteristic of the following form (using chain affiliation as an example):

QMi,t = a + b × t + c × Chaini,t + d × t × Chaini,t

Each model included a time variable (t) to capture the average trend in the QM over the 15-month period. Each model also included one of the five facility characteristics, to allow us to test the hypothesis that at Time 0, that is, the time of the first publication, facilities that differ by this characteristic have different QM scores (the Chain dummy variable in the equation indicates whether the facility is in a chain or not). Lastly, each model included an interaction of time and facility characteristic (t × Chain). This last term allows us to test the hypothesis that the trend was different for facilities with different characteristics.

To interpret the results from this regression, note that the values of QMi,t for facilities that are not part of a chain are given by setting Chain = 0, and for facilities that are in a chain by setting Chain = 1. Therefore, a is the average value of QMi,t at t = 0, that is, at baseline, for nonchain facilities, and b is the amount that reported quality changes each time period for nonchain facilities. Here c is the difference in QMi,t at t = 0 between chain and nonchain facilities, and d is the difference in the linear trend over time between chain and nonchain facilities. Thus, if c and d are not significantly different from zero, we conclude that there are no significant differences between the reported QM for chain and nonchain facilities at baseline and over time, respectively.
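To make the specification concrete, the sketch below fits the chain-affiliation model on a simulated long-format panel. This is not the authors' code: the column names are hypothetical, the data are simulated only so the example runs, and an ordinary least squares fit with heteroskedasticity-robust (HC1) standard errors is used as one plausible reading of "robust linear regression."

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format panel: one row per facility per reporting period (t = 0-4).
rng = np.random.default_rng(0)
n_fac, n_periods = 500, 5
panel = pd.DataFrame({
    "t": np.tile(np.arange(n_periods), n_fac),
    "chain": np.repeat(rng.integers(0, 2, n_fac), n_periods),
})
# QM_{i,t} = a + b*t + c*Chain + d*(t x Chain) + noise, with made-up coefficients.
panel["qm"] = (10.0 - 0.3 * panel["t"] + 1.0 * panel["chain"]
               - 0.1 * panel["t"] * panel["chain"]
               + rng.normal(0, 2, len(panel)))

# 'qm ~ t * chain' expands to t + chain + t:chain, matching the equation above.
fit = smf.ols("qm ~ t * chain", data=panel).fit(cov_type="HC1")
print(fit.params)
# Intercept = a (baseline, nonchain), t = b (trend, nonchain),
# chain = c (baseline difference), t:chain = d (difference in trend).
```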

Results

Nursing Home Characteristics and QM Performance at the Initial Reporting Period (November 2002)

Table 2 presents a comparison of the six long-stay resident QM scores by facility type at the time of the first publication (baseline), based on predicted values from the regressions. Within each organizational category, the differences in QM score ranged from 3% to 26%, depending on type, with an average difference across all categories of 11.8%. For example, the percentage of residents with pain is 17.6% higher in hospital-based facilities than in freestanding facilities [(12.63 - 10.74)/10.74 = 17.6%]. With a few exceptions, nonprofit, nonchain, smaller sized, and high-occupancy facilities reported better QM scores at the initial reporting period. Hospital-based facilities reported better scores only for physical restraints and infections. Public facilities were similar to nonprofit facilities and generally reported better scores than for-profit facilities. The results for the facility-admission-profile adjusted and nonadjusted pressure-sore measures were consistent, except that the effect of size was lower for the risk-adjusted measure. The main exceptions occurred with pain: medium- and large-size and freestanding facilities report better QM scores, and public facilities report worse results.
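The percentage differences quoted here are relative differences between the predicted baseline means; for example, for the long-stay pain QM by hospital-based status, using the values reported in Table 2:

```python
# Relative difference, as used in the bracketed calculation in the text.
freestanding, hospital_based = 10.74, 12.63
relative_difference = (hospital_based - freestanding) / freestanding
print(f"{relative_difference:.1%}")   # -> 17.6%
```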

Table 2. Mean Quality Measure Scores at Baseline by Nursing Home Characteristics for Long-Stay Residents (mean % of residents with each outcome)

Nursing Home Characteristic | Facilities With Characteristic (%) | Loss of Basic Daily Tasks | Pressure Sores | Pressure Sores, Risk Adjusted | Pain | Physical Restraints | Infection
No. of facilities with scores per QM | -- | 13,342 | 13,971 | 13,971 | 13,801 | 13,972 | 13,481
Ownership: For profit | 65 | 15.33 | 8.71 | 8.62 | 10.67 | 10.6 | 14.80
Ownership: Nonprofit | 29 | 15.76* | 7.82* | 8.07* | 11.02 | 8.00* | 14.06*
Ownership: Public | 6 | 14.65 | 6.94* | 7.29* | 11.83* | 8.84* | 13.50*
Chain: No | 46 | 14.76 | 8.11 | 8.15 | 10.54 | 9.74 | 13.73
Chain: Yes | 54 | 15.91* | 8.62* | 8.61* | 11.05* | 9.93 | 15.2*
Size: Small | 24 | 15.32 | 7.22 | 7.50 | 11.87 | 8.77 | 14.03
Size: Medium | 51 | 15.49 | 8.30* | 8.40* | 11.08* | 10.16* | 14.87*
Size: Large | 25 | 15.25 | 9.09* | 8.83* | 9.87* | 9.69* | 14.13
Occupancy: Low | 24 | 15.32 | 8.56 | 8.72 | 11.19 | 11.14 | 14.58
Occupancy: Medium | 45 | 15.55 | 8.63 | 8.56 | 10.89 | 9.72* | 14.59
Occupancy: High | 30 | 15.23 | 7.95* | 8.02* | 10.52* | 9.25* | 14.47
Hospital based: No | 89 | 15.40 | 8.38 | 8.40 | 10.74 | 9.95 | 14.59
Hospital based: Yes | 11 | 15.45 | 8.66 | 8.50 | 12.63* | 7.67* | 13.78*

Notes: QM = quality measure; N = 16,559 facilities. Means and statistical significance are based on the estimated regression models. Percentages of facilities do not add to 100 because there were 205 facilities with missing occupancy data.
*Indicates significant difference with respect to the reference category (listed first) at the p < .01 level.

Table 3 presents the reported mean QM scores at baseline for the short-stay resident QMs by facility characteristics. Compared with the long-stay measures, the range of effect sizes was larger for short-stay residents (from about 5% to 62%), averaging 24.9% across all categories. The largest differences were for delirium and pain. For example, the percentage of short-stay residents with pain at baseline is 62% higher in hospital-based facilities than in freestanding facilities [(39.32 - 24.32)/24.32 = 62%]. For short-stay residents, the pattern of association with facility characteristics for delirium and pain was similar. Generally, for-profit, medium- and large-sized, medium- and high-occupancy, chain, and freestanding facilities reported better outcomes. Fewer differences were shown for the walking measure, although nonchain and nonprofit facilities reported better scores.

The pattern of results for short-stay residents was different from findings for long-stay residents for some facility characteristics. For example, although public and nonprofit facilities had better scores on long-stay measures, for-profit facilities did better on short-stay measures, with the exception of the measure for walking ability. In addition, whereas nonchain facilities did better on most long-stay measures, chain facilities did better on pain for short-stay residents. Thus, we do not see a consistent effect across all measures for either ownership status or chain affiliation. However, the effects of size, occupancy, and hospital-based status were similar for the two categories of measures.

Tables 2 and 3 also report the number of facilities reporting on each QM. About 60% of facilities report the short-term QMs and 80% report the long-term QMs. QMs are not reported when a facility did not have at least 30 residents meeting the criteria for calculating the specific QM. Thus, small facilities or facilities that do not care for specific types of residents (e.g., short-term postacute patients) are excluded from the report card and from our analysis.

Table 3. Mean Quality Measure Scores at Baseline by Nursing Home Characteristics for Short-Stay Residents (mean % of residents with each outcome)

Nursing Home Characteristic | Delirium | Delirium, Risk Adjusted | Pain | Walk as Well or Better
No. of facilities with scores per QM | 9,806 | 9,806 | 9,928 | 9,164
Ownership: For profit | 3.51 | 3.5 | 24.06 | 29.54
Ownership: Nonprofit | 4.38* | 3.93* | 30.37* | 31.59*
Ownership: Public | 4.59* | 4.22* | 29.96* | 30.38
Chain: No | 3.86 | 3.63 | 26.82 | 31.15
Chain: Yes | 3.74 | 3.65 | 25.42* | 29.45*
Size: Small | 5.22 | 4.68 | 35.84 | 30.57
Size: Medium | 3.68* | 3.53* | 25.07* | 30.18
Size: Large | 3.39* | 3.39* | 23.40* | 29.89
Occupancy: Low | 4.43 | 4.18 | 28.91 | 30.03
Occupancy: Medium | 3.73* | 3.58* | 25.47* | 30.16
Occupancy: High | 3.45* | 3.36* | 24.80* | 30.15
Hospital based: No | 3.57 | 3.48 | 24.32 | 30.09
Hospital based: Yes | 5.58* | 4.96* | 39.32* | 30.45

Notes: QM = quality measure. Means and statistical significance are based on the estimated regression models.
*Indicates significant difference with respect to the reference category (listed first) at the p < .01 level.

Trends in QMs Over Time

Although the statistical analysis identified significant trends at the 0.01 level for all but one of the QMs, an inspection of Figure 1 shows that, for some QMs, no real trend is apparent and the overall change between the QM at the first publication and the QM at the fifth is minimal. Pain (both long term and short term), physical restraints, and delirium (with and without risk adjustment) exhibit a clear downward trend, with differences between the first QM reporting period and the fifth ranging from 12.7% to 46.0%. Thus, those measures that show a clear trend are all declining, and the decline seems substantial, given the relatively short period (15 months) over which they are measured. The other measures (pressure sores, infections, walking ability, and loss in ADLs) first decline and then increase, and the overall change in the QM does not exceed 6%. Therefore, we believe it is premature to conclude that there is a trend in these latter measures.

Figure 1. Trends in quality measures.

The regression analyses allowed us to examine differences in trends by facility characteristics. Among the five QMs that exhibit a significant decline (i.e., improvement in long- and short-stay pain, physical restraints, and the two delirium measures), relatively few show a significant difference in the trend by facility characteristic. Of the 40 comparisons, only 8 were statistically significant. These are shown in Figure 2. An inspection of these charts shows that although the differences in the rate of decline may be substantial, the trend lines never cross. Nevertheless, differences in reported measures are typically reduced by the fifth reporting period. In three cases (delirium by occupancy rate, short-stay pain by chain status, and long-stay pain by hospital status), the decline in differences associated with facility characteristic is notable. In the delirium case, facilities with low occupancy reported about a 25% higher rate at onset than did high-occupancy facilities, but by the end of the year, the difference between low- and high-occupancy levels was reduced to about 15%. For pain (short stay) at the first reporting period, nonchain facilities reported a 4% higher rate than did chain facilities, declining to 2% by the end of the year. For pain (long stay) at the first reporting period, hospital-based facilities reported a 13% higher rate than did freestanding facilities, but this declined to 6% by the end of the year.

Figure 2. Trends in quality measures by facility characteristic. The results shown for delirium are unadjusted rates.

Discussion
The CMS began publicly reporting QMs as part of its quality initiative for nursing homes, with the expectation that the information would be useful to consumers when choosing a nursing home and for facilities in their quality-improvement efforts. This article provides an early examination of the association between initial reports of these measures and early trends by common facility characteristics. We have three important findings: (a) there are strong associations between baseline differences and organizational characteristics; (b) there are clear time trends for 5 of the 10 measures, and all are toward improved outcomes (i.e., lower prevalence rates of adverse outcomes); and (c) in general, most organizational characteristics reviewed herein are not associated with differences in trends over time, except for a few instances. Furthermore, in these cases, any differences associated with facility characteristics observed at the beginning narrow with time.

Our findings that QMs differ at baseline by organizational characteristics are consistent with prior research showing that facility attributes are associated with quality of care (Greene & Monahan, 1981; Spector et al., 1998; Spector & Takada, 1991; Zinn, Aaronson, & Rosko, 1993). In this study, for-profit and chain-affiliated facilities appear to do better on QMs (pain and delirium) for short-stay, postacute residents. Small, independent, nonprofit, high-occupancy facilities had better reported performance on QMs for long-stay residents. This suggests that some types of facilities may specialize in different aspects of nursing home care.

The trend toward improvement in five of the QMs suggests that the Medicare Compare report card, and possibly other aspects of the CMS Nursing Home Quality Initiative, may have the expected impact of improving nursing home care. However, the fact that only some measures exhibit a distinct trend raises the question of what distinguishes these measures from those that do not show a trend. Because there are a number of alternative explanations that could account for the observed positive trends on some QMs, we can only speculate on this issue. One possibility is that nursing homes that are operating with limited resources target those QMs with the highest rates, indicating the areas requiring the most attention. Indeed, the prevalence of short-term resident pain, one of the measures showing a distinct downward trend, had one of the highest prevalence rates, at over 25%. In contrast, the walking ability measure for short-term residents, with a prevalence of about 30%, did not exhibit any improvement. The lack of improvement in walking and in pressure sores may be because it is more difficult for nursing homes to affect these outcomes, especially within the short run, that is, the 15 months for which we have examined the trends. This suggests that the lack of a distinct trend toward improvement may not be due to lack of effort but rather to other factors, such as ability to affect changes in the short run. Finally, as previously noted, at least one study found no differences in nursing care in homes reporting high and low pressure ulcer prevalence (Bates-Jensen, Cadogan, et al., 2004). This suggests that the level of pressure ulcer care provided in some homes may not be good enough to make a difference.


It is also possible that some of the observed improvements reflect changes in data-reporting practices rather than actual improvements in care. The potential for detection or ascertainment bias may be due to differences in the ability of clinical staff to detect patient status and symptoms, particularly in clinical areas that are harder to define or diagnose, such as pain, mood, or early-stage pressure ulcers (Mor et al., 2003). Pain in particular may be subject to reporting bias. A recent study found that pain was underassessed in facilities lacking pain specialists, suggesting that the documentation of low levels of pain may actually reflect incomplete assessment (Wu, Miller, Lapane, & Gozalo, 2003). Standards for the assessment of pain may vary as a function of differences in training in addition to underlying biological differences (Mor et al., 2003). Finally, similar to DRG creep (i.e., coding patients in the clinical category that generates the most revenue) observed in hospitals after the implementation of prospective payment, positive changes in pain and delirium may reflect perverse incentives to upcode in areas that are more difficult to audit.

The current set of QMs published on the Nursing Home Compare Web site is different from the original set we report on in several respects: new measures have been added, some from the original set have been eliminated, and, for some, risk adjustment was modified. The publication of the QMs was intended to be an evolutionary process. In commenting on the recommendations of the GAO report, the CMS reiterated its commitment to continuously improve the indicators as more evidence of validity and reliability becomes available. It is probably a good thing to rotate the published QMs to avoid "teaching to the test" incentives, that is, incentives to focus only on improving the published QMs to the possible detriment of other areas of patient care. The downside of changing QMs is that we will not be able to follow trends for the eliminated QMs in the future.

The limited association between organizational characteristics and improvement trends raises concerns that the gaps in outcomes by facility type at baseline may persist. An obvious objective for report cards is to provide incentives for all facilities to improve quality, and in particular for those who lag behind. In a few instances we indeed observe that gaps are closing. For example, the differences in pain among short-stay residents between chain and nonchain facilities decreased over the period. However, most of the gaps did not close. Researchers should consider several caveats when interpreting these findings. It is not clear whether the changes observed over time are attributable to quality improvement or to changes in data reporting over time. For example, is the decline in the reported prevalence of pain attributable to better treatment of pain or to less diligence by nursing homes in documenting pain? In addition, any trend in quality improvement may have preceded, rather than have been precipitated by, QM publication. One would have to examine values of these measures for periods prior to publication to determine if Nursing Home Compare reporting is influencing observed trends. In addition, because the QMs are outcome measures, their validity depends on whether they have been appropriately adjusted for differences in case mix. If the measures are not appropriately adjusted, then nursing homes could improve their QM score by changing their admission policies toward a less frail population that may be at lower risks for these adverse outcomes. In addition, if facilities do not measure the QMs comparably, measurement bias may influence the findings. Thus, researchers must pursue further studies to address these questions in order to establish the effectiveness of the reported measures in quality improvement. Nevertheless, report cards clearly do serve an important function in focusing attention on where quality improvement is needed for facilities, regulators, and consumers.

Finally, it is important to understand how facilities that have successfully made improvements were able to make these strides. What were the most important changes in the quality-assurance process that improved QM scores? Were the efforts of the quality improvement organizations instrumental in effecting improvements? Were facilities that worked with the quality improvement organizations to improve specific QMs successful? How can we change quality-assurance programs for ADL, walking, and pressure-sore QMs where no declining trends have occurred? The answers to these questions suggest the need for in-depth evaluation studies that go beyond whether nursing home quality has improved under the Nursing Home Quality Initiative, to provide insights into how any gains were achieved and how improvements could be made in the future.

References
Aaronson, W., Zinn, J., & Rosko, M. (1994). Do for-profit and not-for-profit facilities behave differently? The Gerontologist, 34, 775-786.
Abt Associates. (2002a). MegaQI covariate analysis and recommendations: Identification and evaluation of existing quality indicators that are appropriate for use in long-term-care settings. Retrieved November 21, 2004, from http://www.cms.hhs.gov/quality/nhqi/CovariatesQPM.pf
Abt Associates. (2002b). Quality measures for national public reporting: User's manual. Washington, DC: Sponsored by the Centers for Medicare and Medicaid Services, U.S. Department of Health and Human Services.
Abt Associates, HRCA Research and Training Institute, & Brown University. (2002, August 2). Validation of long-term and post-acute care quality indicators (Final draft report prepared for the Centers for Medicare and Medicaid Services, Office of Clinical Standards and Quality). Washington, DC: U.S. Department of Health and Human Services.
Angelelli, J., Mor, V., Intrator, O., Feng, Z., & Zinn, J. (2003). Oversight of nursing homes: Pruning the tree or just spotting bad apples? The Gerontologist, 43(Special Issue 2), 67-75.
Arling, G., Kane, R. L., Lewis, T., & Mueller, C. (2005). Future development of nursing home quality indicators. The Gerontologist, 45, 147-156.
Arling, G., Nordquist, R. H., & Capitman, J. A. (1987). Nursing home cost and ownership type: Evidence of interaction effects. Health Services Research, 22, 255-269.
Banaszak-Holl, J., Zinn, J. S., & Mor, V. (1995). The impact of market and organizational characteristics on nursing care facility service innovation: A resource dependency perspective. Health Services Research, 31, 97-118.
Bates-Jensen, B. M., Cadogan, M., Osterweil, D., Al-Samarai, N. R., Jorge, J., Grbic, V., et al. (2004). The minimum data set pressure ulcer indicator: Does it reflect differences in care processes related to pressure ulcer prevention and treatment in nursing homes? Journal of the American Geriatrics Society, 51, 1203-1212.
Bates-Jensen, B. M., Schnelle, J. F., Alessi, C. A., Al-Samarai, N. R., & Levy-Storms, L. (2004). The effects of staffing on in-bed times of nursing home residents. Journal of the American Geriatrics Society, 52, 931-938.
Cadogan, M. P., Schnelle, J. F., Yamamoto-Mitani, N., Cabrera, G., & Simmons, S. F. (2004). A minimum data set prevalence of pain quality indicator: Is it accurate and does it reflect differences in care processes? Journal of Gerontology: Medical Sciences, 59A, M281-M285.
Center for Health System Research and Analysis. (2004). University of Wisconsin. Retrieved November 21, 2004, from http://www.chsra.wisc.edu
Chu, L., Schnelle, J. F., Cadogan, M. P., & Simmons, S. F. (2004). Using the minimum data set to select nursing home residents for interview about pain. Journal of the American Geriatrics Society, 52, 2057-2061.
Cohen, J. W., & Dubay, L. C. (1990). The effect of Medicaid reimbursement method and ownership on nursing home costs, case mix, and staffing. Inquiry, 27, 183-200.
Davis, M. A. (1991). On nursing home quality: A review and analysis. Medical Care Review, 48, 129-166.
Fermazin, M., Canady, M., Bauer, K., & Cooper, L. (2003). Nursing Home Compare: Web site offers critical information to consumers, professionals. Lippincott's Case Management, 8, 175-183.
Greene, V. L., & Monahan, D. J. (1981). Structural and operational factors affecting quality of patient care in nursing homes. Public Policy, 29, 399-415.
Harrington, C., & Carrillo, H. (1999). The regulation and enforcement of federal nursing home standards, 1991-1997. Medical Care Research and Review, 56, 471-494.
Harrington, C., Woolhandler, S., Mullan, J., Carrillo, H., & Himmelstein, D. U. (2001). Does investor ownership of nursing homes compromise the quality of care? American Journal of Public Health, 91, 1452-1455.
Harrington, C., Zimmerman, D., Karon, S. L., Robinson, J., & Beutel, P. (2000). Journal of Gerontology: Social Sciences, 55B, S278-S286.
Hughes, C. M., Lapane, K. L., & Mor, V. (2000). Influence of facility characteristics on use of antipsychotic medications in nursing homes. Medical Care, 38, 1164-1173.
Institute of Medicine. (1986). Improving the quality of care in nursing homes. Washington, DC: National Academy Press.


Karon, S. L., Sainfort, F., & Zimmerman, D. R. (1999). Stability of nursing home indicators over time. Medical Care, 37, 570-579.
MedicalNewsService.com. (2004). Enhanced set of quality measures now available at Medicare's easier-to-use Nursing Home Compare. Retrieved March 10, 2004, from medicalnewsservices.com
Mor, V. (2004). A comprehensive clinical assessment tool to inform policy and practice: Applications of the minimum data set. Medical Care, 42, III50-III59.
Mor, V., Berg, K., Angelelli, J., Gifford, D., Morris, J., & Moore, T. (2003). The quality of quality measurement in U.S. nursing homes. The Gerontologist, 43, 37-46.
Morris, J. N., Moore, T., Jones, R., & Mor, V. (2002). Executive summary. Validation of long-term and post-acute care quality indicators. CMS contract no. 500-95-0062/T.O. #4. Baltimore: CMS Office of Clinical Standards and Quality.
Mukamel, D. B., & Spector, W. D. (2003). Quality report cards and nursing home quality. The Gerontologist, 43(Special Issue 2), 58-66.
National Quality Forum. (2004). National voluntary consensus standards for nursing home care: A consensus report (Report No. NQFCR-06-04). Washington, DC: U.S. Department of Health and Human Services.
Nursing Home Quality Initiative. (2002, November). Overview. Baltimore, MD: Centers for Medicare and Medicaid Services.
Office of the Inspector General. (2001a). Nursing home assessment resource utilization groups. Washington, DC: U.S. Department of Health and Human Services.
Office of the Inspector General. (2001b). Nursing home resident assessment qualities of care. Washington, DC: U.S. Department of Health and Human Services.
Office of the Inspector General. (2004, June). Inspection results on Nursing Home Compare: Completeness and accuracy (Report No. OEI-01-00130). Washington, DC: U.S. Department of Health and Human Services.
Sangl, J., Saliba, D., Gifford, D. R., & Hittle, D. F. (2005). Challenges in measuring nursing home and home health quality: Lessons from the first national healthcare quality report. Medical Care, 43, I24-I32.
Schnelle, J. F., Bates-Jensen, B. M., Chu, L., & Simmons, S. F. (2004). Accuracy of nursing home medical record information about care-process delivery: Implications for staff management and improvement. Journal of the American Geriatrics Society, 52, 1378-1383.
Shaughnessy, P. W., Schlenker, R. E., & Kramer, A. M. (1990). Quality of long-term care in nursing homes and swing bed hospitals. Health Services Research, 25, 65-96.
Spector, W. D., Seldon, T. M., & Cohen, J. W. (1998). The impact of ownership type on nursing home outcomes. Health Economics, 7, 639-653.
Spector, W. D., & Takada, H. A. (1991). Characteristics of nursing homes that affect resident outcomes. Journal of Aging and Health, 3, 427-454.
U.S. General Accountability Office. (1999). Nursing homes: Additional steps needed to strengthen enforcement of federal quality standards (Report to the Special Committee on Aging, U.S. Senate, Publication No. GAO/HEHS-99-46). Washington, DC: Author.
U.S. General Accountability Office. (2002a). Nursing homes: Federal efforts to monitor resident assessment data should complement state efforts (Publication No. GAO-02-279). Washington, DC: Author.
U.S. General Accountability Office. (2002b). Nursing homes: Public reporting of quality indicators has merit but national implementation is premature (Publication No. GAO-03-187). Washington, DC: Author.
Wu, N., Miller, S. C., Lapane, K., & Gozalo, P. (2003). The problem of assessment bias when measuring the hospice effect on nursing home residents' pain. Journal of Pain and Symptom Management, 26, 998-1012.
Zimmerman, D. R., Karon, S. L., Arling, G., Clark, B. R., Collins, T., Ross, R., et al. (1995). Development and testing of nursing home quality indicators. Health Care Financing Review, 16, 107-127.

Received November 12, 2004
Accepted June 28, 2005
Decision Editor: Linda S. Noelker, PhD

