Abstract

Meta-analysis is a statistical procedure that integrates the results of at least two independent studies. The biggest threats to meta-analysis are publication bias due to missing studies with negative results and low-quality evidence due to methodological limitations imposed by included studies. Tools to improve the quality of meta-analysis have been developed by the Cochrane Collaboration and by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Meta-analyses of trials have demonstrated that pain responses in patients with chronic pain, following treatment, are not normally distributed but have a bimodal distribution, with the majority of patients having either very little or very good pain relief. The benefit can be detected within 2–4 weeks following drug administration. Further, the efficacy of drug and physical treatments is hampered by high placebo response rates, with modest average benefits of active treatments over placebo in both parallel and crossover design trials.

Keywords: Systematic review; Meta-analysis; Chronic pain; Efficacy; Placebo response; Nocebo response

© 2015 Elsevier Ltd. All rights reserved.
* Corresponding author. Department of Internal Medicine 1, Klinikum Saarbrücken, Winterberg 1, D-66119 Saarbrücken, Germany. Tel.: +49 681 9632020; fax: +49 681 9632022.
E-mail address: whaeuser@klinikum-saarbruecken.de (W. Häuser).
http://dx.doi.org/10.1016/j.berh.2015.04.021
1521-6942/© 2015 Elsevier Ltd. All rights reserved.
Please cite this article in press as: Häuser W, Tölle TR, Meta-analyses of pain studies: What we have learned, Best Practice & Research Clinical Rheumatology (2015), http://dx.doi.org/10.1016/j.berh.2015.04.021

W. Häuser, T.R. Tölle / Best Practice & Research Clinical Rheumatology xxx (2015) 1–16
Introduction
Most clinicians find that the amount of information in the medical literature is overwhelming. New studies are constantly being published, and clinicians are finding it nearly impossible to stay current, even in their own area of specialty [1]. Increasing numbers of review articles are therefore published that summarize the literature on a given topic to keep researchers and clinicians up to date. A narrative review, using informal and subjective methods to collect and interpret the evidence, is often written by experts in the field. By contrast, a systematic review (SR) is a critical assessment and evaluation of all research studies that address a particular clinical issue, using specific criteria that provide a validated and organized method of locating, assembling, and evaluating the body of literature on a particular topic. An SR typically includes a description of the findings of the collected research studies, and it may also include a quantitative pooling of data, called a meta-analysis [2].
High-quality evidence-based guidelines are based on SRs with meta-analyses conducted for the topics of the guideline, for example, the Canadian [3,4] and German [4–11] guidelines on opioid therapy in chronic noncancer pain (CNCP). Meta-analyses are also increasingly being used by drug regulatory agencies as evidence of the safety and efficacy of new drugs [12].
The number of SRs with meta-analysis in the area of pain research has increased seven- to eightfold in the last decade. A PubMed search for the words "meta-analysis" and "chronic pain" in the title yielded 22 articles in the year 2000 and 154 articles in 2014 (see Fig. 1).
Meta-analysis is a powerful but also controversial tool because several conditions are critical to a
sound meta-analysis, and small violations of those conditions can lead to misleading results and
conclusions [1]. Pain therapists and researchers, therefore, should be acquainted with the methods and
pitfalls of SRs with meta-analysis.
The aims of the article are as follows: (a) to introduce the basic concepts of meta-analysis, (b) to
discuss its caveats, and (c) to highlight lessons learned from recent meta-analyses of randomized
controlled trials (RCTs) in chronic pain conditions for study investigators and clinicians.
Meta-analysis is a statistical procedure that integrates the results of several (at least two) inde-
pendent studies considered to be combinable. Meta-analysis should be viewed as an observational
study of the evidence [13]. Meta-analyses can be performed with RCTs as well as with observational
studies. The main objectives of a meta-analysis are summarized in refs. [1,14].
Fig. 1. Hits for "meta-analysis" and "chronic pain" in PubMed from 1987 to 2014.
Most meta-analyses are restricted to the reported study results, that is, to pooled data, for example, mean pain intensity scores of the study sample. Individual patient data (IPD) meta-analysis requires that the individual results of each patient be provided by the researchers and/or pharmaceutical companies, and it can therefore rarely be conducted [15].
In most RCTs on chronic pain, drugs are compared to placebo, and psychological therapies are compared to treatment as usual or waiting-list controls. Head-to-head comparisons of active drugs, or of psychological therapies to drugs, are rarely conducted, leaving uncertainty about which treatment is preferred. Network meta-analysis is a process in which multiple treatments (that is, three or more) are compared using both direct comparisons of interventions within RCTs and indirect comparisons across trials based on a common comparator [16].
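The indirect-comparison step at the core of a network meta-analysis can be sketched in a few lines. This is a minimal illustration of the Bucher adjusted indirect comparison; the effect sizes and standard errors are hypothetical, not taken from the trials discussed here.

```python
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Bucher adjusted indirect comparison of treatments A and B
    via a common comparator C (e.g., placebo).
    d_ac, d_bc: effect sizes of A vs C and B vs C (e.g., SMDs).
    Returns the indirect A-vs-B effect and its standard error."""
    d_ab = d_ac - d_bc                       # indirect effect estimate
    se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances of the two direct comparisons add
    return d_ab, se_ab

# Hypothetical inputs: drug A vs placebo SMD -0.40 (SE 0.10),
# drug B vs placebo SMD -0.25 (SE 0.12)
d, se = indirect_comparison(-0.40, 0.10, -0.25, 0.12)
ci = (d - 1.96 * se, d + 1.96 * se)
print(round(d, 2), round(se, 3))  # -0.15 0.156
```

Because the indirect estimate inherits the uncertainty of both direct comparisons, its confidence interval is wider than either of them, which is why network meta-analyses gain precision when direct head-to-head evidence is also available.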
Examples:
a) IPD meta-analysis of RCTs: Although acupuncture is widely used for chronic pain, there remains considerable controversy as to its value. Vickers et al. determined the effect size of acupuncture for four chronic pain conditions (back and neck pain, osteoarthritis, chronic headache, and shoulder pain) by an SR identifying RCTs of acupuncture for chronic pain in which allocation concealment was determined unambiguously to be adequate. IPD meta-analyses were conducted using data from 29 of 31 eligible RCTs, with a total of 17,922 patients analyzed. In the primary analysis, including all eligible RCTs, acupuncture was superior to both sham and no-acupuncture control for each pain condition. The difference in effect size was −0.45 (95% confidence interval (CI) −0.78, −0.12; p = 0.007) or −0.19 (95% CI −0.39, 0.01; p = 0.058) after exclusion of outlying studies showing very large effects of acupuncture [17].
b) Meta-analyses of observational studies: The importance of self-reported sexual and physical abuse in patients with fibromyalgia syndrome (FMS) remains a matter of debate. An SR identified 18 eligible case–control studies with 13,095 subjects. There were significant associations between FMS and self-reported physical abuse in childhood (odds ratio (OR) 2.49 (95% CI 1.81–3.42)) and adulthood (OR 3.07 (95% CI 1.01–9.39)) and sexual abuse in childhood (OR 1.94 (95% CI 1.36–2.75)) [18].
c) Network meta-analysis: Only very few head-to-head comparisons of drug-to-drug or of drug-to-non-pharmacological treatment are available in FMS. One hundred and two (102) trials in 14,982 patients and eight active interventions (tricyclic antidepressants, selective serotonin reuptake inhibitors (SSRIs), serotonin noradrenaline reuptake inhibitors (SNRIs), the calcium-channel modulator pregabalin, aerobic exercise, balneotherapy, cognitive behavioral therapy (CBT), and multicomponent therapy) were included. Most of the trials were small and hampered by limited methodological quality, introducing heterogeneity and inconsistency into the network. When restricted to large trials with ≥100 patients per group, heterogeneity was low and the benefits of SNRIs and pregabalin compared with placebo were statistically significant, but small. For non-pharmacological interventions, only one large trial of CBT was available. In medium-sized trials with ≥50 patients per group, multicomponent therapy showed small to moderate benefits over placebo, followed by aerobic exercise and CBT [19].
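The odds ratios with confidence intervals quoted in example b) are computed, study by study, from 2×2 tables before pooling. A minimal sketch with hypothetical counts (not the data of ref. [18]):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio and Wald 95% CI from a 2x2 case-control table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for a single case-control study
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.67 1.42 5.02
```

An OR whose 95% CI excludes 1 is conventionally read as a significant association, as with the childhood physical-abuse estimate in example b); a CI crossing 1 would not be.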
Table 1
Potential pitfalls of meta-analysis and countermeasures [1,14,20].

Incomplete literature search: Searching grey literature (literature that is not formally published, that is, personal communications, conference abstracts); checking databases of clinical trials; including studies in all languages
Lack of data: Contacting study authors; imputation methods
Mistakes in data extraction: At least two independent authors using standardized forms
Quality of data: Detailed rating of the quality of studies; focus on high-quality studies or stratified analysis according to study quality
Evaluation of results: Heterogeneity of results (forest plots, subgroup analyses); analyzing data using various methods (e.g., fixed- and random-effects models) and presenting results when some studies are removed from the analysis
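The fixed- and random-effects analyses mentioned as countermeasures in Table 1 can be sketched in a few lines. This is a bare-bones inverse-variance and DerSimonian–Laird implementation with hypothetical effect sizes; a real analysis would use a dedicated meta-analysis package.

```python
import math

def pooled_fixed_random(effects, ses):
    """Inverse-variance fixed-effect and DerSimonian-Laird
    random-effects pooling of per-study effect sizes.
    effects: study estimates (e.g., SMDs); ses: their standard errors."""
    w = [1.0 / se**2 for se in ses]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to every study's variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]
    random_ = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    return fixed, random_, tau2

# Hypothetical SMDs and standard errors from five small trials
fixed, random_, tau2 = pooled_fixed_random(
    [-0.5, -0.3, -0.6, -0.1, -0.4], [0.10, 0.15, 0.20, 0.12, 0.18])
print(round(fixed, 2), round(random_, 2), round(tau2, 3))
```

When the studies are homogeneous (tau² = 0) the two models coincide; when they are heterogeneous, the random-effects estimate down-weights large studies and widens the confidence interval, which is why Table 1 recommends running both and reporting sensitivity analyses.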
not with other effect sizes. No significant interaction between subgroup and strength of association was found according to the type of controls (e.g., healthy persons or persons with other chronic diseases), study setting (continent and type of study population), sex composition of the FMS samples, type of FMS diagnosis (American College of Rheumatology (ACR) versus other criteria), or type of assessment (questionnaire versus interview) [18].
There are some important tools available to improve the quality of meta-analyses (see Table 2).
The prior registration of an SR with meta-analysis should prevent the risk of multiple reviews addressing the same question, reduce publication bias, and provide greater transparency when updating SRs. Recently, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols (PRISMA-P) statement published a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the SR [21]. The Cochrane Collaboration requires the approval of the protocol of a review: the protocol of a projected review undergoes a detailed review. The Cochrane Collaboration also provides a detailed handbook on how to perform SRs and meta-analyses [14]. The 27 PRISMA checklist items pertain to the content of an SR and meta-analysis, including the title, abstract, methods, results, discussion, and funding [22].
Along with the growing demands on methodological quality, SRs with meta-analysis have developed into extensive publications, the details of which are difficult for clinicians and laypersons to understand. Tools for disseminating knowledge and facilitating understanding by nonexperts have therefore been developed.
The Cochrane Library is a subscription-based database, now published by John Wiley & Sons, Ltd., as part of Wiley Online Library (http://onlinelibrary.wiley.com/). In many countries, including
parts of Canada, the UK, Ireland, the Scandinavian countries, New Zealand, Australia, India, South Af-
rica, and Poland, it has been made available free to all residents by national provision (typically a
government or department of health pays for the license). There are also arrangements for free access
in much of Latin America and in low-income countries, typically via HINARI (http://www.who.int/hinari/en/). All countries have free access to two-page abstracts of all Cochrane Reviews and to short plain-language summaries of selected articles. Every Cochrane Review starts with a plain-language summary (for example, see Table 5).

Table 2
Tools to improve the quality of meta-analyses.
The Cochrane Collaboration has developed a product called Cochrane Clinical Answers (CCAs).
CCAs are intended to be short answers to a clinical question, using evidence from Cochrane Reviews.
They aim to be a readable, digestible entry point to sit between e-textbooks or decision support
applications and full-text Cochrane Reviews. The target audience for CCAs is health-care practitioners
and professionals, and other informed users of health care (e.g., policy makers) [26] (for example, see
Table 6).
Table 3
Checklist for the reader of a systematic review with meta-analysis [20].
Table 4
Core outcomes for reporting in trials and reviews in chronic pain [24].

Pain: at least 50% pain reduction; at least 30% pain reduction; patient global impression (very much improved)
Function: general
Quality of life: quality-of-life measure
Adverse events: withdrawal due to adverse event; serious adverse event; deaths
Unpublished studies, hidden data – all trials should be accessible for independent researchers
The biggest potential source of type I error (increase of false-positive results) in meta-analysis is probably publication bias [13]. There is an ongoing tendency of study directors and of pharmaceutical sponsors of studies not to publish studies with negative results and/or to bundle negative findings with positive studies to neutralize the results. An SR examined reporting practices for trials of gabapentin funded by Pfizer and Parke-Davis for off-label indications (prophylaxis against migraine and treatment of bipolar disorders, neuropathic pain, and nociceptive pain), comparing internal company documents with published reports. The authors identified 20 clinical trials for which internal documents were available from Pfizer and Parke-Davis; of these trials, 12 were reported in publications. Trials presenting findings that were not significant (p ≥ 0.05) for the protocol-defined primary outcome in the internal documents either were not reported in full or were reported with a changed primary outcome [27].
IPD must be provided by pharmaceutical companies to drug agencies such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) when applying for approval of a drug. Until recently, these data were not available to researchers. To investigate the effect of including unpublished trial outcome data obtained from the FDA in the results of meta-analyses of drug trials, 42 meta-analyses (41 efficacy outcomes and one harm outcome) for nine drugs across six drug classes were reanalyzed. Overall, the addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy. The summary estimate of the single harm outcome showed more harm from the drug after inclusion of unpublished FDA trial data [28].
Another constant sorrow of authors of meta-analyses is incomplete reporting of outcomes (outcome reporting bias, ORB). An SR assessed the impact of ORB in 21 Cochrane Reviews on rheumatoid arthritis by multivariate meta-analysis. The reviews were assessed for ORB in relation to eight outcomes for rheumatoid arthritis. The impact of ORB was assessed by comparing estimates from univariate meta-analysis and multivariate models. All reviews contained missing data on at least one of the eight outcomes. ORB was highly suspected in 247 (22%) of the 1118 evaluable outcomes from 155 assessable trials. Multivariate and univariate results sometimes differed importantly: the maximum change in the treatment effect estimate between the multivariate and univariate meta-analysis approaches was 176% for one of the outcomes considered [29].
Some tests are available to screen for publication bias, to calculate missing values [14], and to adjust for ORB [29]. The best way to overcome these problems is to register any clinical trial in a database before starting the study (e.g., in clinicaltrials.gov) and to give independent researchers access to all data of a clinical trial. The AllTrials campaign, launched in January 2013, calls for all clinical trials to be registered and their results reported. The initial group of six organizations, including the British Medical Journal, the Cochrane Collaboration, and the Oxford Centre for Evidence-Based Medicine, has since been joined by 79,000 people and nearly 500 groups worldwide, including medical associations, research funders, medicines regulators, pharmaceutical companies, consumer associations, publishers, and >200 patient groups. Members of the campaign want all clinical trials, past and present, to be registered, full methods and summary results to be publicly available, and Clinical Study Reports (large
Table 5
Example of a plain-language summary of a Cochrane Review [25].
Plain-language summary
Opioids for the treatment of chronic low-back pain
Review question
We reviewed the evidence about the effect of opioids on pain and function among people with chronic low-back pain (CLBP).
Background
Opioids are pain relievers that act on the central nervous system. People with low-back pain (LBP) use these drugs to relieve
pain. We examined whether the use of opioids for at least 4 weeks was better or worse than other treatments of CLBP.
Study characteristics
We searched for trials, both published and unpublished, up to October 2012. We included 15 trials with 5540 participants that compared opioids against a placebo (fake medication) or other drugs that have been used for LBP. Most people included in the trials were aged 40–50 years, and all reported at least moderate pain across the low-back area. The trials included a slightly higher proportion of women. Most of the trials followed up the patients for 3 months, and these trials were supported by the pharmaceutical industry.
Key results
In general, people who received opioids reported more pain relief and had less difficulty performing their daily activities in the short term than those who received a placebo. However, there are few data about the benefits of opioids based on objective measures of physical functioning. We have no information from RCTs supporting the efficacy and safety of opioids used for >4 months. Furthermore, the current literature does not support that opioids are more effective than other groups of analgesics for LBP, such as anti-inflammatories or antidepressants. This review partially supports the effectiveness of several opioids for CLBP relief and function in the short term. However, the effectiveness of prescribing these medications for long-term use is unknown, and prescribing should take into consideration the potential for serious adverse effects; complications; and increased risk of misuse, abuse, addiction, overdose, and death.
As expected, side effects are more common with opioids but are not life threatening with short-term use. Insufficient data prevented making conclusions about the side-effect profile of opioids versus other types of analgesics (e.g., antidepressants or anti-inflammatories).
Quality of the evidence
The quality of evidence in this review ranged between very low and moderate. The review results should be interpreted
with caution and may not be appropriate in all clinical settings. High-quality randomized trials are needed to address the
long-term (months to years) risks and benets of opioid use in CLBP and their relative effectiveness compared with other
treatments, and to better understand who may benet most from this type of intervention.
documents needed for marketing authorization processes) to be available where they have been
produced [30].
On 1 January 2015, a new EMA policy for the publication of clinical data was introduced. Under this
policy, the Agency proactively publishes the clinical reports submitted as part of marketing authori-
zation applications for human medicines [31].
A very important issue in chronic pain trials is that of outcome [32]. There has been much discussion
over the years about minimally important or clinically important differences in pain trials [33]. This is
interesting from an academic perspective but not clinically relevant for the outcome desired by the pa-
tient. Studies have shown that patients with chronic pain receiving treatment seek/expect the following:
Table 6
Example of Cochrane Clinical Answers [26].
Question: In people with fibromyalgia, what are the effects of cognitive behavioral therapies?
Clinical Answer: Low-quality evidence suggested that traditional CBT improved pain, negative mood, and fatigue and reduced disability, compared with a range of other interventions, both at the end of a median 10 weeks of treatment and after 6 months of follow-up in patients with fibromyalgia (diagnosed by 1990 ACR criteria). The population was predominantly women with long-standing (>5 years) disease. Quality of life was improved at the end of treatment. There were insufficient data to assess sleep problems or longer-term effects on quality of life. Withdrawals for any reason during the treatment phase were similar in both groups. Subgroup analyses assessing different types of CBT (e.g., telephone interventions) separately tended to be too small to detect clinically meaningful differences even if present, making it difficult to draw conclusions.
Historically, chronic pain trials have not reported these outcomes; instead, they tended to report average pain scores or average changes in pain scores. This is problematic for several reasons. One is that pain responses in chronic pain trials are not normally distributed but have a bimodal distribution, with the majority of patients having either very good pain relief or very little. Consequently, the average pain score or change represents a result that few patients experience. Another is that small average pain differences between active treatment and placebo hide the fact that a substantial minority achieve extremely good levels of pain relief [32]. Thus, any trial of any design that reports only average results will be of little value, and its results will be almost impossible to interpret except in the most general terms. Trials in chronic pain should therefore report responder rates, such as the number of patients with a ≥30% and a ≥50% pain reduction [32].
Example:
Moore and coworkers calculated the number of patients with improvements in pain intensity from baseline of ≥15%, ≥30%, ≥50%, and ≥70% at 2, 4, 8, and 12 weeks of treatment using data from two identical 12-week trials comparing etoricoxib 60 mg (N = 210), 90 mg (N = 212), and placebo (N = 217). After 12 weeks, 65% of patients taking etoricoxib had ≥15% improvement, 60% had ≥30% improvement, 45% had ≥50% improvement, and 30% had ≥70% improvement, with placebo rates of approximately 55%, 45%, 30%, and 15%, respectively [34].
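Responder rates such as these translate directly into numbers needed to treat (NNT): the reciprocal of the difference in responder rates between active treatment and placebo. A small sketch using the approximate 12-week rates quoted above:

```python
def nnt(active_rate, placebo_rate):
    """Number needed to treat: reciprocal of the absolute risk
    difference between active and placebo responder rates."""
    return 1.0 / (active_rate - placebo_rate)

# Approximate 12-week responder rates from the etoricoxib trials above
print(round(nnt(0.60, 0.45), 1))  # >=30% improvement: 60% vs 45% -> NNT 6.7
print(round(nnt(0.45, 0.30), 1))  # >=50% improvement: 45% vs 30% -> NNT 6.7
```

An NNT of about 7 means that roughly one additional patient responds for every seven treated with the active drug rather than placebo; the high placebo rates discussed above are what keep this difference modest despite substantial absolute response rates.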
Responder criteria for other outcomes of interest, for example, depression (≥5-point decrease in the Beck Depression Inventory) [33] and health-related quality of life (e.g., ≥14% decrease in the Fibromyalgia Impact Questionnaire total score) [35], are available. In addition, combined responder criteria, such as a ≥50% decrease in pain and a self-rating of "much" or "very much improved" on the Patient Global Impression of Change, have been used in FM trials [36].
"The prospect is that the imperatives of the individual will vanquish a tyranny of averages" [37].
In crossover trials, patients receive a sequence of alternative therapies; patients act as their own controls, which eliminates confounding factors. The sequence can include placebo or a sequence of active drugs, including combinations. Crossover trials are used to answer questions about whether drugs work or whether one drug may be better than another. A major problem with crossover trials is that they tend to be short (usually 2–4 weeks), which limits their applicability. There are issues about the time needed (if any) for washout between treatment periods. Withdrawal rates tend to be high with multiple crossovers, which can be tedious for patients, and that limits the applicability or use of paired data between treatments for the same patients. Poor reporting of study results limits their use in meta-analysis, possibly with some biases [32]. However, even small crossover trials can have rewarding results. They can show that patients with chronic pain who respond poorly to one drug may do well with another closely related drug, as with amitriptyline and nortriptyline in postherpetic neuralgia: five of 31 patients had mild or moderate pain with amitriptyline but moderate to severe pain with nortriptyline, while four patients had good pain relief with nortriptyline but not with amitriptyline [38].
Enriched enrollment randomized withdrawal (EERW) designs are helpful in determining whether an intervention works [39], especially if proof of concept of an intervention in a particular disease entity is questioned. In an EERW study design, the first phase of the study is carried out without blinding of patients and study physicians. Into the second, double-blind phase, only those patients are accepted who show a response – a response being defined by predefined criteria, for example, ≥50% pain reduction – and who do not decline further use of the preparation due to adverse effects. One proportion of the responders continues to receive the study drug; another proportion receives placebo. The RCT is thus conducted exclusively in responders. In light of this selection, the applicability of the results of such studies to the total population of patients with the disease in question has to be viewed critically. Nonetheless, the EERW approach is considered to represent an appropriate design for studies in chronic pain. For the assessment of the efficacy of an analgesic substance in the context of longer-term use, this selection process in fact mirrors the situation in clinical practice (ecological validity) [39,40].
One example is the FREEDOM trial of pregabalin in FM. The trial included a 6-week open-label (OL) pregabalin treatment period followed by 26 weeks of double-blind treatment with placebo or pregabalin. To be randomized, patients had to have a ≥50% decrease in pain and a self-rating of "much" or "very much improved" on the Patient Global Impression of Change. The outcome was loss of therapeutic response, defined as a <30% reduction in pain from baseline or worsening of FM. The FREEDOM study rejected about 50% of patients before randomization (566 patients randomized), but 68% of those continuing on pregabalin had effective analgesia over 6 months compared with 39% on placebo [36].
Differences in design necessitate that studies with a parallel or crossover design and studies with an EERW design be analyzed separately in meta-analyses [6,9,41].
Nearly all studies report only the data at baseline and at the end of treatment. Again, all data of a study – here, the outcomes at all assessment points – are of interest. Analyses of all assessment points demonstrated that if a drug offers pain relief, it occurs quickly: within about 8 days with pregabalin in postzoster neuralgia (PZN) and about 2–4 weeks with nonsteroidal anti-inflammatory drugs (NSAIDs) in musculoskeletal pain and pregabalin in FM [32,37].
In chronic pain, studies with <100 participants show consistently larger treatment effects than those with >100 patients [42]. Small sample sizes in single studies and SRs generate considerable uncertainty about the magnitude of the treatment effect [43].
Large sample sizes are also necessary to obtain a robust estimate of the risk of a serious adverse event (AE). An elementary principle in the design of studies is that the number of participants needed to detect an increased rate of an adverse event depends on how confident one wants to be of identifying a risk of a given magnitude (i.e., the desired statistical power). For example, with 1000 participants, there is a >80% chance of detecting a true doubling in the rate of an AE from 5% to 10%, but there is far less confidence (only a 17% chance) of detecting a doubling from 1% to 2%. At least 50,000 participants are needed to achieve 80% power to detect a doubling of a 0.1% event rate. This amount of data can only be gathered by post-marketing spontaneous case reports, computerized claims or medical record databases, and data collected in prospective post-marketing studies [44].
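The power figures above can be approximated with a standard normal-approximation calculation for comparing two proportions. This is a rough sketch (assuming equal allocation, two-sided alpha = 0.05); the cited figures were presumably derived with their own method, so the numbers agree only approximately.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_group, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample test for a
    difference between event rates p1 and p2 (normal approximation)."""
    pbar = (p1 + p2) / 2.0
    se0 = math.sqrt(2.0 * pbar * (1.0 - pbar) / n_per_group)  # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = (abs(p2 - p1) - z_alpha * se0) / se1
    return norm_cdf(z)

# 1000 participants = 500 per group; 50,000 = 25,000 per group
print(power_two_proportions(0.05, 0.10, 500))      # ~0.85: doubling 5% -> 10%
print(power_two_proportions(0.01, 0.02, 500))      # ~0.25: doubling 1% -> 2%
print(power_two_proportions(0.001, 0.002, 25000))  # ~0.82: doubling 0.1% -> 0.2%
```

The pattern matches the text: at a fixed total sample size, power collapses as the baseline event rate falls, and only tens of thousands of participants restore ~80% power for a rare (0.1%) event.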
The magnitude of the effect size of a given treatment depends on the type of control group. In studies of drugs or physical therapies (e.g., acupuncture and magnetic brain stimulation), the control group receives a placebo drug or sham intervention. Other types of control intervention use a standard therapy (e.g., amitriptyline in FM) or treatment as usual, the details of which are not specified in most studies. Psychological studies use waiting-list controls or no-therapy controls as well; in contrast to a placebo treatment, such control patients do not receive the same time and amount of attention. In an SR on CBTs in FMS, with a total of 1073 patients in treatment groups and 958 patients in control groups included in the analysis, the effect sizes of CBTs on pain and on disability at long-term follow-up compared to treatment as usual or waiting-list control were small and statistically significant. However, the effect sizes of CBTs on pain, negative mood, and disability at the end of treatment and at long-term follow-up compared to active controls were not statistically significant [45].
An SR of acupuncture trials involving patients with headache and migraine; osteoarthritis; and back, neck, and shoulder pain included 29 trials, 20 involving sham controls (n = 5230) and 18 non-sham controls (n = 14,597). Acupuncture was significantly superior to all categories of control group. For trials that used penetrating needles for sham control, acupuncture had smaller effect sizes than for trials with non-penetrating sham or sham control without needles. The difference in effect size was −0.45 (95% CI −0.78, −0.12; p = 0.007) or −0.19 (95% CI −0.39, 0.01; p = 0.058) after exclusion of outlying studies
showing very large effects of acupuncture. In trials with non-sham controls, larger effect sizes were associated with acupuncture versus nonspecified routine care than versus protocol-guided care [46].
Imputation methods for patient withdrawal can bias efcacy outcomes in chronic pain trials using
responder analyses. Patient withdrawal from randomized trials is common in chronic pain trials, and it
is more common for the experimental drug than for placebo. Different analysis strategies can be used
to deal with missing data. Simply excluding withdrawals and analyzing only for completers can favor
experimental treatment. Alternatively, missing data can be imputed by generating values for the missing
data points: either the last pain observation made is extrapolated and carried forward to the end of the
trial (last observation carried forward, LOCF), or the baseline pain intensity measurement, that is, zero
pain relief, is used as the end-of-trial outcome (baseline observation carried forward, BOCF) [32]. The two imputation
methods were compared for etoricoxib and duloxetine in chronic low-back pain, for pregabalin in
painful diabetic neuropathy and postherpetic neuralgia, and for milnacipran and pregabalin in FMS.
Numbers needed to treat (NNTs) calculated by defining withdrawal as nonresponse were numerically
greater (worse) than with LOCF imputation in 16 of 19 comparisons, and significantly greater
in two of 19. Overestimation with LOCF imputation is driven by greater adverse-event discontinuations
with active treatment than with placebo [32,47].
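The effect of the two imputation strategies on a responder analysis can be sketched with a toy example. The data, the 0-10 pain scale, and the helper functions below are illustrative assumptions, not part of any cited trial; the point is only that the same early withdrawal is counted as a response under LOCF but as a failure under BOCF.

```python
# Toy illustration (hypothetical data) of LOCF versus BOCF imputation in a
# responder analysis. Responder = at least 50% pain reduction from baseline.

def is_responder(baseline, final):
    """True if final pain is reduced by at least 50% from baseline."""
    return final <= baseline / 2

def impute_final(baseline, observations, method):
    """End-of-trial pain value for a patient who may have withdrawn early.

    observations: pain scores actually recorded before withdrawal.
    method: 'LOCF' carries the last observed value forward;
            'BOCF' falls back to the baseline value (zero pain relief).
    """
    if method == "LOCF" and observations:
        return observations[-1]
    if method == "BOCF":
        return baseline
    return observations[-1] if observations else baseline

# A patient with baseline pain 8 who improved to 3, then withdrew
# (e.g., because of an adverse event) before the planned end point:
baseline = 8
observed = [7, 5, 3]   # scores recorded before withdrawal; later visits missing

locf = impute_final(baseline, observed, "LOCF")   # 3
bocf = impute_final(baseline, observed, "BOCF")   # 8

print(is_responder(baseline, locf))   # True: LOCF credits the drug with a response
print(is_responder(baseline, bocf))   # False: BOCF treats withdrawal as failure
```

When adverse-event withdrawals are more common on the active drug, LOCF systematically converts such withdrawals into apparent responses, which is the overestimation mechanism described above.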
Pitfalls in meta-analyses on AEs reported from clinical trials arise from the fact that AEs are often not
the primary end points in clinical trials. The modes of the assessment of AEs (spontaneous reports of
the study participants, open or leading questions by the investigator, and prompted or unprompted
standardized symptom checklist) are rarely reported. In addition, inconsistent event definitions,
varying levels of effort in reporting unexpected AEs, and inappropriate use of statistical testing have
been found [48]. To create valid and comparable side-effect profiles of drugs and thus ensure the safe
use of medicines, a common European standard for the structured assessment of adverse reactions (ARs)
in clinical research, which does not yet exist, is required [49]. An SR demonstrated that most studies on
psychological therapies in chronic pain did not assess AEs at all [50]. There is increasing recognition
that psychological treatments have the potential for negative outcomes in some patients [51]. Standards
specifying which potential negative outcomes should be routinely monitored in nondrug trials, and by
which methods, still have to be defined by the research community.
There are multiple risks of bias in the studies included in an SR. Sophisticated methods, such as the
Cochrane risk of bias tool [14], have been designed to assess the risk of bias of studies included in an SR
(see Table 7).
The conclusions of a meta-analysis depend strongly on the quality of the studies identified to es-
timate the pooled effect. A meta-analysis of low-quality studies cannot produce high-quality and valid
findings ("garbage in, garbage out").
An approach to grading the quality of evidence of an SR with meta-analysis and the strength of
recommendations has been developed by the Grading of Recommendations Assessment, Development
and Evaluation (GRADE) Working Group. The quality of evidence can be classified as follows [52]:
High (⊕⊕⊕⊕): We are very confident that the true effect lies close to that of the estimate of the
effect.
Moderate (⊕⊕⊕◯): We are moderately confident in the effect estimate; the true effect is likely to be
close to the estimate of the effect, but there is a possibility that it is substantially different.
Low (⊕⊕◯◯): Our confidence in the effect estimate is limited; the true effect may be substantially
different from the estimate of the effect.
Table 7
Types of bias in trials in chronic pain [14,32].
Selection bias (sequence generation): Randomization needs to be truly random, using random number
tables or computer-generated randomization. Nonrandomized, or improperly randomized, trials
overestimate the effects of treatment.
Selection bias (allocation concealment): Ensures that those conducting the study do not know to
which group participants are assigned; not the same as blinding, as it happens before treatment
commences. Where allocation is not effectively concealed, treatment effects tend to be higher.
Performance bias (blinding of participants and personnel): Measures taken to ensure that
participants and personnel do not know which intervention a participant received. Unblinded, or
open, trials overestimate the effects of treatment.
Detection bias (blinding of outcome assessors): Outcome assessors involved in the treatment of the
patients might be biased in analyzing the data.
Attrition bias (incomplete outcome data): Withdrawals from the study should be described with
reasons, together with any assumptions made in the analyses. Studies with high withdrawal rates
tend to overestimate effects, and studies with incomplete reporting tend to report only those
outcomes with significant benefits.
Selective reporting bias: The study protocol should be available, and all of the study's prespecified
primary and secondary end points that are of interest to the review should have been reported in the
prespecified way. Otherwise, study investigators may modify primary end points post hoc to achieve
statistically significant results.
Duration bias: In chronic pain, studies should ideally be at least 12 weeks long. Shorter durations,
especially under 4 weeks, overestimate effectiveness compared with longer trials.
Imputation bias: Concerns how data are handled when patients have withdrawn from treatment; this
is typically dealt with by carrying results from the last observation forward to the end of the trial
(LOCF). As many as 30-60% of patients withdraw in chronic pain trials, and LOCF produces major
overestimation of the treatment effect when adverse-event withdrawals are high.
Sample size bias: Only a minority of patients benefit from chronic therapy, so larger trials are needed
to overcome the random play of chance; this usually means having 100-200 patients per group as a
minimum. Historically, small trials in chronic pain have been shown to overestimate treatment
effects.
Very low (⊕◯◯◯): We have very little confidence in the effect estimate; the true effect is likely to be
substantially different from the estimate of effect; any estimate of effect is very uncertain.
RCTs conducted for the approval of a drug for a given disease require strict inclusion and exclusion
criteria. Patients with major medical diseases (e.g., heart, kidney, and liver insufficiency) and major mental
disorders (e.g., major depression and substance abuse) were excluded by the majority of studies with
opioids in CNCP [5-11]. The increased mortality associated with opioid therapy in clinical practice might be
explained by the treatment of patients with major medical diseases who were excluded from clinical studies [7].
A p-value is the probability of obtaining the observed effect (or larger) under a null hypothesis, for
example, no differences in the effect of intervention between studies. It has been common practice to
interpret a p-value by examining whether it is smaller than particular threshold values. In particular,
p-values < 0.05 are often reported as statistically significant and interpreted as being small enough to
justify rejection of the null hypothesis. p-Values are commonly misinterpreted in two ways [14]. First, a
p-value > 0.05 may be misinterpreted as evidence that the intervention has no effect. There is an
important difference between this statement and the correct interpretation that there is not strong
evidence that the intervention has an effect: absence of evidence is not evidence of absence [53]. The
second misinterpretation is to assume that a result with a small p-value for the summary effect
estimate implies that an intervention has an important benefit. Such a misinterpretation is more likely
to occur in large industry-sponsored studies or in meta-analyses that accumulate data over dozens of
studies and thousands of participants. Therefore, the magnitude and the confidence interval of the
effect size, and not the magnitude of the significant p-value, are of interest.
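The second misinterpretation can be made concrete with a short calculation. The sketch below (illustrative numbers, standard library only) runs a plain two-sample z-test on the same clinically trivial difference of 1 mm on a 100-mm visual analog scale: with a few hundred patients per arm the p-value is unremarkable, but with tens of thousands of patients the same 1-mm effect becomes "statistically significant" without becoming any more important.

```python
# Why a tiny p-value need not mean an important benefit: the same trivial
# 1-mm difference on a 100-mm VAS becomes "significant" once the sample is
# large enough. All numbers here are illustrative assumptions.
import math

def two_sided_p(diff, sd, n_per_arm):
    """Two-sample z-test p-value for a mean difference (equal n, equal sd)."""
    se = sd * math.sqrt(2.0 / n_per_arm)       # standard error of the difference
    z = diff / se
    # Two-sided tail probability of the standard normal, via erf:
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

diff_mm, sd_mm = 1.0, 20.0   # clinically trivial difference, plausible sd

print(two_sided_p(diff_mm, sd_mm, 500))      # ~0.43: "not significant"
print(two_sided_p(diff_mm, sd_mm, 50_000))   # <0.0001: "significant", same 1 mm
```

This is exactly why the effect size and its confidence interval, not the p-value alone, should guide interpretation in large pooled analyses.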
The terms "number needed to treat" and "number needed to harm" can be misunderstood
The number needed to treat (NNT) is defined as the expected number of people who need to receive
the experimental rather than the comparator intervention for one additional person to either incur or
avoid an event in a given time frame. Thus, for example, an NNT of 10 can be interpreted as follows: one
additional person is expected to incur (or avoid) an event for every 10 participants receiving the
experimental intervention rather than the control over a given time frame. It is important to be clear
that, as the NNT is derived from the risk difference, it is still a comparative measure of effect
(experimental versus a certain control) and not a general property of a single intervention. An NNT of 10
does not mean that (only) 10% of the treatment group will meet the predefined outcome (e.g., 50% pain
reduction) [32]. The preferred alternative is to use phrases such as "number needed to treat for an
additional beneficial outcome" (NNTB) and "number needed to treat for an additional harmful outcome"
(NNTH) to indicate the direction of effect [14]. Because the placebo response rates in drug trials for
chronic pain are high [54], the potential effect of a drug in clinical practice might be underestimated.
The effect of a drug in clinical practice is due to the specific effect of the drug and to nonspecific
(placebo) effects such as attention by the physician and positive treatment expectations of the
physician and patient [32,54]. To estimate the potential of a treatment in clinical care (without a control
group), the frequencies of responders in the treatment and in the control group and the corresponding
NNTB and NNTH should be presented (see Table 8).
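The derivation of NNTB and NNTH from the risk difference can be sketched with the responder percentages reported in Table 8. Note that the published NNTs come from pooled risk differences across studies, so this naive calculation on the overall percentages is only an approximation, though here it reproduces the tabled values.

```python
# Sketch: NNTB/NNTH as the reciprocal of the risk difference, using the
# responder percentages from Table 8 (opioids vs placebo in osteoarthritis).
# A formal meta-analysis pools risk differences across studies, so results
# can differ slightly from this naive calculation on overall percentages.

def risk_difference(p_experimental, p_control):
    """Absolute risk difference between experimental and control arms."""
    return p_experimental - p_control

def nnt(p_experimental, p_control):
    """Number needed to treat for one additional (beneficial or harmful) event."""
    rd = risk_difference(p_experimental, p_control)
    if rd == 0:
        raise ValueError("risk difference is zero; NNT is undefined")
    return 1.0 / abs(rd)

# 'Much or very much improved': 50.0% with opioids versus 37.8% with placebo
print(round(nnt(0.500, 0.378)))   # 8 -> NNTB of 8, as in Table 8

# 'Dropout due to adverse events': 25.6% versus 7.0%
print(round(nnt(0.256, 0.070)))   # 5 -> NNTH of 5, as in Table 8
```

The same code also makes the warning in the text concrete: an NNTB of 8 does not mean that 12.5% of treated patients respond; here half of the opioid group responded, but so did over a third of the placebo group.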
The placebo and nocebo response rates in drug trials are high
The placebo response is defined as the reduction in a symptom as a result of factors related to a
patient's perception of the placebo intervention. The placebo response is determined by the placebo
effect (psychological factors such as expectation of benet, classical conditioning, verbal suggestions,
and behaviors manifested by health-care providers) as well as by the natural course of disease and by
the study design (e.g., regression to the mean and uncontrolled parallel interventions) [55].
Table 8
Dichotomous outcomes in randomized controlled trials with opioids in chronic osteoarthritis pain [9].
2 studies/2709 patients. At least 50% pain reduction: 25.1% (opioid a) versus 25.7% (placebo);
RD −0.01 (95% CI −0.07 to 0.06), p = 0.82, I² = 75%; NNTB not calculated.
3 studies/2251 patients. Much or very much improved: 50.0% versus 37.8%;
RD 0.13 (95% CI 0.05 to 0.21), p = 0.002, I² = 74%; NNTB 8 (95% CI 6-12).
14 studies/6457 patients. Dropout due to adverse events: 25.6% versus 7.0%;
RD 0.17 (95% CI 0.14 to 0.21), p < 0.00001, I² = 77%; NNTH 5 (95% CI 4-6).
11 studies/5520 patients. Serious adverse events: 2.4% versus 1.8%;
RD 0.00 (95% CI 0.00 to 0.01), p = 0.37, I² = 2%; NNTH not calculated.
RD = risk difference.
a Buprenorphine, codeine, fentanyl, hydromorphone, morphine, oxycodone, oxymorphone, tapentadol,
and tramadol.
An SR included 18 studies with 3546 FMS patients on placebo. The pooled estimate of a 50% pain
reduction with placebo was 18.6% (95% CI 17.4-19.9%), compared with 26.9% (95% CI 23.5-30.6%) with
the verum (active) drug. The placebo response thus accounted for 69% of the response rate in the verum group [57].
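The 69% figure is simply the pooled placebo responder rate expressed as a proportion of the verum responder rate, as the short calculation below shows (rates taken from the SR [57]; the variable names are ours).

```python
# Share of the active-drug (verum) response rate accounted for by placebo,
# using the pooled responder rates from the FMS systematic review [57].
placebo_rate = 18.6   # % of placebo patients with >= 50% pain reduction
verum_rate = 26.9     # % of verum patients with >= 50% pain reduction

share_attributable_to_placebo = placebo_rate / verum_rate
print(round(100 * share_attributable_to_placebo))   # 69
```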
The reason for the high response rates with placebo is uncertain. They could be partly regression to
the mean with pain reduction after a flare, partly a consequence of using newspaper and other ad-
vertisements to recruit participants rather than clinic patients, and partly the acknowledged benefits of
being in a trial and receiving additional attention. The consequence of high placebo response rates and
modest benefit with active treatment is that classic trials will lack the sensitivity to determine whether
or not a drug works. Attempts to minimize placebo responses do not yet seem to have had any great success [32].
The nocebo response is defined as the deterioration of symptoms as a result of factors related to a
patient's perception of the placebo intervention. The nocebo response is determined by the nocebo
effect (psychological factors such as expectation of harm, classical conditioning, verbal suggestions,
and behaviors manifested by health-care providers) as well as by the natural course of disease (e.g.,
spontaneous symptom worsening), other concurrent diseases, and the study design (e.g., uncontrolled
parallel interventions such as AEs by rescue medication) [56].
Nocebo response rates are high in drug trials for chronic pain as well. An SR included 58 FMS (and 62
painful diabetic polyneuropathy (DPN)) trials with a total of 5065 (5095) patients in placebo groups.
The pooled estimate of the dropout rate due to AEs was 9.6% (95% CI 8.6-10.7%) in the placebo
groups and 16.3% (95% CI 14.1-31.2%) in the true-drug groups of FMS trials, and it was 5.8% (95% CI
5.1-6.6%) in the placebo groups and 13.2% (95% CI 10.7-16.2%) in the true-drug groups of DPN trials.
Nocebo effects accounted for 72.0% (44.9%) of the dropouts in the true-drug groups in FMS (DPN) [58].
Is placebo powerful?
There is a long-standing controversy over whether placebo interventions can produce clinically relevant
symptom relief. The determination of the placebo effect requires comparison with a no-treatment control
group. A Cochrane Review compared placebo interventions with no treatment in three-armed trials (no
treatment, placebo, and treatment). Sixty trials with 4154 patients evaluated the effect on pain based on
continuous outcomes, for example, pain intensity measured on a 100-mm visual analog scale. The effect
of placebo interventions on pain on a 100-mm scale was 16 mm based on the four German acupuncture
trials, and 3 mm based on the other trials. In spite of placebo effects reaching statistical significance in
the updated review, the authors provided skeptical conclusions regarding the strength of placebo effects [59].
A recent SR with meta-analysis used studies similar to those in the Cochrane Review. The authors
added an analysis of treatment and placebo differences within the same trials. They found that placebos
and treatments often have similar effect sizes. The standardized mean difference (SMD) of the placebo
effect was −0.59 (95% CI −0.90 to −0.27), and the SMD of the treatment effect was −1.47 (95% CI −2.34
to −0.51). The difference of these differences was −0.36 (95% CI −1.36 to 0.64; p = 0.48). The authors
concluded that placebos with comparatively powerful effects can benefit patients either alone or as part
of a therapeutic regimen, and could be considered in chronic pain where placebo effects are similar in
magnitude to treatment effects and adverse effects of true drugs are high (e.g., NSAIDs and opioids) [60].
The power of placebo interventions was demonstrated in an SR with a meta-analysis of 198 trials
with 193 placebo groups (16,364 patients) and 14 untreated control groups (1167 patients) in osteo-
arthritis. These included a range of therapies (non-pharmacological, pharmacological, and surgical
treatments). Placebo was effective in relieving pain (SMD 0.51, 95% CI 0.46-0.55, for the placebo group
and 0.03, 95% CI −0.13 to 0.18, for untreated controls). Placebo was also effective in improving function
and stiffness. The pain-relieving effect increased when the active treatment effect (β = 0.38, p < 0.001),
baseline pain (β = 0.006, p = 0.014), and sample size (β = 0.001, p = 0.004) increased, and when
placebo was given through injections/needles (β = 0.144, p = 0.020) [61].
The Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) has
developed consensus reviews and recommendations for improving the design, execution, and inter-
pretation of clinical trials of treatments for pain [62].
a. When reading an SR, clinicians should first consider the quality of evidence, which should be reported
in the abstract. It might be a waste of time to read an SR with a meta-analysis in detail if there is low-
quality or even very low-quality evidence. The implications for practice of Cochrane Reviews offer a
concise summary. These implications have been derived from predefined standards and have undergone
an extensive review process.
b. In contrast to clinical trialists, clinicians should strive to maximize the placebo effect. The essential
components of a healing therapeutic context (placebo) are as follows [63]:
- Relationship to health-care professional
- Support by health-care professional
- Empathy/friendliness of health-care professional
- Expectation of patients and health-care professional that therapy will help
The deliberate use of psychological strategies underlying the placebo response such as promoting
positive treatment expectations and establishing a positive therapeutic relationship, an authentic and
empathic communication style, and regular health-care contact can likely bolster the positive effects of
any treatment [64].
Both clinical trialists and clinicians in practice should try to reduce nocebo phenomena. Providing
adequate information regarding disease, diagnoses, treatments, and adverse effects; openly discussing
previous treatment experiences; exploring potential unrealistic fears; and ensuring regular patient
contact may attenuate the nocebo response [64,65].
Practice points
Meta-analysis can be a powerful tool to combine results from studies with similar design and
patient populations that are too small or underpowered individually to demonstrate a sta-
tistically significant association.
The major threats to meta-analysis are publication bias (nonpublication of studies with
negative results) and low quality of evidence due to methodological limitations of included
studies, such as trials with small sample sizes and of short duration, the use of the last-
observation-carried-forward imputation method, and the reporting of average pain scores
instead of responder analyses.
Checklists for the reader as well as tools to judge the quality of an SR with meta-analysis are
available.
Winfried Häuser is a member of the Musculoskeletal Group and of the Pain, Palliative and Sup-
portive Care Review Group of the Cochrane Collaboration. He is an associate editor of Cochrane Clinical
Answers. He has conducted systematic reviews and meta-analyses on behalf of the German Inter-
disciplinary Association for Pain Therapy for the German guideline on fibromyalgia syndrome and of
the German Pain Society for the German guideline on long-term opioid therapy in chronic noncancer
pain. He has received honoraria for educational lectures from Abbott, Janssen-Cilag, MSD Sharp &
Dohme, and Pfizer between 2011 and 2014. Thomas R. Tölle was involved in systematic reviews and
meta-analyses and the development of guidelines for FMS and treatment with opioids in noncancer
pain for the German chapter of the International Association for the Study of Pain. He has received
honoraria for advisory boards and/or educational lectures from Abbott, Allergan, Astellas, Esteve,
Janssen-Cilag, Eli Lilly, Boehringer, Grünenthal, Mundipharma, and Pfizer between 2011 and 2014.
Acknowledgment
The authors thank Dr. Sylvia Walker (Melbourne, Australia) for comments on and edits of the
manuscript.
References
[1] Walker E, Hernandez AV, Kattan MW. Meta-analysis: its strengths and limitations. Cleve Clin J Med 2008;75:431-9.
[2] Agency for Healthcare Research and Quality. Effective health care program. Glossary of terms. http://effectivehealthcare.
ahrq.gov/index.cfm/glossary-of-terms/?pageaction=showterm&termid=70 [accessed 26.01.15].
[3] Furlan AD, Reardon R, Weppler C, National Opioid Use Guideline Group. Opioids for chronic noncancer pain: a new
Canadian practice guideline. CMAJ 2010;182:923-30.
[4] Furlan AD, Sandoval JA, Mailis-Gagnon A, Tunks E. Opioids for chronic noncancer pain: a meta-analysis of effectiveness
and side effects. CMAJ 2006;174:1589-99.
[5] Häuser W, Bock F, Engeser P, et al. Recommendations of the updated LONTS guidelines: long-term opioid therapy for
chronic noncancer pain. Schmerz 2015;29:109-30.
[6] Petzke F, Welsch P, Klose P, et al. Opioids in chronic low back pain: a systematic review and meta-analysis of efficacy,
tolerability and safety in randomized placebo-controlled studies of at least 4 weeks duration. Schmerz 2015;29:60-72.
[7] Häuser W, Bernardy K, Maier C. Long-term opioid therapy in chronic noncancer pain: a systematic review and meta-
analysis of efficacy, tolerability and safety in open-label extension trials with study duration of at least 26 weeks.
Schmerz 2015;29:96-108.
[8] Sommer C, Welsch P, Klose P, et al. Opioids in chronic neuropathic pain: a systematic review and meta-analysis of efficacy,
tolerability and safety in randomized placebo-controlled studies of at least 4 weeks duration. Schmerz 2015;29:35-46.
[9] Schaefert R, Welsch P, Klose P, et al. Opioids in chronic osteoarthritis pain: a systematic review and meta-analysis of
efficacy, tolerability and safety in randomized placebo-controlled studies of at least 4 weeks duration. Schmerz 2015;29:
47-59.
[10] Welsch P, Sommer C, Schiltenwolf M, Häuser W. Opioids in chronic noncancer pain: are opioids superior to nonopioid
analgesics? A systematic review and meta-analysis of efficacy, tolerability and safety in randomized head-to-head
comparisons of opioids versus nonopioid analgesics of at least four weeks' duration. Schmerz 2015;29:85-95.
[11] Lauche R, Klose P, Radbruch L, et al. Opioids in chronic noncancer pain: are opioids different? A systematic review and
meta-analysis of efficacy, tolerability and safety in randomized head-to-head comparisons of opioids of at least four
weeks' duration. Schmerz 2015;29:73-84.
[12] US Food and Drug Administration. Enhancing regulatory science: methodologies for meta-analysis. www.fda.gov/
ForIndustry/UserFees/PrescriptionDrugUserFee/ucm360080.htm [accessed 15.02.15].
[13] Egger M, Smith GD. Meta-analysis. Potentials and promise. BMJ 1997;315:1371-4.
*[14] Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March
2011]. The Cochrane Collaboration; 2011. Available from, www.cochrane-handbook.org.
*[15] Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ
2010;340:c221.
[16] Li T, Puhan MA, Vedula SS, et al., Ad Hoc Network Meta-analysis Methods Meeting Working Group. Network meta-
analysis-highly attractive but more methodological research is needed. BMC Med 2011;9:79.
[17] Vickers AJ, Cronin AM, Maschino AC, et al., Acupuncture Trialists' Collaboration. Acupuncture for chronic pain: individual
patient data meta-analysis. Arch Intern Med 2012;172:1444-53.
[18] Häuser W, Kosseva M, Üçeyler N, et al. Emotional, physical, and sexual abuse in fibromyalgia syndrome: a systematic
review with meta-analysis. Arthritis Care Res (Hoboken) 2011;63:808-20.
[19] Nüesch E, Häuser W, Bernardy K, et al. Comparative efficacy of pharmacological and non-pharmacological interventions
in fibromyalgia syndrome: network meta-analysis. Ann Rheum Dis 2013;72:955-62.
*[20] Russo MW. How to review a meta-analysis. Gastroenterol Hepatol (N Y) 2007;3:637-42.
[21] Moher D, Shamseer L, Clarke M, et al., PRISMA-P Group. Preferred reporting items for systematic review and meta-
analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1.
*[22] Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-
analyses: the PRISMA statement. Ann Intern Med 2009;151:264-9.
*[23] Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological
quality of systematic reviews. BMC Med Res Methodol 2007;7:10.
*[24] Andrew Moore R, Eccleston C, Derry S, et al., ACTINPAIN Writing Group of the IASP Special Interest Group on Systematic
Reviews in Pain Relief, Cochrane Pain, Palliative and Supportive Care Systematic Review Group Editors. Evidence in
chronic pain: establishing best practice in the reporting of systematic reviews. Pain 2010;150:386-9.
[25] Chaparro LE, Furlan AD, Deshpande A, et al. Opioids compared to placebo or other treatments for chronic low-back pain.
Cochrane Database Syst Rev 2013;8:CD004959.
[26] Cochrane Clinical Answers, Simone Appenzeller (on behalf of Cochrane Clinical Answers Editors). In people with
fibromyalgia, what are the effects of cognitive behavioral therapies? 2012. http://dx.doi.org/10.1002/cca.423.
[27] Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label
use. N Engl J Med 2009;361:1963-71.
[28] Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ 2012;
344:d7202.
[29] Frosi G, Riley RD, Williamson PR, Kirkham JJ. Multivariate meta-analysis helps examine the impact of outcome reporting
bias in Cochrane rheumatoid arthritis reviews. J Clin Epidemiol 2015;68:542-50.
[30] AllTrials. www.alltrials.net/find-out-more/about-alltrials/ [accessed 02.01.15].