
Common pitfalls in
conducting
clinical trials

Kuan-Fu Chen
VS talk - EM CGMH
January 8, 2011
Study design of
clinical research

The Evidence Pyramid: a hierarchy of evidence that arranges study designs by their susceptibility to bias, strongest at the top:

Meta-analysis
Randomized controlled double-blind studies
Randomized controlled studies
Cohort studies
Case-control studies
Case series/reports
Ideas, editorials, opinions
Animal research
In vitro (test tube) research

(Meta-analyses are typically summarized with a forest plot.)
Scenario

HRT trial
Hormone replacement therapy

Multiple observational epidemiological studies showed benefit

Three clinical trials showed the therapy was even harmful

(Diagram: attributes/confounders influence both treatment and outcome; randomization blocks this confounding path.)
Scenario

NASCIS trial
National Acute Spinal Cord Injury Studies
Methylprednisolone Rx in acute spinal cord injury

CAST trial
Cardiac Arrhythmia Suppression Trial
Surrogate outcome: treat PVCs or not?

CONVINCE trial
Controlled Onset Verapamil Investigation of Cardiovascular Endpoints
Verapamil vs. standard therapy
Turned out to be more harmful
Stopped prematurely for commercial reasons -> DSMB issue
Objectives-
Overall pitfalls

Design
Analysis
Presentation
Objectives-
Design

Randomization issue
Ethical issue
Blinding/masking
Phase of trials
Measurement issue
Sampling issue
Power issue
Objectives-
Analysis

Adherence issue

Multiple comparisons

Statistical error
Objectives-
Presentation

Inappropriate or poor reporting of results

Wrong or poor interpretation of results


Design-
Ethical

Failure to recognize the roles of researcher vs. physician (research vs. practice)

Off-label use and IND (Investigational New Drug) requirements

Risk/benefit equipoise

Employees and other vulnerable populations (e.g., the Hopkins asthma study; biobank participants)

Responsibility of investigators

Collaboration with pharmaceutical companies

Failure to understand the related regulations


Design-
Ethical (continued)

Failure to form a DSMB
  CONVINCE: verapamil trial stopped prematurely under strong commercial pressure
  CAST: non-inferiority framing; DSMB monitoring (0.5, one-tailed)

Early termination / extended recruitment

What if a placebo is not ethical?
  Active control
  Equivalence trial
  Non-inferiority trial (effectiveness issue)

Margins on the p1 - p2 scale (from the H0/Ha diagrams):
  Equivalence example: 0% vs. 20% margin (~67% reduction)
  CONVINCE trial: 0% vs. 15% margin (~50% reduction)
  CAST trial: 0% vs. 10% margin (~33% reduction)

Example: VAL/HCTZ (valsartan/hydrochlorothiazide) vs. amlodipine
Difference between
types of trials

RCT Type          Null Hypothesis        Alternative Hypothesis
Traditional       New = old              New ≠ old
Equivalence       |New - old| ≥ δ        |New - old| < δ
Non-inferiority   New ≤ old - δ          New > old - δ
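
As a rough illustration of the non-inferiority row, here is a minimal Python sketch that claims non-inferiority only when the one-sided lower confidence bound for p_new - p_old clears the margin -δ; the counts, margin, and function name are illustrative assumptions, not any trial's actual analysis.

# Minimal sketch: one-sided non-inferiority check for a risk difference
# p_new - p_old against an assumed margin delta (normal approximation).
from scipy.stats import norm

def noninferior(x_new, n_new, x_old, n_old, delta=0.10, alpha=0.05):
    """Claim non-inferiority if the one-sided lower confidence bound for
    p_new - p_old lies above -delta."""
    p_new, p_old = x_new / n_new, x_old / n_old
    diff = p_new - p_old
    se = (p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old) ** 0.5
    lower = diff - norm.ppf(1 - alpha) * se
    return diff, lower, lower > -delta

diff, lower, ok = noninferior(80, 100, 82, 100)  # made-up counts
print(f"diff = {diff:.3f}, lower bound = {lower:.3f}, non-inferior: {ok}")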


Underutilized designs

Factorial trial

Non-inferiority trial

Cross-over (self-control) trial
  (AB/BA sequences; order and sequence effects; LDA; requires repeatable endpoints, quick response, stable disease, and no carryover)

Community trial

Natural experiment

Comprehensive cohort trial


Design-
Phase

Failure to recognize the phase of a trial (NASCIS)

Phases:

Early development

Translational, dose-finding

Safety and activity

Comparative trials

Failure to perform a pilot/feasibility study

(INDA: Investigational New Drug Application)
Design-
Sampling

Failure to report inclusion/exclusion criteria

Failure to strike a balance between:
  Homogeneity
  Generalizability

Modified Zelen's design (double randomized consent trial)
  Pros: includes more patients
  Cons: lack of blinding; power issues


Design-
Randomization

Method of randomization not clearly stated
  Simple, restricted, or adaptive: which, and why?

Types:
  Simple: unpredictable, but risks imbalance
  Restricted (blocking, stratification): keeps groups balanced, including over time if the trial stops early; allocations can be predicted if unmasked -> use more than one block size (see the sketch below)
  Adaptive (minimization, play-the-winner)

How to have good randomization?
  Generated by an independent group
  Reproducible, yet unpredictable
  Supports adherence and blinding
  Verified and audited with periodic checks
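
A minimal Python sketch of the "more than one block size" point above: permuted-block allocation with randomly mixed block sizes. The arm labels, block sizes, and seed are illustrative assumptions.

# Permuted-block randomization with mixed block sizes.
import random

def permuted_block_schedule(n, arms=("A", "B"), block_sizes=(4, 6), seed=2011):
    """Allocation list of length n built from randomly chosen permuted blocks:
    balanced within each block, but with the block size hard to predict."""
    rng = random.Random(seed)                     # fixed seed -> reproducible schedule
    schedule = []
    while len(schedule) < n:
        size = rng.choice(block_sizes)            # vary the block size
        block = list(arms) * (size // len(arms))  # equal counts per arm in a block
        rng.shuffle(block)                        # permute within the block
        schedule.extend(block)
    return schedule[:n]

print(permuted_block_schedule(12))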


Design-
Blinding/masking

Hard to mask in device trials

Patient bias (placebo/sham effect)

Investigator bias

Evaluator bias

PROBE design (Prospective Randomized Open Blinded End-point)
Design-
Measurement

Instrument utilization
  Even the VAS is not optimal for pain
  Objective measures are better (e.g., heart rate)

Suboptimal measurement

Surrogate outcomes/exposures (CAST trial)

Duration of follow-up
Design-
Power

Failure to estimate recruitment feasibility

Failure to report withdrawals

Failure to adjust calculations for interim analyses

Failure to use an efficient trial design
  Non-inferiority trial
  CAST and the one-sided issue (a sample-size sketch follows)
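
A minimal sketch of a two-proportion sample-size estimate (normal approximation), showing how the one-sided choice changes the answer; the event rates, alpha, and power are illustrative assumptions, not CAST's actual design values.

# Approximate sample size per arm for comparing two proportions.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80, two_sided=True):
    """n per arm to detect p1 vs. p2 at the given alpha and power."""
    z_a = norm.ppf(1 - alpha / (2 if two_sided else 1))
    z_b = norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p1 - p2) ** 2) + 1

print(n_per_arm(0.10, 0.15))                   # two-sided
print(n_per_arm(0.10, 0.15, two_sided=False))  # one-sided needs fewer patients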


Analysis-
Adherence

As-treated analysis
  Compares those who actually received treatment vs. control
  Ignores randomization

Per-protocol analysis
  Compares those who complied with the assigned treatment
  Compares compliers in the treatment group with the full control group

Intention-to-treat analysis
  Standard estimate
  Ignores compliance; uses randomization only

Instrumental variable/complier average causal effect (IV/CACE)

(A toy comparison of these analyses follows.)
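
A toy numeric illustration (entirely made-up data) of how intention-to-treat, as-treated, and per-protocol analyses can disagree when compliance is selective:

# Each record: (assigned arm, arm actually received, outcome 1 = success).
records = [
    ("T", "T", 1), ("T", "T", 1), ("T", "C", 0), ("T", "C", 0),  # two crossovers
    ("C", "C", 0), ("C", "C", 1), ("C", "C", 0), ("C", "C", 0),
]

def rate(rows):
    return sum(r[2] for r in rows) / len(rows) if rows else float("nan")

itt_t = rate([r for r in records if r[0] == "T"])                  # by assignment
itt_c = rate([r for r in records if r[0] == "C"])
at_t = rate([r for r in records if r[1] == "T"])                   # by receipt
at_c = rate([r for r in records if r[1] == "C"])
pp_t = rate([r for r in records if r[0] == "T" and r[1] == "T"])   # compliers only
print(f"ITT: {itt_t:.2f} vs {itt_c:.2f}; as-treated: {at_t:.2f} vs {at_c:.2f}; "
      f"per-protocol treated: {pp_t:.2f} vs full control {itt_c:.2f}")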


Analysis-
Multiplicity

Failure to include a multiple-comparison correction (NASCIS)

Inflation of Type I error (a Bonferroni sketch follows)

Post-hoc subgroup analyses
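
A minimal sketch of the simplest multiplicity correction (Bonferroni); the p-values are made up for illustration.

# Bonferroni correction: divide alpha by the number of comparisons.
p_values = [0.012, 0.034, 0.049, 0.21]   # made-up p-values from 4 comparisons
alpha = 0.05
adjusted = alpha / len(p_values)
for p in p_values:
    verdict = "significant" if p < adjusted else "not significant"
    print(f"p = {p:.3f}: {verdict} at corrected alpha = {adjusted:.4f}")
# Uncorrected, three of the four would cross 0.05; that is the Type I
# error inflation noted above.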


Analysis-
statistical error

Typical errors with Student's t-test

Failure to verify test assumptions

Use of an unpaired t-test for paired data, or vice versa (sketch below)

Unequal sample sizes for a paired t-test

Improper multiple pair-wise comparisons
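
A minimal scipy sketch of the paired-vs-unpaired pitfall; the before/after values are made-up measurements on the same six patients.

from scipy.stats import ttest_ind, ttest_rel

before = [140, 152, 138, 147, 160, 155]   # e.g., systolic BP pre-treatment
after = [132, 148, 135, 139, 151, 150]    # same patients post-treatment

# Correct: the paired test uses within-patient differences.
print("paired:  ", ttest_rel(before, after))
# Pitfall: treating paired data as independent discards the pairing
# and usually misstates the p-value.
print("unpaired:", ttest_ind(before, after))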


Analysis-
statistical error

Typical errors with Chi-square tests

No Yates continuity correction reported for small numbers

Use of chi-square when the expected number in a cell is <5

No explicit statement of the tested null hypotheses
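
A minimal scipy sketch of the small-expected-count issue; the 2x2 counts are made up. scipy's chi2_contingency applies the Yates continuity correction to 2x2 tables by default, and Fisher's exact test sidesteps the expected-count problem.

from scipy.stats import chi2_contingency, fisher_exact

table = [[1, 9], [8, 4]]  # made-up 2x2 counts; some expected counts are < 5

chi2, p_yates, dof, expected = chi2_contingency(table)  # Yates correction on by default
print("expected counts:", expected)
print("chi-square with continuity correction: p =", round(p_yates, 4))

# With expected counts below 5, Fisher's exact test is the safer choice:
odds, p_fisher = fisher_exact(table)
print("Fisher's exact test: p =", round(p_fisher, 4))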
Presentation-
reporting

Reporting p-values only, with no confidence intervals (see the sketch below)

CIs given for each group rather than for contrasts

"p = NS", "p < 0.05", or other arbitrary thresholds instead of exact p-values

Numerical information given to an unrealistic level of precision
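
A minimal sketch of the recommended reporting style: a confidence interval for the contrast itself plus an exact p-value; the counts are made up.

from scipy.stats import fisher_exact

x1, n1, x2, n2 = 30, 100, 45, 100            # made-up event counts per arm
p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # 95% CI for the contrast itself
p_exact = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])[1]
print(f"risk difference = {diff:.2f} (95% CI {lo:.2f} to {hi:.2f}), p = {p_exact:.3f}")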
Presentation-
interpretation

"Non-significant" interpreted as "no effect" or "no difference"

Drawing conclusions not supported by the data

Significance claimed without any data analysis or statistical test mentioned

Disregard for Type II error when reporting non-significant results

No discussion of the multiple-testing problem when multiple significance tests were done

Failure to discuss sources of potential bias and confounding factors
What we need is a brand new idea that has been thoroughly tested!
Questions?
