
Validity and Reliability

Objective: at the end of the lecture,
the student will be able to:
1. Know the concepts & definitions of validity & reliability
2. List the importance and impact of validity & reliability
3. Specify strategies to assess validity & reliability
4. List strategies to enhance validity & reliability
5. Describe the major types of bias
Contents:

1. Validity
2. Reliability
• Definition and synonyms
• Important points
• Assessing validity & reliability
• Strategies to enhance validity & reliability

3. Major types of bias

In any medical or epidemiological study, a major
consideration is to obtain:
• Valid measurements
• Reliable measurements
of the exposure factors and outcomes of interest
in the study population,
"WITHOUT BIAS and ERRORS"
or at least to minimize them as far as possible.
To achieve a high-quality study:

• Obtain the right answers to the study questions
• Use a good study design
• Make the measurements valid and reliable
• Control for any possible bias
• Maintain good cooperation between
  * the research group and
  * the study population
Screening for fasting blood cholesterol profile among people
(diagram: six subjects x1–x6, each measured three times,
e.g. x11, x12, x13 for subject x1)

Screening for fasting blood cholesterol profile among people
(diagram: the same six subjects x1–x6, each measured only once:
x11, x21, x31, x41, x51, x61)

Instrument or Research Tool

• "equipment hardware"
– a red blood cell counter
– a pH meter
– an electronic weighing machine
• "paperware"
– a questionnaire
– a weekly diet diary
• "peopleware"
– observers/investigators
– technicians
How good is the instrument or tool?

(diagram) The instrument or tool produces a measurement,
which is compared with the true value (the truth).
A good measurement is:
– valid/accurate
– precise/reliable
– without bias or error, or with bias minimized
What are accuracy & precision?

• What do you think of first when talking
about validity & reliability?
• What is the difference between validity &
reliability?
• Why are validity & reliability important in
conducting any medical research,
both in laboratory & field settings?
PRECISION

DEFINITION:
A precise measurement is one that has nearly
the same value each time it is measured.

SYNONYMS:
• reliability
• repeatability
• reproducibility
• consistency
• agreement
IMPORTANT POINTS

• Precision depends on:
– sample size
– efficiency of the study
• Precision influences the power of a study
• Precision, reliability and consistency are
affected by RANDOM ERROR
ASSESSING PRECISION

• Using the standard deviation (S.D.) and variance (S.D.²)

• Using the coefficient of variation: CV = S.D. / mean

• Using the Kappa statistic

• Using Cronbach's alpha (see the sketch below)
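A minimal sketch of these statistics, using hypothetical cholesterol readings, two hypothetical raters, and an invented three-item questionnaire (none of these numbers come from the lecture); only numpy is assumed.

```python
import numpy as np

# --- S.D., variance and coefficient of variation of repeated readings ---
readings = np.array([198.0, 205.0, 201.0, 199.0, 203.0])   # mg/dL, repeated measures
sd = readings.std(ddof=1)                                   # sample standard deviation
cv = sd / readings.mean()                                    # CV = S.D. / mean
print(f"SD = {sd:.2f}, variance = {sd**2:.2f}, CV = {cv:.3f}")

# --- Cohen's kappa: agreement between two observers beyond chance (binary ratings) ---
rater_a = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # e.g. "high cholesterol" yes/no
rater_b = np.array([1, 0, 0, 1, 0, 1, 1, 0])
p_observed = np.mean(rater_a == rater_b)
p_chance = (np.mean(rater_a) * np.mean(rater_b)
            + np.mean(1 - rater_a) * np.mean(1 - rater_b))
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.2f}")

# --- Cronbach's alpha: internal consistency of a k-item questionnaire ---
items = np.array([[3, 4, 3],        # rows = subjects, columns = items
                  [2, 2, 3],
                  [4, 5, 4],
                  [1, 2, 2],
                  [5, 4, 5]])
k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```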


Strategies for enhancing precision

1. Standardizing measurement methods

• preparing study protocols
• preparing an operations manual
• writing specific guidelines or instructions
for making each measurement
• these also serve as the basis for describing
the methods when results are reported
Strategies for enhancing precision
• preparing an operations manual
– write down precisely:
- how to prepare the environment
and the subject
- how to carry out
and record the interview
- how to calibrate the instrument
Strategies for enhancing precision
• writing specific guidelines or
instructions for making
the measurement

ensures uniform performance over
the duration of the study
Strategies for enhancing precision
2. Training and certifying the observers
• improving the consistency of measurement
techniques (especially with several observers)
• performing a pilot study
– to test the techniques specified
in the operations manual
Strategies for enhancing precision

3. Refining the instruments
• writing or rewording questionnaires
and interview items to increase clarity

4. Automating the instruments
• using automatic mechanical devices
Strategies for enhancing precision

5. Repeating the measurement
• the impact of random error from any source
can be reduced by:
– repeating the measurement
– using the mean of two or more
readings, as sketched below
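A minimal simulation (hypothetical true value and error size, not from the lecture) showing that the mean of three readings scatters about √3 times less around the true value than a single reading:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 200.0          # a subject's "true" cholesterol, mg/dL
random_error_sd = 10.0      # size of the random measurement error

# Simulate 1000 visits: one reading per visit vs. the mean of three readings.
single = true_value + rng.normal(0, random_error_sd, size=1000)
triplicate = true_value + rng.normal(0, random_error_sd, size=(1000, 3))
averaged = triplicate.mean(axis=1)

print(f"SD of single readings:    {single.std(ddof=1):.2f}")     # ~10
print(f"SD of mean of 3 readings: {averaged.std(ddof=1):.2f}")   # ~10 / sqrt(3) ≈ 5.8
```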
ACCURACY

DEFINITION:
The degree to which the results of a measurement
correspond to the true state or truth.

SYNONYMS:
• validity
• conformity
IMPORTANT POINTS

• accuracy is a function of
"SYSTEMATIC ERROR"
• accuracy influences the internal and
external validity of the study
• the greater the systematic error,
the less accurate the variable
IMPORTANT POINTS

• Systematic error can be attributed to:
– methodological aspects of the
study design or analysis
– selection of subjects
– quality of the information obtained
– confounding
– effect modification
– misclassification
ASSESSING ACCURACY

Comparison with reference techniques,
i.e. "gold standards" (see the sketch below)
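A minimal sketch (invented numbers) of what comparison with a gold standard can look like in practice: the mean bias of a field instrument against laboratory reference values, and the sensitivity/specificity of a screening classification against a gold-standard diagnosis.

```python
import numpy as np

# Continuous measure: field instrument vs. laboratory reference, same subjects
field     = np.array([210.0, 195.0, 232.0, 188.0, 250.0])
reference = np.array([205.0, 190.0, 225.0, 186.0, 242.0])
bias = (field - reference).mean()          # systematic error: average over-reading
print(f"mean bias = {bias:+.1f} mg/dL")

# Categorical measure: screening result vs. gold-standard diagnosis (1 = disease)
screen = np.array([1, 1, 0, 0, 1, 0, 1, 0])
gold   = np.array([1, 0, 0, 0, 1, 1, 1, 0])
sensitivity = np.sum((screen == 1) & (gold == 1)) / np.sum(gold == 1)
specificity = np.sum((screen == 0) & (gold == 0)) / np.sum(gold == 0)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```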
Strategies for enhancing accuracy

1. Standardizing measurement methods


2. Training and certifying the observers
3. Refining the instruments
4. Automating the instruments
5. Making unobtrusive measurements
6. Blinding
7. Calibrating the instrument (see the sketch below)
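As an illustration of item 7, instrument calibration can be sketched as fitting a linear correction against known reference standards; the values below are invented and numpy's polyfit does the fit.

```python
import numpy as np

known_standards = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # true values
instrument_read = np.array([108.0, 156.0, 207.0, 259.0, 306.0])   # what the device shows

# Linear calibration: true ≈ slope * reading + intercept
slope, intercept = np.polyfit(instrument_read, known_standards, deg=1)

new_reading = 220.0
corrected = slope * new_reading + intercept
print(f"raw = {new_reading}, calibrated = {corrected:.1f}")
```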
MAJOR TYPES OF BIAS

• Observer bias
• Subject bias
• Instrument bias
• Information bias
• Selection bias
MAJOR TYPES OF BIAS
• Observer bias
a consistent distortion in reporting or
measurement by the observer

- making more intensive measurements in
certain subjects
- asking questions about specific exposures several
times of cases but only once of controls
MAJOR TYPES OF BIAS
• Observer bias

Ex. a tendency to underestimate blood


pressure in cases known to be
receiving treatment

Ex. a more persistent search of medical


records for a history of smoking
cigarettes in patients known to
have lung cancer
MAJOR TYPES OF BIAS

• Subject bias
a consistent distortion of
measurement by the study subject

- selective recall or reporting of an event
(also called respondent bias
or recall bias)
MAJOR TYPES OF BIAS

• Instrument bias
- may result from faulty functioning of a
mechanical instrument
- may result from a technique or tool that is
inappropriate for the objective of the measurement,
e.g. leading questions on a questionnaire
MAJOR TYPES OF BIAS

• Information bias
a distortion in the estimate of an
effect or variable due to:

* measurement error
* misclassification of subjects on the
measured variable
* invalid measurement
MAJOR TYPES OF BIAS

• Information bias

* incorrect diagnostic criteria


* inadequacies in previously recorded data
* unequal diagnostic surveillance
among the exposure groups
in follow-up studies
MAJOR TYPES OF BIAS

• Selection bias
a distortion in the estimate of effect
resulting from how subjects are selected
into the study population

e.g. "self-selection bias"
MAJOR TYPES OF BIAS
Selection bias can result from:

- the choice of groups to be compared
(in all types of studies)
- the choice of sampling frame
- loss to follow-up or NON-RESPONSE
during data collection
(in follow-up studies)
MAJOR TYPES OF BIAS

Selection bias can result from:

- selective surveillance: diagnostic
surveillance varies with exposure status
- more intensive measurements in
certain subjects
SUMMARY

1. Reliability: precision, reproducibility
(threatened by RANDOM ERROR)

2. Validity: accuracy, conformity
(threatened by SYSTEMATIC ERROR, i.e. bias)
MAJOR TYPES OF BIAS

• Observer bias
• Subject bias (recall bias, respondent bias)
• Instrument bias
• Information bias
• Selection bias
Reliability and validity of measurement

Definition
  Reliability: the degree to which a variable has nearly the same
  value when measured several times
  Validity: the degree to which a variable actually represents
  what it is supposed to represent

Best way to assess
  Reliability: comparison among repeated measures
  Validity: comparison with a reference standard

Value to study
  Reliability: increases the power to detect effects
  Validity: increases the validity of conclusions

Threatened by
  Reliability: random error (variance), contributed by the observer,
  the subject and the instrument
  Validity: systematic error (bias), contributed by the observer,
  the subject and the instrument
Illustration of the difference between
Precision and Accuracy

(diagram: four panels of measurements scattered around a target value)
• good precision, poor accuracy
• poor precision, good accuracy
• good precision, good accuracy
• poor precision, poor accuracy
DIFFERENCES BETWEEN
VALIDITY AND RELIABILITY

(diagram: frequency distributions of four measurements A–D
plotted against the true value)
A: valid and reliable
B: valid but not reliable
C: not valid but reliable
D: not valid and not reliable
