
VALIDITY AND RELIABILITY

Measurement Reliability
● Reliability = consistency in measurement
– Answers the question: Given that nothing else changes,
will you get the same results if you repeat the
measurement?
● Reliability does not ensure accuracy, any more than precision
does: a measure can be consistently off-target.
Techniques for Dealing with the Basic Problem of Reliability
● Test-Retest Method
● Alternate Form Method
● Internal Consistency Method
– Split-half Reliability
– Item-total Reliability
● Inter-rater Reliability
Test-retest Method
● Administering the same survey to the same respondents at two
different points in time.
● The interval between the two administrations can range from a
few hours to several years.
● Test-retest reliability assumes that test takers have not changed
over the period between the two administrations.
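A minimal sketch of this computation, assuming the standard approach of taking the Pearson correlation between the two sets of scores (all data below are hypothetical):

```python
import numpy as np

# Hypothetical scores for five respondents on the same survey,
# administered at two different points in time.
time1 = np.array([12, 18, 15, 20, 9])
time2 = np.array([13, 17, 16, 19, 10])

# Test-retest reliability is estimated as the correlation between the
# two administrations; np.corrcoef returns the 2x2 correlation matrix,
# and the off-diagonal entry is the coefficient of interest.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```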
Alternate-form Method
● This technique requires a researcher to develop two different
forms or versions of the same measure from the same pool of
measurement items.
● To eliminate practice effects and other problems with the test-
retest method (e.g., reactivity), test developers often give two
highly similar forms of the test to the same people at different
times.
● Reliability is then estimated by correlating scores on the two
forms, just as in the test-retest sketch above.
Internal Consistency Method
● This technique is based on the assumption that the various
items in a given measure should correlate positively with one
another.
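As a sketch of this assumption, the mean inter-item correlation can be inspected directly; Cronbach's alpha (the most widely used internal-consistency coefficient, though not named on these slides) summarizes it in a single number. The data are hypothetical:

```python
import numpy as np

# Hypothetical Likert-type responses: 6 respondents (rows) x 4 items (columns).
items = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

# The items should correlate positively with one another.
corr = np.corrcoef(items, rowvar=False)              # 4x4 inter-item matrix
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print(f"Mean inter-item correlation: {off_diag.mean():.2f}")

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")
```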
Split-Half Reliability
● Split-half methods measure the internal consistency of a test.
● A measure of consistency in which a test is split in two and the
scores for the two halves are compared with one another. If the
halves correlate strongly, the experimenter can be reasonably
confident that the test is measuring the same thing throughout
(a computational sketch follows below).
Split-half methods also eliminate or reduce the following problems:
● The need for two administrations of a test;
● The difficulty of developing another form;
● Carryover and reactivity effects;
● Changes in a person over time.
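A minimal sketch of the split-half computation, assuming an odd-even split of hypothetical item scores; the Spearman-Brown correction applied at the end is the standard adjustment for each half being only half as long as the full test:

```python
import numpy as np

# Hypothetical responses: 6 respondents (rows) x 8 items (columns).
items = np.array([
    [4, 5, 4, 3, 5, 4, 4, 5],
    [2, 3, 2, 2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 4, 3, 3, 4, 3, 3, 4],
    [1, 2, 2, 1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4, 5, 4, 4],
])

# Split the test in two (odd- vs. even-numbered items) and score each half.
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# Correlate the two half-scores, then apply the Spearman-Brown
# correction to estimate the reliability of the full-length test.
r_half = np.corrcoef(half1, half2)[0, 1]
r_full = 2 * r_half / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```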
Item-total Reliability
● This technique correlates each item in a measure with the total
score across all the items of the measure.
● Calculating an item-total reliability involves correlating the
score on one item with the total score on the rest of the items.
It is typically used with questionnaire measures.
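A minimal sketch of the item-total computation just described, using hypothetical questionnaire data:

```python
import numpy as np

# Hypothetical questionnaire responses: 6 respondents x 4 items.
items = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

# Correlate each item with the total score on the *rest* of the items,
# as described above (the "corrected" item-total correlation).
for j in range(items.shape[1]):
    rest_total = np.delete(items, j, axis=1).sum(axis=1)
    r = np.corrcoef(items[:, j], rest_total)[0, 1]
    print(f"Item {j + 1}: item-total r = {r:.2f}")
```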
Inter-rater Reliability
● Inter-rater reliability evaluates reliability across different
people.
● It is based on the degree of agreement among raters/judges.
● This is the best way to assess reliability when you are using
observational measures.
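A sketch of an agreement-based computation, assuming two raters assign categorical codes to the same observations; percent agreement follows the slide directly, and Cohen's kappa (a standard chance-corrected statistic, not named here) is added for context:

```python
import numpy as np

# Hypothetical codes assigned by two raters to ten observations.
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Percent agreement: the proportion of observations coded identically.
p_obs = np.mean(rater_a == rater_b)

# Cohen's kappa corrects agreement for chance:
#   kappa = (p_obs - p_exp) / (1 - p_exp),
# where p_exp is the agreement expected if each rater coded at random
# according to their own marginal rates.
categories = np.union1d(rater_a, rater_b)
p_exp = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"Agreement = {p_obs:.2f}, Cohen's kappa = {kappa:.2f}")
```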
To sum it up...
● Test-retest: Same people, different times.
● Alternate-form: Same people, different times, similar test.
● Internal consistency: Different questions, same construct.
● Inter-rater: Different people, same test.
Validity
● Validity = “truth” in measurement
– Answers the question: “Are you measuring what you
intend to measure?”
Content Validity
● Content Validity refers to how well a measure covers the
range of meanings, or the dimensions, included within the
concept.
● Face Validity
● Expert Panel Validity
● Factor Analysis
Face Validity
● Face Validity rests on the investigator’s subjective
evaluation of the validity of a measuring instrument.
● In practice, face validity does not relate to the question of
whether an instrument measures what the researcher
intends to measure; rather, it concerns the extent to which
the researcher believes that the instrument is appropriate.
Expert Panel Validity
● In expert panel validity, a group of experts in the area
evaluates a measure’s adequacy.
Statistical Procedures: Factor Analysis
● Factor analysis is a statistical approach that can be used to
analyze interrelationships among a large number of
variables and to explain these variables in terms of their
common underlying dimensions (factors).
● Factor analysis could be used to verify your
conceptualization of a construct of interest.
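A minimal sketch of this use of factor analysis, with simulated data in which six hypothetical items are driven by two underlying dimensions (using scikit-learn's FactorAnalysis; the loadings should recover the two item clusters):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated data: 100 respondents x 6 items, where items 1-3 load on one
# underlying factor and items 4-6 on another (plus measurement noise).
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 100))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + rng.normal(scale=0.5, size=(100, 6))

# Extract two factors; each row of components_ holds one factor's loadings,
# showing which items share a common underlying dimension.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)
print(np.round(fa.components_, 2))
```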
Criterion Validity
● The degree to which an instrument relates to an external
criterion that is believed to be another indicator or measure
of the same variable that the instrument intends to measure.
● Predictive validity: the criterion is measured after the
instrument is administered.
● Concurrent validity: the criterion is measured at the same
time as the instrument.
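A minimal sketch with hypothetical selection-test data: predictive validity correlates the instrument with a criterion measured later, concurrent validity with one measured at the same time:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data for eight applicants: a selection-test score, a job
# performance rating collected a year later (predictive criterion), and a
# supervisor rating collected at the same time (concurrent criterion).
test = np.array([55, 62, 70, 48, 80, 66, 59, 73])
later_performance = np.array([3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 3.2, 4.1])
current_rating = np.array([3.0, 3.5, 3.9, 2.9, 4.4, 3.5, 3.3, 4.0])

# Criterion validity is the correlation between the instrument and the
# external criterion; pearsonr also returns a significance test.
r_pred, p_pred = pearsonr(test, later_performance)   # predictive validity
r_conc, p_conc = pearsonr(test, current_rating)      # concurrent validity
print(f"Predictive r = {r_pred:.2f} (p = {p_pred:.3f})")
print(f"Concurrent r = {r_conc:.2f} (p = {p_conc:.3f})")
```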
Construct Validity
● The degree to which a measure relates to other variables as
expected within a general theoretical framework.

R e s e a r c h P r o b le m

Citations R e s e a r c h Q u e s t io n s
required ( a n d / o r h y p o t h e s e s )

C o n c e p t C o n c e p t C o n c e p t

V a r ib l Ve a r i b Vl e a r i a bV l ea r i a bV l ea r i a b le

M e a s uM r ee a s uM r ee a s uM r ee a s uM r ee a s u r e
Final Points
● The concepts of validity and reliability are inseparable
from measurement.
● Reliability testing should be done quantitatively; testing it
qualitatively tends to be problematic.
● Reliability is a necessary but insufficient condition for
validity.
● A good score on a statistical reliability test does not, by
itself, ensure validity.
Validity-reliability Bull's Eye (Babbie, 1998)
[Figure: bull's-eye targets contrasting a measure that is both valid and reliable with one that is reliable but invalid]
Does reliability imply validity?
● No! Something can be highly reliable, but invalid. On the
other hand, to be valid, something must be reliable.
The End...
