Standards
Qualitative research methods involve the systematic
collection, organisation, and interpretation of textual
material derived from talk or observation. They are used in
the exploration of meanings of social phenomena as
experienced by individuals themselves, in their natural
context.2,5 Qualitative research is still regarded with
scepticism by the medical community, accused of a
subjective nature and an absence of facts. Although the
adequacy of guidelines has been vigorously debated within
this cross-disciplinary field,6,7 scientific standards, criteria,
and checklists do exist.3,8-11 However, as Chapple and
Rogers12 point out, medical researchers often encounter
difficulties when they try to apply guidelines designed by
social scientists, which deal with issues important in their
own discipline, but which are not necessarily generically
valid as scientific standards.
Hamberg and colleagues,13 for example, claim that the
established criteria for scientific rigour in quantitative
research cannot be applied to qualitative studies. Referring
to Lincoln and Guba,2 they suggest alternative criteria:
credibility, dependability, confirmability, and transferability.
Lancet 2001; 358: 483-88
Section for General Practice, Department of Public Health and
Primary Health Care, University of Bergen, Ulriksdal 8C, N-5009
Bergen, Norway; and Department of General Practice and Research
Institute, University of Copenhagen, Denmark (Prof K Malterud MD)
(e-mail: kirsti.malterud@isf.uib.no)
Specific challenges
Although there are many similarities between qualitative
and quantitative research methods, some procedures are
very different, because of the different nature and
assumptions of the data and questions to be answered. The
effect of an investigator on a study, the principles and
consequences of sampling, and the process of organisation
and interpretation during analysis, all affect research, and
are closely related to different aspects of validity (panel 2).
Reflexivity
A researcher's background and position will affect what
they choose to investigate, the angle of investigation, the
For personal use. Only reproduce with permission from The Lancet Publishing Group.
Panel: metaphors and descriptions

Reflexivity
  Preconceptions (the researcher's backpack): previous personal and professional experiences, prestudy beliefs about how things are and what is to be investigated, motivation and qualifications for exploration of the field, and perspectives and theoretical foundations related to education and interests
  Theoretical frame of reference: theories, models, and notions applied for interpretation of the material and for understanding a specific situation
  Metapositions (the participating observer's sidetrack): strategies for creating adequate distance from a study setting that you are personally involved in
Transferability
  External validity: the range and limitations for application of the study findings, beyond the context in which the study was done
[Panel fragment: share preconceptions; establish metapositions; transferability; interpretation and analysis]
Conclusions
Qualitative and quantitative methods
When qualitative and quantitative approaches are
combined, the methods are often applied in sequential
order. Semistructured interviews or observational data
might, for example, be used to explore hypotheses or
variables when planning a large epidemiological study,
resulting in enhanced sensitivity and accuracy of survey
questions and statistical strategy. In such instances,
qualitative studies might be thought of as precursors of
"real science". However, qualitative studies can also be
added to quantitative ones, to gain a better understanding
of the meaning and implications of the findings. More
creative combinations are seen in triangulation.3 The idea
of triangulation originated from a craft used by land
surveyors, who increase the validity of a map by
incorporating measures from different angles. Multiple and
diverse observations can enrich the description of a
phenomenon: an elephant, for instance, looks very different
when seen from above or below. Someone reading a report
might gain a better understanding of what goes on in a
medical consultation if data from various sources, such as
doctors and patients,22 have been combined. The aim of
triangulation is to increase the understanding of complex
phenomena, not criteria-based validation, in which
agreement among different sources confirms validity.
Quantification of phenomena or categories can be done
to gain an overview of qualitative material, but such
numbers should be applied with caution. Quasistatistical
analysis of textual material, also termed content analysis,
has gained some popularity, and computer programs are
available to count the occurrence of specific words or
utterances in a text. However, the scientific logic of
statistics and transferability is far from accomplished in a
non-representative sample in which questions were not
asked in a standardised way to all participants. We do not
know to whom the findings can be transferred, and we do
not know the potential answers from informants who
simply did not mention the issue. Prevalences, distributions, and
T86981045
Kirsti Malterud, 2001, Qualitative research: standards, challenges, and guidelines. Lancet, 358: 483-88.
InterViews
Truths?" Aglie laughed. "Still, amid all the nonsense there are
some unimpeachable truths. Gentlemen, would you follow me to the
window?"
He threw open the shutters dramatically and pointed. At the corner
of the narrow street and the broad avenue, stood a little wooden kiosk,
where, presumably, lottery tickets were sold.
"Gentlemen," he said, "I invite you to go and measure that kiosk. You
will see that the length of the counter is one hundred and forty-nine
centimeters; in other words, one hundred-billionth of the distance
between the earth and the sun. The height at the rear, one hundred and
seventy-six centimeters, divided by the width of the window, fifty-six
centimeters, is 3.14. The height at the front is nineteen decimeters, equal,
in other words, to the number of years of the Greek lunar cycle. The
sum of the heights of the two front corners and the two rear corners is
one hundred and ninety times two plus one hundred and seventy-six
times two, which equals seven hundred and thirty-two, the date of the
victory at Poitiers. The thickness of the counter is 3.10 centimeters, and
the width of the cornice of the window is 8.8 centimeters. Replacing the
numbers before the decimals by the corresponding letters of the
alphabet, we obtain C for ten and H for eight, or C10H8, which is the
formula for naphthalene."
"Fantastic," I said. "You did all these measurements?" (Eco, p. 288).
13
The Social Construction of Validity
I now turn to the issue of how to get beyond the extremes of a
subjective relativism where everything can mean everything, and an
absolutist quest for the one and only true, objective meaning.
Verification of knowledge is commonly discussed in the social
sciences in relation to the concepts of reliability, validity, and generalizability. The main emphasis in this chapter will be on validation,
treating the interdependence of philosophical understandings of truth,
social science concepts of validity, and the practical issues of verifying
interview knowledge. Classical conceptions of truth will be included
as well as a postmodern approach leading to validity as social construction. The ensuing practical consequences for interview research
involve an emphasis on the quality of the craftsmanship of research
and on communicative and pragmatic forms of validation.
[...] and based on subjective interpretations."
Some qualitative researchers have a different attitude toward questions of validity, reliability, and generalizability. These are simply
ignored or dismissed as oppressive positivist concepts that
hamper creative and emancipatory qualitative research. Other qualitative researchers, Lincoln and Guba (1985) for instance, have
gone beyond the relativism of a rampant antipositivism and have
reclaimed ordinary language terms to discuss the truth value of their
findings, using concepts such as trustworthiness, credibility, dependability, and confirmability.
From a postmodern perspective, issues of reliability, validity, and
generalizability are sometimes discarded as leftovers from a modernist
correspondence theory of truth. There are multiple ways of knowing
and multiple truths, and the concept of validity indicates a firm
boundary line between truth and nontruth. In contrast hereto, Lather
(1995), from a feminist post-structural frame valorizing practice,
addresses validity as an incitement to discourse, a fertile obsession,
and attempts to reinscribe validity in ways that use the postmodern
problematic to loosen the master code of positivism.
I will return to external critiques of the trustworthiness of interview
findings in the book's conclusion, Chapter 15. In the present chapter
I will attempt to conceptualize generalizability, reliability, and validity
in ways appropriate to qualitative research. The discussion represents
a rather moderate postmodernism; although rejecting the notion of
an objective universal truth, it accepts the possibility of specific local,
personal, and community forms of truth, with a focus on daily life and
local narrative (Kvale, 1992; Rosenau, 1992). The present approach
is not to reject the concepts of reliability, generalizability, and validity,
but to reconceptualize them in forms relevant to interview research.
The understanding of verification starts in the lived world and daily
language, where issues of reliable observations, of generalization from
one case to another, of valid arguments, are part of everyday social
interaction.
Generalizability
A persistent question posed to interview studies is whether the
results are generalizable. In everyday life we generalize more
are sought by natural science-oriented schools such as behaviorism, whereas the uniqueness of the individual person has dominated
in humanistic psychology. In a postmodern approach the quest for
universal knowledge, as well as the cult of the individually unique, is
replaced by an emphasis on the heterogeneity and contextuality of
knowledge, with a shift from generalization to contextualization.
Forms of Generalizability. The issue of qualitative generalization
has been treated particularly in relation to case studies. Stake (1994)
provides this definition: "Qualitative case study is characterized by the
main researcher spending substantial time, on site, personally in
contact with activities and operations of the case, reflecting, revising
meanings of what is going on" (p. 242). Three forms of generalizability will be outlined based on Stake's discussion of generalization from
case studies: naturalistic, statistical, and analytic.
Naturalistic generalization rests on personal experience: It develops
for the person as a function of experience; it derives from tacit
knowledge of how things are and leads to expectations rather than
formal predictions; it may become verbalized, thus passing from tacit
knowledge to explicit propositional knowledge.
Statistical generalization is formal and explicit: It is based on subjects selected at random from a population. With the use of inferential
statistics the confidence level of generalizing from the selected sample
to the population at large can be stated in probability coefficients.
When the interviewees are selected at random and the interview
findings quantified, the findings may be subjected to statistical generalization. Thus for the correlation found between talkativity and grade
point average it was possible to state that there was only a 1/1,000
probability that this was a chance finding limited to the 30 randomly
chosen pupils of the grade study (Chapter 12, Questions Posed to an
Interview Text).
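A probability statement of this kind comes from inferential statistics. As a rough illustration only, the same sort of figure can be estimated with a permutation test: shuffle one variable many times and count how often a correlation as strong as the observed one arises by chance. The data below are invented stand-ins (the grade study's actual scores are not reproduced in this text), and the names `talk` and `gpa` are hypothetical.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def permutation_p(xs, ys, trials=10_000, seed=0):
    """Estimate the two-sided probability that a correlation at least as
    strong as the observed one arises by chance, by shuffling ys."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)  # local copy; the caller's list is not mutated
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Invented scores for 30 "pupils": talkativity and grade point average,
# with a built-in association plus noise.
rng = random.Random(42)
talk = [rng.gauss(5, 1.5) for _ in range(30)]
gpa = [0.4 * t + rng.gauss(0, 1) for t in talk]

print("r =", round(pearson_r(talk, gpa), 2))
print("chance probability ~", permutation_p(talk, gpa))
```

With only 30 cases, such a test makes explicit what "stated in probability coefficients" means: the smaller the estimated probability, the less plausible it is that the observed correlation is a chance finding limited to the sampled pupils.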
Thus it is the receiver of the information who determines the applicability of a finding to a new situation. . . . Like generalizations in law, clinical
generalizations are the responsibility of the receiver of information
rather than the original generator of information, and the evaluator must
be careful to provide sufficient information to make such generalizations
possible. (Kennedy, 1979, p. 672)
Researcher and Reader Generalization. There is an issue here of who
should conduct the analytical generalization from the qualitative
research case: the researcher or the reader and the user? How much
should the researcher formalize and argue generalizations, or leave the
generalizing to the reader? In science, it has commonly been the
researcher who builds up and argues for the generality of his or her
findings, through statistical procedures or by an assertational logic.
For the legal and the clinical cases discussed by Kennedy, it is the judge
or the clinician who makes the judgment of whether a previous case
was sufficiently analogous to be used as a precedent for the present
case. In both instances it is paramount that sufficient evidence is
provided by the researcher for the analytic generalizations to be made.
An example of a reader generalization is Freud's
therapeutic case stories, where his descriptions and analyses have been
so vivid and convincing that readers today still generalize many of the
findings to current cases.
whereby Marx's analysis of wage labor became increasingly generalizable to the situation of workers at large.
A third target of generalization is what could be: locating situations that we believe are ideal and exceptional and studying them to
see what goes on there. As examples, Schofield mentions school classes
with unusual intellectual gains and also well-functioning racially
desegregated schools. In constructivist and postmodern approaches
the emphasis on the "could be" is extended from preconceived ideals
to more open forms. Donmoyer (1990) thus advocates the use of case
studies to teach readers to envisage possibilities, to expand and enrich
the repertoire of social constructions available to practitioners and
others. We may here add the interest in ethnographic studies of cases
demonstrating the rich varieties of human behavior, also indicating
possible ranges for our own society. Gergen (1992) depicts the construction of new worlds as one potential of a postmodern psychology.
Rather than "telling it like it is," the challenge is to "tell it as it may
become." A "generative" theory is designed to unseat conventional
thought and thereby open new and desirable alternatives for thought
and action. Rather than mapping only what is, or predicting future
cultural trends, research becomes one means of transforming culture.
different answers (Chapter 8, Leading Questions). Interviewer reliability in the grade study was discussed on the basis of the categorizations of the subjects' answers (Chapter 11, Control of Analysis). Under
transcription of interviews, an example was given of the intersubjective reliability of the transcripts when the same passage was typed by
two different persons (Chapter 9, Transcription Reliability and Validity). During categorization of the grading interviews, percentages were
reported for the intersubjective agreement between two coders for the
same interviews (Chapter 11, Control of Analysis). Though increasing
the reliability of the interview findings is desirable in order to counteract haphazard subjectivity, a strong emphasis on reliability may
counteract creative innovations and variability.
Box 13.1
Validation at Seven Stages
3. Interviewing. Validity here pertains to the trustworthiness of the subject's reports and the quality of the interviewing itself, which should include a careful questioning
as to the meaning of what is said and a continual checking
of the information obtained as a validation in situ.
is decided through the argumentation of the participants in a discourse. In a hermeneutical approach to meaningful action as a text,
Ricoeur (1971) rejects the position that all interpretations of a text
are equal; the logic of validation allows us to move between the two
limits of dogmatism and skepticism. Invoking the hermeneutical circle
and criteria of falsifiability, he describes validation as an argumentative discipline comparable to the juridical procedures of legal interpretation. Validation is based on a logic of uncertainty and of qualitative probability, where it is always possible to argue for or against
an interpretation, to confront interpretations, and to arbitrate between
them.
A communicative approach to validity is found in several approaches in the social sciences. In psychoanalysis the validity of an
interpretation is worked out in a dialogue between patient and therapist. It is also implied in evaluation studies of social systems; House
(1980) has thus emphasized that in system evaluation, research does
not mainly concern predicting events, but rather whether the audience
of a report can see new relations and answer new but relevant
questions. Cronbach (1980) has advocated a discursive approach
where validity rests on public discussion. The interpretation of a test
is going to remain open and unsettled, the more so because of the role
that values play in action based on tests; the aim for a research report
is to advance sensible discussion: "The more we learn, and the
franker we are with ourselves and our clientele, the more valid the use
of tests will become" (p. 107). In a discussion of narrative research,
Mishler (1990) has conceptualized validation as the social construction of knowledge. Valid knowledge claims are established in a
discourse through which the results of a study come to be viewed as
sufficiently trustworthy for other investigators to rely upon in their
own work.
When conversation is the ultimate context within which knowledge
is to be understood, as argued by Rorty (Chapter 2, Interviews in
Three Conversations), the nature of the discourse becomes essential.
There is today a danger that a conception of truth as dialogue and
communicative validation may become empty global and positive
undifferentiated terms, without the necessary conceptual and theoretical differentiations worked out. Some specific questions concerning
the how, why, and who of communication will now be raised.
tence in the specific area. Taking a lead from the use of "reflecting
teams" in psychotherapy, where the one-way mirror is reversed and
the family in treatment views the therapeutic team's discussions of
their interpersonal interaction (Andersen, 1987), we might also reverse the direction in research and have the subjects listen to and
comment on the researchers' conversations about their interviews.
Validation through negotiations of the community of scholars is
nothing new; in the natural sciences the acceptance of the scientific
community has been the last, ultimate criterion for ascertaining the
truth of a proposition. What is relatively new in qualitative research
in the social sciences is the emphasis on truth as negotiated in a local
context, with extension of the interpretative community to include
the subjects investigated and the lay public. Communicative validation
approximates an educational endeavor where truth is developed in a
communicative process, with both researcher and subjects learning
and changing through the dialogue.
A heavy reliance on intersubjective validation may, however, also
imply a lack of work on the part of the researcher and a lack of
confidence in his or her interpretations, with an unwillingness to take
responsibility for the interpretations. There may be a general populist
trend when leaving the validation of interpretations to the readers, as
in reader response validation, with an abdication to the ideology of a
consumer society.
Power and Truth. Different professional communities may construct knowledge differently, and conflicts may arise about which
professions have the right to decide what is valid knowledge within a
field, such as health, for example. Furthermore, there is the specific
issue of who decides who is a competent and legitimate member of
the interpretative community. The selection of members of the community to make decisions about issues of truth and value is considered
crucial for the results in many cases, such as in the selection of
members of a jury, or of a committee to examine a doctoral candidate,
or of an academic appointment committee.
Habermas's consensus theory of truth is based on the ideal of a
dominance-free dialogue, which is a deliberate abstraction from the
webs of power relationships within real-life discourses, and again in
contrast with Lyotard's postmodern understanding of a scientific conversation as a game of power. More generally, scientists are not purchased to find truth, but to augment power: "The games of scientific
language become the games of the rich, in which whoever is wealthiest
has the best chance of being right. An equation between wealth,
efficiency, and truth is thus established" (Lyotard, 1984, p. 45).
Pragmatic Validity
Pragmatic validation is verification in the literal sense: "to make
true." To pragmatists, truth is whatever assists us to take actions that
produce the desired results. Knowledge is action rather than observation; the effectiveness of our knowledge beliefs is demonstrated by the
effectiveness of our action. In the pragmatic validation of a knowledge
claim, justification is replaced by application. Marx stated in his
second thesis on Feuerbach that the question of whether human
thought can lead to objective truth is not a theoretical but a practical
one. Man must prove the truth, that is, the reality and power of his
thinking, in practice. And his 11th thesis is more pointed: the philosophers have only interpreted the world differently; what matters is
changing the world.
A pragmatic concept of validity goes farther than communication;
it represents a stronger knowledge claim than an agreement through
a dialogue. Pragmatic validation rests on observations and interpretations, with a commitment to act on the interpretations: "Actions
speak louder than words." With the emphasis on instigating change,
a pragmatic knowledge interest may counteract a tendency of social
constructionism to circle around in endless interpretations and a
plunge of postmodern analyses into infinite deconstructions.
A pragmatic knowledge interest in helping people change is
intrinsic to the therapeutic interview, where communication of interpretations serves to instigate changes in the patient. For naturalistic
inquiry, Lincoln and Guba (1985) have gone farther than consensual
validation and pointed to action-oriented quality criteria for qualitative research, such as an inquiry enhancing the level of understanding of the participants and their ability to take action, empowering
them to take increased control of their lives. Action research goes from
descriptions of social conditions to actions that can change the very
visited. The coresearchers first developed knowledge through discussions among themselves, by role playing, and thereafter by raising their
concerns directly with their client families. Reason discusses the
validity in this cooperative inquiry, and emphasizes the need to get
beyond a mere consensus collusion where the researchers might band
together as a group in defense of their anxieties, which may be
overcome by a continual interaction between action and reflection
throughout the participatory inquiry.
How. The forms of pragmatic validation vary: There can be a
patient's reactions to the psychoanalyst's interpretation of his or her
Who. The question of "who" involves the researcher and the users
of the knowledge produced. Patton (1980) emphasizes the credibility
of the researcher as an important criterion of whether a research
report is accepted or not as a basis for action. The question of "who"
also involves ethical and political issues. Who is to decide the direction
of change? There may be personal resistance to change in a therapy
as well as conflicting vested interests in the outcome of an action study.
Thus, regarding audience validation in system evaluation, who are the
stakeholders that will be included in the decisive audience: the funding
agency, the leaders of the system evaluated, the employees, or the
clients of the system?
Power and Truth. Pragmatic validation raises the issue of power and
truth in social research: Where is the power to decide what the desired
results of a study will be, or the direction of change; what values are
to constitute the basis for action? And, more generally, where is the
power to decide what kinds of truth seeking are to be pursued, what
research questions are worth funding? Following Foucault we should
here beware of localizing power to specific persons and their intentions, and instead analyze the netlike organization and multiple fields
of power-knowledge dynamics.
Validity of the Validity Question
I have argued here for integrating validation into the craftsmanship
of research, and for extending the concept of validation from observation to also include communication about, and pragmatic effects of,
knowledge claims. The understanding of validity as craftsmanship, as
communication and action, does not replace the importance of precise
observations and logical argumentation, but includes broader conceptions of the nature of truth in social research. The conversational and
pragmatic aspects of knowledge have within a positivist tradition been
regarded as irrelevant, or secondary, to obtaining objective observations; in a postmodern conception of knowledge the very conversation
about, and the application of, knowledge become essential aspects of
the construction of a social world. Rather than providing fixed criteria, communicative and pragmatic validation refer to extended ways
of posing the question of validity in social research.
and good. Appeals to external certification, or official validity stamps of approval, then become secondary.
Valid research would in a sense be research that makes questions of
validity superfluous.
T76991048
- supporting evidence (Yin, 1994)
- argument (Yin, 1994)
- precision of description (Kennedy, 1979)
- longitudinal information (Kennedy, 1979)
- multidisciplinary assessment
3. Issues of verification do not belong to some separate stages of an
investigation, but should be addressed throughout the entire research process.
- Interviewer reliability? (leading questions)
- Intersubjective reliability?
Whether the percentages were reported for the intersubjective agreement between
two coders for the same interviews.
4. Validity as Quality of Craftsmanship
Miles and Huberman (1994) outline in detail tactics for testing and confirming
qualitative findings, including:
- checking for representativeness and researcher effects
- triangulation
- weighing the evidence
- meaning of outliers
- extreme cases
- following up on surprises
- looking for negative evidence
- if-then tests
- ruling out spurious relations
- replicating a finding
- rival interpretations
A common critique of research interviews is that their findings are not valid
because the subjects' reports may be false. Different questions posed to interview
texts lead to different answers. The forms of validation differ for the different questions
posed to the interview texts. For example, a critical follow-up to a statistical analysis and the
coherence of the interpretations,
Richardson (1994) proposed the concept of (triangulation)
5. Communicative Validity
Communicative validity involves testing the validity of knowledge claims in a
dialogue. What is a valid observation is decided through the argumentation of the
participants in a discourse.
Critique criteria
House (1980) stated that in system evaluation, research does not mainly concern
predicting events, but rather whether the audience of a report can see new
relations and answer new but relevant questions.
Valid knowledge claims are established in a discourse through which the results
of a study come to be viewed as sufficiently trustworthy for other
investigators to rely upon in their own work.
We might also reverse the direction in research and have the subjects listen to
and comment on researchers' conversations about their interviews.
6. Pragmatic Validity
The effectiveness of our knowledge beliefs is demonstrated by the effectiveness
of our action. In pragmatic validity, justification is replaced by application.
Pragmatic validation rests on observations and interpretations, with a commitment to
act on the interpretations.
Lincoln and Guba (1985) have gone farther than consensual validation and
pointed to action-oriented quality criteria for qualitative research, such as
an inquiry enhancing the level of understanding of the participants and their
ability to take action, empowering them to take increased control of their lives.
Who. The question of "who" involves the researcher and the users of the
knowledge produced. Patton (1980) emphasizes the credibility of the researcher
as an important criterion of whether a research report is accepted or not as a basis
for action.
Power and Truth. Where is the power to decide what the desired results of a
study will be, or the direction of change; what values are to constitute the basis for
action?
Daniel T. L. Shek
X. Y. Han
Vera M. Y. Tang
The materials in this paper have been included in the following publications:
Shek, D. T. L., Tang, M. Y., & Han, X. Y. (2005). Evaluation of Evaluation Studies Using
Qualitative Research Methods in the Social Work Literature (1990-2003): Evidence That
Constitutes a Wake-Up Call. Research on Social Work Practice, 15, 180-194.
[The Chinese text of this section was lost in extraction; only scattered English fragments and citations survive. They indicate a review of evaluation studies using qualitative methods in the social work literature, 1990-2003, applying 12 evaluative criteria to the reviewed studies, and a discussion of standards for qualitative research, including: internal and external reliability and internal and external validity (LeCompte & Goetz, 1982); triangulation (Tschudi, 1989); credibility, fittingness, auditability, and confirmability (Guba & Lincoln, 1981; Sandelowski, 1986); intra-rater and inter-rater reliability; peer checking and member checking (Rizzo, Corsaro, & Bates, 1992); audit trails (Huberman & Miles, 1994); and attention to negative cases. Further fragments concern the cautious and critical use of qualitative findings, naive realism, and debates over truth and relevance as criteria (Anastas, 2004; Drisko, 1997; Gambrill, 1999, 2004; Gomory, 2000a, 2000b; Hammersley, 1990, 1992; Patton, 1990; Smith & Deemer, 2000).]
References
Anastas, J.W. (2004). Quality in qualitative evaluation: Issues and possible answers. Research
on Social Work Practice, 14(1), 57-65.
Atherton, C. R., & Bolland, K. A. (2002). Postmodernism: A dangerous illusion for social
work. International Social Work, 45, 421-433.
Batchelor, J., Gould, N., & Wright, J. (1999). Family centers: A focus for the children in need
debate. Child and Family Social Work, 4, 197-208.
Bloland, H. G. (1995). Postmodernism and higher education. Journal of Higher Education,
66(5), 521-559.
Bronstein, L. R., & Kelly, T. B. (1998). A multidimensional approach to evaluating
school-linked services: A school of social work and county public school partnership.
Social Work in Education, 20(3), 152-164.
Brun, C. (1997). The process and implications of doing qualitative research: An analysis of
54 doctoral dissertations. Journal of Sociology and Social Welfare, 24(4), 95-112.
Bryman, A. (1988). Quantity and quality in social research. London: Unwin Hyman.
Cigno, K., & Gore, J. (1999). A seamless service: Meeting the needs of children with
disabilities through a multi-agency approach. Child and Family Social Work, 4,
325-335.
Davis, D., Ray, J., & Sayles, C. (1995). Ropes Course Training for youth in a rural setting:
At first I thought it was going to be boring. Child and Adolescent Social Work
Journal, 12(6), 445-463.
De Anda, D. (2001). A qualitative evaluation of a mentor program for at-risk youth: The
participants' perspective. Child and Adolescent Social Work Journal, 18(2), 97-117.
Dellgran, S., & Hojer, P. (2001). Mainstream is contextual: Swedish social work research
dissertations and theses. Social Work Research, 25(4), 243-252.
Denzin, N. K., & Lincoln, Y. S. (1998). The landscape of qualitative research. Thousand Oaks,
CA: Sage.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2000). Handbook of qualitative research. Thousand
Oaks, Calif.: Sage.
Derezotes, D. (1995). Evaluation of the Late Nite Basketball Project. Child and Adolescent
Social Work Journal, 12(1), 33-50.
Drisko, J. W. (1997). Strengthening qualitative studies and reports: Standards to promote
academic integrity. Journal of Social Work Education, 33(1), 185-197.
Düvell, F., & Jordan, B. (2001). How low can you go? Dilemmas of social work with
asylum seekers in London. Journal of Social Work Research and Evaluation, 2,
189-205.
Elks, M. A., & Kirkhart, K. E. (1993). Evaluating effectiveness from the practitioner
perspective. Social Work, 38(5), 554-563.
Erera, P. I. (1997). Empathy training for helping professionals: model and evaluation. Journal
of Social Work Education, 33(2), 245-260.
Gambrill, E. (2004). The future of evidence-based social work practice. In Thyer, B. and Kazi,
M. A. F. (Eds.), International Perspectives on Evidence-based Practice in Social Work
(pp. 215-234). London: Venture Press.
Gambrill, E. (1999). Evidence-based practice: An alternative to authority-based practice.
Families in Society, 80, 341-350.
Gardner, F. (2003). Critical reflection in community-based evaluation. Qualitative Social
Work, 2(2), 197-212.
Gellis, Z. D. (2001). Using a participatory research approach to mobilize immigrant minority
family caregivers. Journal of Social Work Research, 2, 267-282.
Goicoechea-Balbona, A., Barnaby, C., Ellis, I., & Foxworth, V. (2000). AIDS: the
development of a gender appropriate research intervention. Social Work in Health Care,
30(3), 19-37.
Gomory, T. (2000a). A fallibilistic response to Thyer's theory of theory-free empirical
research in social work practice. Journal of Social Work Education, 37, 26-50.
Gomory, T. (2000b). Critical realism (Gomory's blurry theory) or positivism (Thyer's
theoretical myopia): Which is the prescription for social work research? Journal of
Social Work Education, 37, 67-78.
Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation. San Francisco: Jossey-Bass.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In Denzin,
N. K. and Lincoln, Y. S. (Eds.), Handbook of qualitative research (pp. 105-117).
Thousand Oaks, CA: Sage.
Hammersley, M. (1990). Reading ethnographic research: A critical guide. London: Longman.
Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, Calif.:
Sage.
Patton, M. Q. (2002). Two decades of developments in qualitative inquiry. Qualitative Social
Work, 1(3), 261-283.
Platt, D. (2001). Refocusing children's services: Evaluation of an initial assessment process.
Child and Family Social Work, 6, 139-148.
Rehr, H., & Epstein, I. (1993). Evaluating the Mount Sinai Leadership Enhancement Program:
a developmental perspective. Social Work in Health Care, 18, 79-99.
Ringma, C., & Brown, C. (1991). Hermeneutics and the social sciences: An evaluation of the
function of hermeneutics in a consumer disability study. Journal of Sociology and
Social Welfare, 18, 57-73.
Rivard, J. C., Johnsen, M. C., Morrissey, J. P., & Starrett, B. E. (1999). The dynamics of
interagency collaboration: how linkages develop for child welfare and juvenile justice
sectors in a system of care demonstration. Journal of Social Service Research, 25,
61-82.
Rizzo, T. A., Corsaro, W. A., & Bates, J. E. (1992). Ethnographic methods and interpretive
analysis: Expanding the methodological options of psychologists. Developmental
Review, 12, 101-123.
Roberts, C.A. (1989). Research methods taught and utilized in social work. Journal of Social
Service Research, 13(1), 65-86.
Rosenau, P. M. (1992). Post-modernism and the social sciences insights, inroads and
intrusions. Princeton, NJ: Princeton University Press.
Rubin, A. (2000). Editorial: Social work research at the turn of the millennium: Progress and
challenges. Research on Social Work Practice, 10, 9-14.
Ryan, K. (1998). Advantages and challenges of using inclusive evaluation approaches in
evaluation practice. American Journal of Evaluation, 19(1), 101-122.
Salcido, R. M., & Cota, V. (1995). Cross-cultural training for child welfare workers when
working with Mexican-American clients. Journal of Continuing Social Work Education,
6, 39-46.
Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in Nursing
Science, 8(3), 27-37.
Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2(1), 58-72.
Smith, J. K., & Deemer, D. K. (2000). The problem of criteria in the age of relativism. In
Denzin, N. K. and Lincoln, Y. S. (Eds.), Handbook of qualitative research (pp. 877-896).
Thousand Oaks, CA: Sage.
Sokal, A., & Bricmont, J. (1998). Fashionable nonsense: Postmodern intellectuals' abuse of
science. New York: Picador USA.
Steier, F. (Ed.). (1991). Research and reflexivity. London: Sage.
Stout, K. D. (1991). A continuum of male controls and violence against women: A teaching
model. Journal of Social Work Education, 27, 305-319.
Sun, A. (1998). Organizing an international continuing education program: A preliminary
study based on participants' perspectives. Journal of Continuing Social Work Education,
6(4), 23-29.
Thyer, B.A. (1989). First principles of practice research. British Journal of Social Work, 19,
309-323.
Tschudi, F. (1989). Do qualitative and quantitative methods require different approaches to
validity? In Kvale, S. (Ed.), Issues of validity in qualitative research (pp. 109-134). Lund,
Sweden: Studentlitteratur.
Vera, M. I. (1990). Effects of divorce groups on individual adjustment: A multiple
methodology approach. Social Work Research and Abstracts, 26, 11-20.
Walsh-Bowers, R., & Basso, R. (1999). Improving early adolescents' peer relations through
classroom creative drama: An integrated approach. Social Work in Education, 21, 23-32.
Whipple, E. E., Grettenberger, S. E., & Flynn, M. S. (1997). Doctoral research education in a
social context: Development and implementation of a model outreach course. Journal of
Teaching in Social Work, 14, 3-26.
[Tables 1 and 2 of the review summary were originally in Chinese; only fragments survive extraction.]

Table 1 profiled the 28 qualitative evaluation studies reviewed; the columns describing each study's sample, method, and findings are lost. The study authors and years that survive: Vera (1990); Ringma & Brown (1991); Elks & Kirkhart (1993); Rehr & Epstein (1993); Davis, Ray & Sayles (1995); Derezotes (1995); Salcido & Cota (1995); Erera (1997); Whipple, Grettenberger & Flynn (1997); Bronstein & Kelly (1998); Mowbray & Bybee (1998); Sun (1998); Batchelor, Gould & Wright (1999); Cigno & Gore (1999); Ligon, Markward & Yegidis (1999); Rivard, Johnsen, Morrissey & Starrett (1999); Walsh-Bowers & Basso (1999); Goicoechea-Balbona, Barnaby, Ellis & Foxworth (2000); Netting & Williams (2000); De Anda (2001); Düvell & Jordan (2001); Gellis (2001); Platt (2001); Hill, Dillane, Bannister & Scott (2002); Gardner (2003).

Table 2 rated each study against the 12 evaluation criteria (including an auditability/audit trail item) and reported inter-rater agreement for each criterion; the surviving agreement values range from 0.82 to 1.00, with most at 0.96 or 1.00 (e.g. 0.96, 1.00, 0.96, 0.96, 0.96, 0.82; 0.96, 0.96, 1.00, 1.00, 0.96, 1.00, 0.96, 0.96, 0.86, 0.96).
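The per-criterion agreement values above (ranging from 0.82 to 1.00) are inter-rater reliability figures: the proportion of studies on which two raters made the same yes/no judgment for a given criterion. As a minimal sketch of that computation; the ratings below are invented for illustration and are not the review's data:

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of cases on which two raters give the same rating."""
    if not rater_a or len(rater_a) != len(rater_b):
        raise ValueError("need two non-empty rating lists of equal length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical yes (1) / no (0) judgments of 28 studies on one criterion.
rater_1 = [1] * 20 + [0] * 8
rater_2 = [1] * 19 + [0] * 9   # disagrees with rater_1 on a single study
print(round(percent_agreement(rater_1, rater_2), 2))  # -> 0.96
```

Raw agreement is inflated when one rating dominates, which is why reviews often also report a chance-corrected statistic such as Cohen's kappa alongside it.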
[A course handout (dated 2011/05/11) repeats the same material in Chinese; the text is lost. Its surviving fragments restate the review's scope (1990-2003), the positivistic-paradigm criteria (LeCompte & Goetz, 1982: internal and external reliability, internal and external validity; triangulation, Tschudi, 1989), the constructivist-paradigm criteria (Guba & Lincoln, 1981; Sandelowski, 1986: credibility, fittingness, auditability, confirmability), and the standards of Huberman & Miles (1994), Drisko (1997), and Anastas (2004), including intra-rater and inter-rater reliability, peer checking, member checking, the audit trail, and negative cases. In addition, it cites Patton (2002) and names two further paradigms: the artistic and evocative paradigm and the critical paradigm.]
Research Article
Abstract
The grounded theory approach has been used in nursing research since 1970. The latest
methodological books describe the research process in detail. However, there are many problems
involved in the grounded theory approach, which especially need to be considered by a novice
researcher. One of these problems is the question of how deeply and widely the researcher
should familiarize her- or himself with the research topic before the empirical study. The
problems also include the need to focus the research problem and to choose the sampling
method. Data analysis is a multistage process, which demands from the researcher both
sensitivity and time to work out the findings which emerge from the data. In this paper, the
grounded theory approach is described as a process undertaken by the novice researcher.
The purpose of this paper is to discuss the challenges of the grounded theory approach and the
problems encountered by a researcher using the method for the first time.
Key words
grounded theory approach, inductive research, method of nursing research, research method.
INTRODUCTION
The grounded theory approach is both a way to do
qualitative research and a way to create inductive
theory. The approach was developed by the sociologists Glaser and Strauss in the USA. Their first
book, The Discovery of Grounded Theory, was published in
1967. Awareness of the fact that the grounded theory
approach comes from sociology helps one to understand better some of its steps and processes. The
grounded theory approach has traditionally been
part of the postpositivist inquiry paradigm, but
it approaches the constructivist inquiry paradigm
(Annells, 1996). The approach has been further
developed by Glaser (1978), Chenitz and Swanson
(1986) and Strauss and Corbin (1990). Using the
grounded theory approach, it is possible to study
the meanings of events for people. This is based on
Data collection
The data are generally collected by using interviews,
observations, diaries or other written documents
or a combination of some methods (Glaser, 1978;
Chenitz & Swanson, 1986; Strauss & Corbin, 1990).
The grounded theory literature emphasizes (Glaser,
1978; Chenitz & Swanson, 1986; Strauss & Corbin,
1990) the need to combine many data collection
methods. In nursing studies, however, mainly only
interview data are used. In that case, the research
does not necessarily account for the social structural
influences on the experiences of the respondents
(Benoliel, 1996).
Theoretical sampling refers to a data collection
process which aims to create theory, which means that
coding and analysing serve as data collection in the
next phase. When theoretical sampling is used, data
Data analysis
Data analysis is like a discussion between the actual
data, the created theory, the memos and the researcher. Such discussion takes place when the data
are broken down, conceptualized and put back
together in new ways. The data give rise to the codes
and the categories which combine the codes. The
categories and hypotheses must be verified against
the data by comparing the categories with each
other, with the data and with the researcher's conclusions (Glaser & Strauss, 1967; Glaser, 1978;
Chenitz & Swanson, 1986; Strauss & Corbin, 1990).
If the researcher has preliminary assumptions or
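The analysis cycle described above (data broken down into codes, codes grouped into categories, categories compared back against the data) can be pictured with a toy data structure. The interview fragments, code names, and categories below are invented purely for illustration:

```python
from collections import defaultdict

# Invented interview fragments, each tagged with an open code.
coded_segments = [
    ("I test my blood sugar before every meal", "self-monitoring"),
    ("my daughter reminds me to take the tablets", "family support"),
    ("I write down what I eat", "self-monitoring"),
    ("the diabetes nurse phones me every week", "professional support"),
]

# The interpretive step: group related codes under broader categories.
category_of = {
    "self-monitoring": "managing the illness alone",
    "family support": "relying on others",
    "professional support": "relying on others",
}

categories = defaultdict(list)
for segment, code in coded_segments:
    categories[category_of[code]].append(segment)

# Constant comparison: each category is checked against the data that
# grounds it -- here simply by re-listing its supporting segments.
for category, segments in sorted(categories.items()):
    print(f"{category}: grounded in {len(segments)} segment(s)")
```

In a real analysis the grouping is iterative and revisable rather than a fixed lookup table; the sketch only shows the shape of the code-to-category relation.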
DISCUSSION
There are many methodological books and articles
concerning the grounded theory approach in nursing
science, but also in other sciences (Glaser & Strauss,
1967; Glaser, 1978; Chenitz & Swanson, 1986;
Charmaz, 1990; Strauss & Corbin, 1990; Anderson,
1991; Hutchinson, 1993; Keddy et al., 1996; Wilson
& Hutchinson, 1996). The essential point for a
novice researcher is whether he/she is committed to
certain methodological books and their instructions
or tries to apply several different instructions and
views. There are different ways of discovering the
grounded theory, and this may confuse a novice
researcher during the complicated and difficult research process.
The use of the grounded theory approach requires
a novice to commit him/herself to a time-consuming
and long process. To discover the grounded theory
assumes a dialogue between the data and the theory.
For a novice researcher, applying the grounded
theory approach is more or less a compromise
between the demands of the approach and the
resources which he/she has available.
The literature does not contain many descriptions
of the grounded theory approach as a researcher's
process. It often surprises the researcher with its
challenges. During the past few years the different
ways of using the grounded theory approach have
also been discussed (Annells, 1996; Melia, 1996).
In their book titled Basics of Qualitative Research,
Strauss and Corbin (1990) describe systematically
how to analyse the data. The method can be a good
tool for a novice, but it may also hinder the way to
create inductive theory. Burns (1989) considers the
basis of evaluating qualitative research and emphasizes both flexibility and accuracy. Creating theory
from the data while simultaneously observing the
method's instructions is probably the biggest
challenge of the grounded theory approach.
REFERENCES
Anderson MA. Use of grounded theory methodology in a descriptive research design. ABNF J. 1991; 2: 28-32.
Annells M. Grounded theory method: philosophical perspectives, paradigm of inquiry and postmodernism. Qualitative Hlth Res. 1996; 6: 379-393.
Backman K, Hentinen M. Model for the self-care of home-dwelling elderly. J. Advanced Nurs. 1999; 30 (in press).
Bailey P. Finding your way around qualitative methods in nursing research. J. Advanced Nurs. 1997; 25: 18-22.
Baker C, Wuest J, Stern N. Method slurring: the grounded theory/phenomenology example. J. Advanced Nurs. 1992; 17: 1355-1360.
Benoliel JQ. Grounded theory and nursing knowledge. Qualitative Hlth Res. 1996; 6: 406-428.
Brandriet LM. Gerontological nursing: application of ethnography and grounded theory. J. Gerontol. Nurs. 1994; 20: 33-40.
Burns N. Standards for qualitative research. Nurs. Sci. Q. 1989; 2: 44-52.
Charmaz K. Discovering chronic illness: using grounded theory. Social Sci. Med. 1990; 30: 1161-1172.
Chen YL. Conformity with nature: a theory of Chinese American elders' health promotion and illness prevention processes. Adv. Nurs. Sci. 1996; 19: 17-26.
Chenitz WC, Swanson JM. From Practice to Grounded Theory: Qualitative Research in Nursing. Massachusetts: Addison-Wesley, 1986.
Duffy M. Transcending options: creating a milieu for practising high level wellness. PhD Thesis. Salt Lake City: The University of Utah, 1983 (unpubl.).
Frenn M. Older adults' experience of health promotion: a theory for nursing practice. Public Hlth Nurs. 1996; 13: 65-71.
Glaser BG. Theoretical Sensitivity. San Francisco, USA: The Sociology Press, 1978.
Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine, 1967.
Hutchinson SA. Qualitative approaches in nursing research. Grounded theory: the method. NLN Publications 1993; 19-2535: 180-212.
Janhonen S. The Core of Nursing as Seen by Nurse Teachers in Finland, Norway and Sweden. Acta Universitatis Ouluensis, Series D, Medica 245. Oulu: University of Oulu Printing Center, 1992.
Keddy B, Sims SL, Stern PN. Grounded theory as a feminist research methodology. J. Advanced Nurs. 1996; 23: 448-453.
Kyngäs H, Hentinen M. Meaning attached to compliance with self-care, and conditions for compliance among young diabetics. J. Advanced Nurs. 1995; 21: 729-736.
Lancaster W, Lancaster J. Models and model building in nursing. Adv. Nurs. Sci. 1981; 3: 31-42.
Mackie JL. Causes and conditions. Am. Philosoph. Quart. 1965; 2: 245-264.
May K. Management of detachment and involvement in pregnancy by first-time expectant fathers. PhD Thesis. San Francisco: The University of California, 1979.
Meleis AF. Theoretical Nursing: Development and Progress. Philadelphia: J.B. Lippincott Company, 1991.
Melia KM. Rediscovering Glaser. Qualitative Hlth Res. 1996; 6: 368-377.
Munhall PL, Oiler C. Nursing Research: A Qualitative Perspective. New York, USA: National League for Nursing Press, 1993.
[Course notes (dated 11 May 2011), originally in Chinese, on Backman, K., & Kyngäs, H. A. (1999). Challenges of the grounded theory approach to a novice researcher. Nursing and Health Sciences, 1, 147-153. The Chinese text was lost in extraction; only key terms survive: theoretical sensitivity; a priori assumptions; theoretical sampling versus selective sampling; Glaser's (1978) image of analysis as a "drugless trip" and the notion of saturation; axial coding; theoretical memos; coding families (e.g. the typology family, and the use of a strategy family while coding); Glaser's (1978, 1998) criteria of fit, relevance, and work; and continuous comparative analysis. Also cited: http://lonelydissertator.blogspot.com/2009/07/classic-grounded-theory-community-and.html]
International Journal for Quality in Health Care; Volume 19, Number 6: pp. 349-357
Advance Access Publication: 14 September 2007
10.1093/intqhc/mzm042
Abstract
Background. Qualitative research explores complex phenomena encountered by clinicians, health care providers, policy
makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of
qualitative design.
Objective. To develop a checklist for explicit and comprehensive reporting of qualitative studies (in-depth interviews and
focus groups).
Results. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data
collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of
supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data
analysis and reporting.
Conclusions. The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the
research team, study methods, context of the study, findings, analysis and interpretations.
Keywords: focus groups, interviews, qualitative research, research design
randomized controlled trials [5]. Systematic reviews of qualitative research almost always show that key aspects of study
design are not reported, and so there is a clear need for a
CONSORT-equivalent for qualitative research [6].
The Uniform Requirements for Manuscripts Submitted to
Biomedical Journals published by the International Committee
of Medical Journal Editors (ICMJE) do not provide reporting
guidelines for qualitative studies. Of all the mainstream biomedical journals (Fig. 1), only the British Medical Journal (BMJ)
has criteria for reviewing qualitative research. However, the
guidelines for authors specifically record that the checklist is
not routinely used. In addition, the checklist is not comprehensive and does not provide specific guidance to assess some
of the criteria. Although checklists for critical appraisal are
available for qualitative research, there is no widely endorsed
reporting framework for any type of qualitative research [7].
We have developed a formal reporting checklist for
in-depth interviews and focus groups, the most common
methods for data collection in qualitative health research.
Address reprint requests to: Allison Tong, Centre for Kidney Research, The Children's Hospital at Westmead, NSW 2145,
Australia. Tel: 61-2-9845-1482; Fax: 61-2-9845-1491; E-mail: allisont@health.usyd.edu.au, allisont@chw.edu.au
International Journal for Quality in Health Care vol. 19 no. 6
© The Author 2007. Published by Oxford University Press on behalf of International Society for Quality in Health Care; all rights reserved
Methods. We performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews
of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for
existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive
list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and
reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed.
A. Tong et al.
Figure 1 Development of the COREQ checklist. *References [26, 27], References [6, 28-32], author and reviewer
guidelines provided by BMJ, JAMA, Lancet, Annals of Internal Medicine, NEJM.
These two methods are particularly useful for eliciting
patient and consumer priorities and needs to improve the
quality of health care [8]. The checklist aims to promote
complete and transparent reporting among researchers and
indirectly improve the rigor, comprehensiveness and credibility of interview and focus-group studies.
Basic definitions
Qualitative studies use non-quantitative methods to contribute new knowledge and to provide new perspectives in
health care. Although qualitative research encompasses a
broad range of study methods, most qualitative research
Methods
Development of a checklist
Search strategy. We performed a comprehensive search for
published checklists used to assess or review qualitative
studies, and guidelines for reporting qualitative studies in:
Medline (1966 to Week 1, April 2006), CINAHL (1982
to Week 3, April 2006), Cochrane and Campbell protocols,
systematic reviews of qualitative studies, author or reviewer
guidelines of major medical journals and reference lists of
relevant publications. We identified the terms used to index
the relevant articles already in our possession and performed
a broad search using those search terms. The electronic
databases were searched using terms and text words for
research (standards), health services research (standards) and
qualitative studies (evaluation). Duplicate checklists and
detailed instructions for conducting and analysing qualitative
studies were excluded.
Data extraction. From each of the included publications, we
extracted all criteria for assessing or reporting qualitative
studies. Seventy-six items from 22 checklists were compiled
into a comprehensive list. We recorded the frequency of each
item across all the publications. Items most frequently
included in the checklists related to sampling method, setting
for data collection, method of data collection, respondent
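Mechanically, the extraction step described in this section (compiling 76 items from 22 checklists and recording each item's frequency across publications) is a frequency count over normalised item labels. A sketch with invented checklist contents:

```python
from collections import Counter

# Three hypothetical published checklists (contents invented for illustration).
checklists = [
    ["sampling method", "setting for data collection", "method of data collection"],
    ["sampling method", "respondent validation of findings", "supporting quotations"],
    ["sampling method", "method of data collection", "derivation of themes"],
]

# Count how many checklists mention each item.
frequency = Counter(item for checklist in checklists for item in checklist)

for item, count in frequency.most_common():
    print(f"{item}: appears in {count} checklist(s)")
```

In practice the hard part is the normalisation: the same criterion is worded differently across checklists, so items must be matched by meaning before they can be counted.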
Table 1 Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist. Columns: No.; Item; Guide questions/description. [The table body did not survive extraction.]

Table 2 Items included in 22 published checklists: research team and reflexivity domain. [Only the column headings survive: references [26], [27], [6], [28], [32], [13], [15], [14], [17], [33], [34], [35], [16], [19], [36], [7], [37], [23], [38], [39], [22], and the BMJ editors' checklist for appraising qualitative research; a = other publications, b = systematic reviews of qualitative studies. The item rows were lost.]

[A further table, whose caption did not survive extraction, lists the items of the study design domain:]
Methodological orientation, ontological or epistemological basis
Sampling: convenience, purposive
Setting
Characteristics and description of sample
Reasons for participant selection
Non-participation
Inclusion and exclusion criteria
Identity of the person responsible for recruitment
Sample size
Method of approach
Description of explanation of research to participants
Level and type of participation
Method of data collection, e.g. focus group, in-depth interview
Audio and visual recording
Transcripts
Setting and location
Saturation of data
Use of a topic guide, tools, questions
Field notes
Changes and modifications
Duration of interview, focus group
Sensitive to participant language and views
Number of interviews, focus groups
Time span
Time and resources available to the study

[A further table, whose caption did not survive extraction, lists the items of the data analysis and reporting domain:]
Respondent validation
Limitations and generalizability
Triangulation
Original data, quotation
Derivation of themes explicit
Contradictory, diverse, negative cases
Number of data analysts
In-depth description of analysis
Sufficient supporting data presented
Data, interpretation and conclusions linked and integrated
Retain context of data
Explicit findings, presented clearly
Outside checks
Software used
Discussion both for and against the researcher's arguments
Development of theories, explanations
Numerical data
Coding tree or coding system
Inter-observer reliability
Sufficient insight into meaning/perceptions of participants
Reasons for selection of data to support findings
New insight
Results interpreted in credible, innovative way
Eliminate other theories
Range of views
Distinguish between researcher and participant voices
Proportion of data taken into account
Discussion
The COREQ checklist was developed to promote explicit
and comprehensive reporting of qualitative studies (interviews and focus groups). The checklist consists of items
specific to reporting qualitative studies and precludes generic
criteria that are applicable to all types of research reports.
COREQ is a comprehensive checklist that covers necessary
components of study design, which should be reported. The
criteria included in the checklist can help researchers to
report important aspects of the research team, study
methods, context of the study, findings, analysis and
interpretations.
At present, we acknowledge there is no empiric basis that
shows that the introduction of COREQ will improve the
quality of reporting of qualitative research. However this is
no different than when CONSORT, QUOROM and other
reporting checklists were introduced. Subsequent research
has shown that these checklists have improved the quality of
reporting of study types relevant to each checklist [5, 25],
and we believe that the effect of COREQ is likely to be
similar. Despite differences in the objectives and methods of
quantitative and qualitative methods, the underlying aim of
transparency in research methods and, at the least, the theoretical possibility of the reader being able to duplicate the
study methods should be the aims of both methodological
approaches. There is a perception among research funding
agencies, clinicians and policy makers, that qualitative
research is second class research. Initiatives like COREQ
are designed to encourage improvement in the quality of
reporting of qualitative studies, which will indirectly lead to
improved conduct, and greater recognition of qualitative
research as an inherently equal scientific endeavor compared
with quantitative research that is used to assess the quality
and safety of health care. We invite readers to comment on
COREQ to improve the checklist.
References
1. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA 2001;285:1987-91.
2. Moher D, Cook DJ, Eastwood S et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896-900.
4. Stroup DF, Berlin JA, Morton SC et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008-12.
25. Delaney A, Bagshaw SM, Ferland A et al. A systematic evaluation of the quality of meta-analyses in the critical care literature. Crit Care 2005;9:575-82.
Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care 2007;19(6):349-357.
The COREQ checklist (32 items in 3 domains):
Domain 1: Research team and reflexivity
Personal characteristics
1. Interviewer/facilitator
2. Credentials (eg, PhD, MD)
3. Occupation
4. Gender
5. Experience and training
Relationship with participants
6. Relationship established
7. Participant knowledge of the interviewer
8. Interviewer characteristics
Domain 2: Study design
Theoretical framework
9. Methodological orientation and theory
Participant selection
10. Sampling
11. Method of approach
12. Sample size
13. Non-participation
Setting
14. Setting of data collection
15. Presence of non-participants
16. Description of sample
Data collection
17. Interview guide
18. Repeat interviews
19. Audio/visual recording
20. Field notes
21. Duration
22. Data saturation
23. Transcripts returned
Domain 3: Analysis and findings
Data analysis
24. Number of data coders
25. Description of the coding tree
26. Derivation of themes
27. Software
28. Participant checking
Reporting
29. Quotations presented
30. Data and findings consistent
31. Clarity of major themes
32. Clarity of minor themes
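COREQ's fixed structure (32 items across 3 domains) lends itself to a simple completeness tally when auditing how much of the checklist a manuscript reports. The sketch below is illustrative only: the item-number ranges follow the published checklist, but the `completeness` helper and the example set of reported items are hypothetical.

```python
# Illustrative sketch: tally how completely a manuscript covers COREQ.
# The domain/item ranges follow the published 32-item checklist; the
# `completeness` helper and the example `reported` set are hypothetical.

COREQ_DOMAINS = {
    "Research team and reflexivity": range(1, 9),   # items 1-8
    "Study design": range(9, 24),                   # items 9-23
    "Analysis and findings": range(24, 33),         # items 24-32
}

def completeness(reported: set) -> dict:
    """Return a 'reported/total' tally per COREQ domain."""
    return {
        domain: f"{sum(1 for i in items if i in reported)}/{len(items)}"
        for domain, items in COREQ_DOMAINS.items()
    }

print(completeness({1, 2, 9, 10, 17, 22, 24, 29}))
```

Such a tally only counts which items are addressed; it says nothing about how well each item is reported, which is why COREQ is framed as a reporting aid rather than a quality score.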
ABSTRACT
PURPOSE We wanted to review and synthesize published criteria for good qualitative research and develop a cogent set of evaluative criteria.
METHODS We identified published journal articles discussing criteria for rigorous research using standard search strategies, then examined reference sections
of relevant journal articles to identify books and book chapters on this topic. A
cross-publication content analysis allowed us to identify criteria and understand
the beliefs that shape them.
RESULTS Seven criteria for good qualitative research emerged: (1) carrying out
ethical research; (2) importance of the research; (3) clarity and coherence of the
research report; (4) use of appropriate and rigorous methods; (5) importance of
reflexivity or attending to researcher bias; (6) importance of establishing validity
or credibility; and (7) importance of verification or reliability. General agreement
was observed across publications on the first 4 quality dimensions. On the last
3, important divergent perspectives were observed in how these criteria should
be applied to qualitative research, with differences based on the paradigm
embraced by the authors.
CONCLUSION Qualitative research is not a unified field. Most manuscript and
grant reviewers are not qualitative experts and are likely to embrace a generic
set of criteria rather than those relevant to the particular qualitative approach
proposed or reported. Reviewers and researchers need to be aware of this tendency and educate health care researchers about the criteria appropriate for
evaluating qualitative research from within the theoretical and methodological
framework from which it emerges.
Ann Fam Med 2008;6:331-339. DOI: 10.1370/afm.818.
INTRODUCTION
Qualitative Health Research) and books are dedicated to qualitative methods in health care,15-17 and a vast literature
describes basic approaches of qualitative research,18,19
as well as specific information on focus groups,20-23
qualitative content analysis,24 observation and ethnography,25-27 interviewing,28-32 studying stories33,34
and conversation,35-37 doing case study,38,39 and action
research.40,41 Publications describe strategies for sampling,42-45 analyzing, reporting,45-49 and combining qualitative and quantitative methods50; and a growing body
of health care research reports findings from studies
using in-depth interviews,51-54 focus groups,55-57 observation,58-60 and a range of mixed-methods designs.61-63
As part of a project to evaluate health care
improvements, we identified a need to help health care
researchers, particularly those with limited experience in qualitative research, evaluate and understand
qualitative methodologies. Our goals were to review
and synthesize published criteria for good qualitative
research and develop a cogent set of evaluative criteria
that would be helpful to researchers, reviewers, editors,
and funding agencies. In what follows, we identify the
standards of good qualitative research articulated in
the health care literature and describe the lessons we
learned as part of this process.
METHODS
A series of database searches was conducted to
identify published journal articles, books, and book
chapters offering criteria for evaluating and identifying
rigorous qualitative research.
Data Collection and Management
With the assistance of a librarian, a search was conducted in December 2005 using the Institute for Scientific Information (ISI) Web of Science database, which indexes
a wide range of journals and publications from 1980
to the present. Supplemental Appendix 1, available
online-only at http://www.annfammed.org/cgi/
content/full/6/4/331/DC1, describes our search
strategy. This search yielded a preliminary database
of 4,499 publications. Citation information, abstracts,
and the number of times the article was cited by other
authors were exported to a Microsoft Excel file and an
EndNote database.
After manually reviewing the Excel database, we
found and removed a large number of irrelevant publications in the physical and environmental sciences
(eg, forestry, observational studies of crystals), and
further sorted the remaining publications to identify
publications in health care. Among this subset, we read
abstracts and further sorted publications into (1) publications about qualitative methods, and (2) original research
using qualitative methods. For the purposes of this analysis, we reviewed in detail only publications in the first
category. We read each publication in this group and
further subdivided the group into publications that (1)
articulated criteria for evaluating qualitative research, (2)
addressed techniques for doing a particular qualitative
method (eg, interviewing, focus groups), or (3) described
a qualitative research strategy (eg, sampling, analysis).
Subsequent analyses focused on the first category;
however, among publications in the second category, a
number of articles addressed the issue of quality in, for
example, case study,39 interviewing,28 focus groups,22,64,65
discourse,66 and narrative67,68 research, which we excluded
as outside the scope of our analysis.
Books and book chapters could not be searched
in the same way because a database cataloging these
materials did not exist. Additionally, few books on
qualitative methods are written specifically for health
care researchers, so we would not be able to determine
whether a book was or was not contributing to the
discourse in this field. To overcome these challenges,
we used a snowball technique, identifying and examining books and book chapters cited in the journal
articles retrieved. Through this process, a number of
additional relevant journal articles were identified as
frequently cited but published in non-health care or
nonindexed journals (eg, online journals). These articles were included in our analysis.
Analysis
We read journal articles and book chapters and prepared notes recording the evaluative criteria that
author(s) posited and the world view or belief system
in which criteria were embedded, if available. When
criteria were attributed to another work, this information was noted. Books were reviewed and analyzed
differently. We read an introductory chapter or two to
understand the authors' beliefs about research and prepared summary notes. Because most books contained a
section discussing evaluative criteria, we identified and
read this section, and prepared notes in the manner
described above for journal articles and book chapters.
An early observation was that not all publications
offered explicit criteria. Publications offering explicit
evaluative criteria were treated as a group. Publications
by the same author were analyzed and determined to
be sufficiently similar to cluster. We examined evaluative criteria across publications, listing similar criteria in thematic clusters (eg, importance of research,
conducting ethically sound research), identifying the
central principle or theme of the cluster, and reviewing
and refining clusters. Publications that discussed evaluative criteria for qualitative research but did not offer
explicit criteria were analyzed separately.
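The clustering step described above, listing similar criteria under thematic clusters, can be sketched as keyword-based grouping. The keyword map and sample criteria below are hypothetical examples, not the authors' actual coding scheme, which was interpretive rather than mechanical.

```python
# Illustrative sketch of grouping similar evaluative criteria into
# thematic clusters, as described above. The keyword map and sample
# criteria are hypothetical, not the authors' actual coding scheme.

THEME_KEYWORDS = {
    "ethics": ["ethic", "consent"],
    "importance": ["importan", "relevan"],
}

def cluster(criteria: list) -> dict:
    """Assign each criterion to the first theme whose keyword it mentions."""
    clusters = {theme: [] for theme in THEME_KEYWORDS}
    clusters["unclustered"] = []
    for criterion in criteria:
        text = criterion.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(k in text for k in keywords):
                clusters[theme].append(criterion)
                break
        else:
            clusters["unclustered"].append(criterion)
    return clusters
```

The "unclustered" bucket mirrors the authors' iterative refinement: criteria that fit no existing cluster prompt either a new cluster or a revised theme definition.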
[Table: assumptions of the positivist, realist, and interpretivist paradigms]
RESULTS
[Table: definitions of validity techniques - triangulation, peer review/debriefing, external audits/auditing, member checking]
Validity
A number of publications framed the concept of validity in the context
of quantitative research, where it typically refers to the best available
approximation to the truth or falsity of propositions.142(p37) Internal validity
refers to truth about claims made regarding the relationship between 2 variables. External validity refers to
the extent to which we can generalize findings. Across
publications, different ideas emerged.
Understanding the concept of validity requires
understanding beliefs about the nature of reality. One
may believe that there can be multiple ways of understanding social life and reality, even multiple realities.
This view of reality emerges from an interpretivist perspective. Hallmarks of high-quality qualitative research
include producing a rich, substantive account with
strong evidence for inferences and conclusions, and
reporting the lived experiences of those observed
and their perspectives on social reality, while recognizing that these could be multiple and complex and that
the researcher is intertwined in the portrayal of this
experience. The goal is understanding and providing
a meaningful account of the complex perspectives and
realities studied.
In contrast, research may be based on the belief
that there is one reality that can be observed, and this
reality is knowable through the process of research,
albeit sometimes imperfectly. This perspective is
typically associated with the positivist paradigm that
underlies quantitative research, but also with the realist
paradigm found in some qualitative research. Qualitative research based on this view tends to use alternative terms for validity (eg, adequacy, trustworthiness,
accuracy, credibility) and emphasizes striving for truth
through the qualitative research process, for example,
by having outside auditors or research participants validate findings. An important dimension of good qualitative research, therefore, is plausibility and accuracy.
Verification or Reliability
Divergent perspectives were observed on the appropriateness of applying the concept of verifiability or
reliability when evaluating qualitative research. As
seen as nonscientific and lacking rigor.88,125 Their argument is compelling and suggests that reliability and validity
should not be evaluated at the end of the project, but
should be goals that shape the entire research process,
influencing study design, data collection, and analysis
choices. A second approach is to view the criteria of
validity and reliability as inappropriate for qualitative
research, and to argue for the development of alternative
criteria relevant for assessing qualitative research.*
This position is commonly based on the premise
that the theoretical and methodological beliefs informing quantitative research (from whence the criteria
of reliability and validity come) are not the same as
the methodological and theoretical beliefs informing
qualitative research and are, therefore, inappropriate.136
Cogent criteria for evaluating qualitative research
are needed. Without well-defined, agreed-upon, and
appropriate standards, qualitative research risks being
evaluated by quantitative standards, which can lead to
assimilation, preferences for qualitative research that
is most compatible with quantitative standards, and
rejection of more radical methods that do not conform to quantitative criteria.94 From this perspective
emerged a number of alternative criteria for evaluating
qualitative research.
Alternative criteria have been open to criticism.
We observed such criticism in publications challenging the recommendation that qualitative research using
such techniques as member checking, multiple coding,
external audits, and triangulation is more reliable, valid,
and of better quality.72,82,90,91,112,127,143 Authors challenging this recommendation show how techniques such
as member checking can be problematic. For example,
it does not make sense to ask study participants to
check or verify audio-recorded transcribed data. In
other situations, study participants asked to check
or verify data may not recall what they said or did.
Even when study participants recall their responses,
there are a number of factors that may account for
discrepancies between what participants recall and the
researcher's data and preliminary findings. For instance,
the purpose of data analysis is to organize individual
statements into themes that produce new, higher-order
insights. Individual contributions may not be recognizable to participants, and higher-order insights might
not make sense.82 Similar issues have been articulated
about the peer-review and auditing processes127,143 and
some uses of triangulation.130 Thus, alternative criteria
for evaluating qualitative research have been posited
and criticized on the grounds that such criteria (1) cannot be applied in a formulaic manner; (2) do not necessarily lead to higher-quality research, particularly if
these techniques are poorly implemented; and (3) foster the false expectation among evaluators of research
that use of one or more of these techniques in a study
is a mark of higher quality.72,81,90,91,112,123,127
* References 72, 81, 82, 85, 94, 114, 118, 129, 136.
A third approach suggests the search for a cogent
set of evaluative criteria for qualitative research is
misguided. The field of qualitative research is broad
and diverse, not lending itself to evaluation by one
set of criteria. Instead, researchers need to recognize
that each study is unique in its theoretical positioning and
approach, and that different evaluative criteria are needed.
To fully understand the scientific quality of qualitative research sometimes requires a deep understanding
of the theoretical foundation and the science of the
approach. Thus, evaluating the scientific rigor of qualitative research requires learning, understanding, and
using appropriate evaluative criteria.123,124,135,137
DISCUSSION
There are a number of limitations of this analysis to
be acknowledged. First, although we conducted a
comprehensive literature review, it is always possible
for publications to be missed, particularly with our
identification of books and book chapters, which relied
on a snowball technique. In addition, relying on publications and works cited within publications to understand the dialogue about rigor in qualitative methods is
imperfect. Although these discussions manifest in the
literature, they also arise at conferences, grant review
sessions, and hallway conversations. One's views are
open to revision (cf Lincoln's103,144), and relationships
with editors and others shape our ideas and whom we
cite. In this analysis, we cannot begin to understand
these influences.
Our perspectives affect this report. Both authors
received doctoral training in qualitative methods in
social science disciplines (sociology/communication and
anthropology) and have assimilated these values into
health care as reviewers, editors, and active participants
in qualitative health care studies. Our training shapes
our beliefs, so we feel most aligned with interpretivism. This grounding influences how we see qualitative
research, as well as the perspectives and voices we examine in this analysis. We have been exposed to a wide
range of theoretical and methodological approaches for
doing qualitative research, which may make us more
inclined to notice the generic character of evaluative
criteria emerging in the health care community and take
note of the potential costs of this approach.
In addition, we use 3 common paradigms (interpretivism, realism, and positivism) in our analysis. It
is important to understand that paradigms and debates
about paradigms are political and used to argue for
References
6. Meinert CL. Clinical Trials: Design, Conduct and Analysis. New York, NY: Oxford University Press; 1986.
7. Goffman E. Asylums. New York, NY: Doubleday Anchor; 1961.
8. Glaser B, Strauss A. Awareness of Dying. New York, NY: Aldine; 1965.
15. Pope C, Mays N, eds. Qualitative Research in Health Care. 2nd ed. London: BMJ Books; 2000.
16. Crabtree BF, Miller WL, eds. Doing Qualitative Research. Thousand Oaks, CA: Sage Publications; 1992.
17. Crabtree BF, Miller WL, eds. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1999.
18. Holloway I. Basic Concepts for Qualitative Research. Oxford: Blackwell Science; 1997.
19. Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483-488.
20. Asbury J. Overview of focus group research. Qual Health Res. 1995;5(4):414-420.
21. Freeman T. Best practice in focus group research: making sense of different views. J Adv Nurs. 2006;56(5):491-497.
22. Kidd PS, Parshall MB. Getting the focus and the group: enhancing analytical rigor in focus group research. Qual Health Res. 2000;10(3):293-308.
23. Sim J. Collecting and analysing qualitative data: issues raised by the focus group. J Adv Nurs. 1998;28(2):345-352.
24. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105-112.
30. Griffiths P, Gossop M, Powis B, Strang J. Researching hidden populations of drug users by privileged access interviewers: methodological and practical issues. Addiction. 1993;88(12):1617-1626.
34. Kleinman A. The Illness Narratives. New York, NY: Basic Books; 1998.
35. Mishler EG. The Discourse of Medicine: Dialectics of Medical Interviews. Norwood, NJ: Ablex; 1984.
39. Yin RK. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209-1224.
43. Morse J. Strategies for sampling. In: Morse J, ed. Qualitative Nursing Research: A Contemporary Dialogue. Newbury Park, CA: Sage Publications; 1991:127-145.
48. Miller WL, Crabtree BF. Qualitative analysis: how to begin making sense. Fam Pract Res J. 1994;14(3):289-297.
52. England M, Tripp-Reimer T. Imminent concerns of filial caregivers reporting recent experiences of crisis. Int J Aging Hum Dev. 2003;56(1):67-88.
53. Gubrium JF, Rittman MR, Williams C, Young ME, Boylstein CA. Benchmarking as everyday functional assessment in stroke recovery. J Gerontol B Psychol Sci Soc Sci. 2003;58(4):S203-S211.
55. Duncan MT, Morgan DL. Sharing the caring: family caregivers' views of their relationships with nursing home staff. Gerontologist. 1994;34(2):235-244.
57. Bradley EH, McGraw SA, Curry L, et al. Expanding the Andersen model: the role of psychosocial factors in long-term care use. Health Serv Res. 2002;37(5):1221-1242.
58. Jervis LL. The pollution of incontinence and the dirty work of caregiving in a U.S. nursing home. Med Anthropol Q. 2001;15(1):84-99.
60. Ware NC, Lachicotte WS, Kirschner SR, Cortes DE, Good BJ. Clinician experiences of managed mental health care: a rereading of the threat. Med Anthropol Q. 2000;14(1):3-27.
61. Wittink MN, Barg FK, Gallo JJ. Unwritten rules of talking to doctors about depression: integrating qualitative and quantitative methods. Ann Fam Med. 2006;4(4):302-309.
62. Rabago D, Barrett B, Marchand L, Maberry R, Mundt M. Qualitative aspects of nasal irrigation use by patients with chronic sinus disease in a multimethod study. Ann Fam Med. 2006;4(4):295-301.
63. Ahles TA, Wasson JH, Seville JL, et al. A controlled trial of methods for managing pain in primary care patients with or without co-occurring psychosocial problems. Ann Fam Med. 2006;4(4):341-350.
64. Reed J, Payton VR. Focus groups: issues of analysis and interpretation. J Adv Nurs. 1997;26(4):765-771.
67. Mishler E. Validation in inquiry-guided research: the role of exemplars in narrative studies. Harv Educ Rev. 1990;60(4):415-442.
68. Bailey PH. Assuring quality in narrative analysis. West J Nurs Res. 1996;18(2):186-194.
73. Elder NC, Miller WL. Reading and evaluating qualitative research studies. J Fam Pract. 1995;41(3):279-285.
77. Hall JM, Stevens PE. Rigor in feminist research. ANS Adv Nurs Sci. 1991;13(3):16-29.
80. Inui TS, Frankel RM. Evaluating the quality of qualitative research: a proposal pro tem. J Gen Intern Med. 1991;6(5):485-486.
83. Krefting L. Rigor in qualitative research: the assessment of trustworthiness. Am J Occup Ther. 1991;45(3):214-222.
84. Kuzel AJ, Engel JD, Addison RB, Bogdewic SP. Desirable features of qualitative research. Fam Pract Res J. 1994;14(4):369-378.
85. Engel JD, Kuzel AJ. On the idea of what constitutes good qualitative inquiry. Qual Health Res. 1992;2(4):504-510.
86. Maxwell J. Understanding validity in qualitative research. Harv Educ Rev. 1992;62(3):279-300.
88. Morse J, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Intl J Qual Meth. 2002;1(2):1-19.
107. Munhall P. Nursing Research: A Qualitative Perspective. 3rd ed. Boston, MA: Jones and Bartlett Publishers; 2001.
111. Avis M. Valid arguments? A consideration of the concept of validity in establishing the credibility of research findings. J Adv Nurs. 1995;22(6):1203-1209.
114. Beck CT. Qualitative research: the evaluation of its credibility, fittingness, and auditability. West J Nurs Res. 1993;15(2):263-266.
116. Bryman A. Quantity and Quality in Social Research. Vol 18. New York, NY: Routledge; 1988.
118. Devers KJ. How will we know good qualitative research when we see it? Beginning the dialogue in health services research. Health Serv Res. 1999;34(5 Pt 2):1153-1188.
119. Dixon-Woods M, Shaw RL, Agarwal S, Smith JA. The problem of appraising qualitative research. Qual Saf Health Care. 2004;13(3):223-225.
121. Emden C, Sandelowski M. The good, the bad and the relative, part one: conceptions of goodness in qualitative research. Int J Nurs Pract. 1998;4(4):206-212.
122. Emden C, Sandelowski M. The good, the bad and the relative, part two: goodness and the criterion problem in qualitative research. Int J Nurs Pract. 1999;5(1):2-7.
123. Harding G, Gantley M. Qualitative methods: beyond the cookbook. Fam Pract. 1998;15(1):76-79.
124. Johnson M, Long T, White A. Arguments for British pluralism in qualitative health research. J Adv Nurs. 2001;33(2):243-249.
125. Morse JM. Myth 93: reliability and validity are not relevant to qualitative inquiry. Qual Health Res. 1999;9(6):717-719.
126. Morse JM. Considering the peer in peer review. Qual Health Res. 2002;12(5):579-580.
131. Peck E, Secker J. Quality criteria for qualitative research: does context make a difference? Qual Health Res. 1999;9(4):552-558.
132. Poses RM, Isen AM. Qualitative research in medicine and health care: questions and controversy. J Gen Intern Med. 1998;13(1):32-38.
133. Poses RM, Levitt NJ. Qualitative research in health care: antirealism is an excuse for sloppy work. BMJ. 2000;320(7251):1729-1730.
135. Rolfe G. Validity, trustworthiness and rigour: quality and the idea of qualitative research. J Adv Nurs. 2006;53(3):304-310.
138. Ward MM. Study design in qualitative research: a guide to assessing quality. J Gen Intern Med. 1993;8(2):107-109.
144. Lincoln Y. Emerging criteria for quality in qualitative and interpretive research [keynote address]. Annual meeting, American Educational Research Association. San Francisco, CA; 1995.
Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Annals of Family Medicine 2008;6(4):331-339.
John W. Creswell
Dana L. Miller
Determining Validity in Qualitative Inquiry
Writing about validity in qualitative inquiry is challenging on many levels. Multiple perspectives about it flood the pages of books
(e.g., Lincoln & Guba, 1985; Maxwell, 1996; Merriam, 1998; Schwandt, 1997) and articles and chapters (e.g., Altheide & Johnson, 1994; Lather, 1993;
Maxwell, 1992). In these texts, readers are treated to
a confusing array of terms for validity, including authenticity, goodness, verisimilitude, adequacy, trustworthiness, plausibility, validity, validation, and
credibility. Various authors have constructed diverse
typologies of validity (e.g., Maxwell's five types,
1992; Lather's four frames, 1993; and Schwandt's
four positions, 1997). It is little wonder that Donmoyer (1996), who wrote an editorial on validity
in the Educational Researcher, commented on the
diverse perspectives of validity by contrasting Miles
and Huberman's (1994) traditional conception of
validity with Lather's (1993) "ironic validity" (p.
21). Novice researchers, in particular, can become
increasingly perplexed in attempting to understand
the notion of validity in qualitative inquiry.
There is a general consensus, however, that
qualitative inquirers need to demonstrate that their
studies are credible. To this end, several authors identify common procedures for establishing validity in
Paradigm Assumptions
522). To this end, researchers engage in validity procedures of self-disclosure and collaboration with
participants in a study. These procedures help to
minimize further the inequality that participants
often feel. For example, Carspecken's Critical Ethnography in Educational Research (1996) reports
validity procedures for tracking bias and interviews
with oneself as ways for researchers to be situated
in a study.
Table 1
Validity Procedures Within Qualitative Lens and Paradigm Assumptions
[Columns: Postpositivist or Systematic Paradigm; Constructivist Paradigm; Critical Paradigm. Lens of the Researcher row: Triangulation; Disconfirming evidence; Researcher reflexivity. Lens of Study Participants row: Member checking; Collaboration. Further procedures listed: Thick, rich description; Peer debriefing.]
Disconfirming evidence
A procedure closely related to triangulation
is the search by researchers for disconfirming or
negative evidence (Miles & Huberman, 1994). It
is the process where investigators first establish
the preliminary themes or categories in a study
and then search through the data for evidence that
is consistent with or disconfirms these themes. In
this process, researchers rely on their own lens,
and this represents a constructivist approach in that
it is less systematic than other procedures and relies on examining all of the multiple perspectives
on a theme or category.
In practice, the search for disconfirming evidence is a difficult process because researchers have
the proclivity to find confirming rather than disconfirming evidence. Further, the disconfirming
evidence should not outweigh the confirming evidence. As evidence for the validity of a narrative
account, however, this search for disconfirming
evidence provides further support of the account's
credibility because reality, according to constructivists, is multiple and complex.
Member checking
With member checking, the validity procedure shifts from the researchers to participants in
the study. Lincoln and Guba (1985) describe member checks as "the most crucial technique for establishing credibility" (p. 314) in a study. It consists
of taking data and interpretations back to the participants in the study so that they can confirm the
credibility of the information and narrative account.
With the lens focused on participants, the researchers systematically check the data and the narrative
account.
Several procedures facilitate this process. A
popular strategy is to convene a focus group of
participants to review the findings. Alternatively,
researchers may have participants view the raw data
(e.g., transcriptions or observational field notes)
and comment on their accuracy. Throughout this
process, the researchers ask participants if the
themes or categories make sense, whether they are
developed with sufficient evidence, and whether
the overall account is realistic and accurate. In turn,
researchers incorporate participants' comments into
the final narrative. In this way, the participants
add credibility to the qualitative study by having a
chance to react to both the data and the final narrative.
Researcher reflexivity
A third validity procedure is for researchers to
self-disclose their assumptions, beliefs, and biases.
This is the process whereby researchers report on
personal beliefs, values, and biases that may shape
their inquiry. It is particularly important for researchers to acknowledge and describe their enter-
Positioning Ourselves
Our approach is to use several validity procedures in our studies. Certainly some strategies
are easier to use than others, particularly those in-
References
Altheide, D.L., & Johnson, J.M. (1994). Criteria for
assessing interpretive validity in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.),
Handbook of qualitative research (pp. 485-499).
Thousand Oaks, CA: Sage.
American Educational Research Association, American
Psychological Association, & National Council on
Measurement in Education. (1982). Standards for educational and psychological testing. Washington,
DC: American Educational Research Association.
Campbell, D.T., & Stanley, J.C. (1966). Experimental
and quasi-experimental designs for research. In
Lather, P. (1993). Fertile obsession: Validity after poststructuralism. The Sociological Quarterly, 34, 673-693.
Lincoln, Y.S., & Guba, E.G. (1985). Naturalistic inquiry.
Newbury Park, CA: Sage.
Maxwell, J.A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62,
279-300.
Maxwell, J.A. (1996). Qualitative research design: An
interactive approach. Thousand Oaks, CA: Sage.
Merriam, S.B. (1998). Qualitative research and case study
applications in education. San Francisco: Jossey-Bass.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data
analysis: An expanded sourcebook (2nd ed.). Newbury Park, CA: Sage.
Miller, D.L., Creswell, J. W., & Olander, L.S. (1998).
Writing and retelling multiple ethnographic tales of
a soup kitchen for the homeless. Qualitative Inquiry,
4, 469-491.
Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.
Patton, M.Q. (1980). Qualitative evaluation methods.
Newbury Park, CA: Sage.
Ratcliffe, J.W. (1983). Notions of validity in qualitative
research methodology. Knowledge: Creation, Diffusion, Utilization, 5(2), 147-167.
Richardson, L. (1994). Writing: A method of inquiry. In
N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 516-529). Thousand Oaks, CA:
Sage.
Schwandt, T.A. (1997). Qualitative inquiry: A dictionary
of terms. Thousand Oaks, CA: Sage.
Schwandt, T.A., & Halpern, E.S. (1988). Linking auditing and metaevaluation: Enhancing quality in applied research. Newbury Park, CA: Sage.