
Educational Action Research

ISSN: 0965-0792 (Print) 1747-5074 (Online) Journal homepage: http://www.tandfonline.com/loi/reac20

Error, bias and validity in qualitative research

Nigel Norris

To cite this article: Nigel Norris (1997) Error, bias and validity in qualitative research, Educational
Action Research, 5:1, 172-176

To link to this article: https://doi.org/10.1080/09650799700200020

Published online: 03 Sep 2007.

THEORETICAL RESOURCES

Error, Bias and Validity in Qualitative Research

NIGEL NORRIS
University of East Anglia, Norwich, United Kingdom

At its most rudimentary, validity refers to the reasons we have for believing
truth claims, what Dewey called “warranted assertibility” (Phillips, 1987).
These truth claims may take the form of statements of fact, descriptions,
accounts, propositions, generalisations, inferences, interpretations,
judgements or arguments. Irrespective of their form what is important is
why we believe the things that we do and how we justify the claims we make.

Attempts to apply the concepts of validity associated with quantitative
research to the design and practice of qualitative research (LeCompte &
Goetz, 1982; Evans, 1983; Yin, 1984) have yet to show their operational
relevance to case study, ethnography or naturalistic inquiry. Part of the
problem concerns a failure to recognise that the nature of naturalistic
inquiry is markedly different from experimental design. This does not mean
that concepts of validity are inapplicable or nonsensical, but it does mean
that we have to re-appraise their meaning and use.
Joseph Maxwell (1992) developed a typology for categorising forms of
understanding and corresponding types of validity that are more relevant to
qualitative research. His account of validity in qualitative research is, at
least in part, an attempt to uncover ‘theory-in-use’. He distinguishes five
types of validity: descriptive validity, interpretive validity, theoretical validity,
generalisability and evaluative validity.[1]
Maxwell notes that in experimental research threats to validity are
“addressed in an anonymous, generic fashion by prior design features such
as randomization and controls”. By contrast in qualitative research the prior
“elimination of threats is less possible” (p. 296). In delineating the different
kinds of understanding and the types of validity associated with them he is

trying to establish a framework for thinking about the threats to validity in
qualitative research.
Most of the conventional constructs of validity are inappropriate for
naturalistic forms of inquiry. It is also possible that they are inappropriate
for the actual, as opposed to idealised, way that all forms of social and
educational research are done; which is to say, they do not represent the
actual validating practices of researchers.
A number of writers have questioned both the appropriateness of
conventional constructs of validity and the ideology that sustains them.
Lather (1993, p. 683), for instance, notes the “crisis of authority” in all
knowledge systems and the “discourses of validity that appear no longer
adequate to the task.” Scheurich (1994, p. 11) writes about regimes of truth
and the imperialism of validity; its prescriptive nature, demarcating
acceptable from unacceptable research – “Validity is the determination of
whether the Other has been acceptably converted into the Same, according
to a particular epistemology.” Although such challenges to the conventional
discipline of research are provocative and offer a much needed corrective to
faith in technique and technical discourse, they don’t help a great deal with
what to do.
One practical way to think about the issue of validity is to focus on
error and bias. Research, whether quantitative or qualitative, experimental or
naturalistic, is a human activity subject to the same kinds of failings as
other human activities. Researchers are fallible. They make mistakes and get
things wrong. There is no paradigm solution to the elimination of error and
bias. Different forms of research may be prone to different sources of error,
but clearly none are immune. Although there are methodological strategies
for handling validity (Miles & Huberman, 1984; Elliott, 1990; Phelan &
Reynolds, 1996), less consideration has been given to researcher bias and to
the personal and social strategies needed to address it.
Simultaneously, research demands scepticism, commitment and
detachment. To understand the object or domain of inquiry takes an intense
degree of commitment and concentration. To remain open-minded, alert to
foreclosure and to sources of error needs some measure of detachment. As
with other forms of art, research requires detachment from oneself, a
willingness to look at the self and the way it influences the quality of data
and reports; in particular research demands a capacity to accept and use
criticism, and to be self-critical in a constructive manner.
All research has to start somewhere. Researchers have to take some
things for granted; to act they must accept much of the world as given. They
also need to be able to review these presuppositions in the light of experience
or to imagine the world differently so as to maintain their scepticism.
Some of our presuppositions are, in a sense, paradigmatic. They
represent our preferred ways of solving research problems; preferences that
are often as much to do with personal strengths and weaknesses as they are
to do with determining the best match between methodology and problem.
However, preferences can be challenged and the limitations of particular
research designs or strategies acknowledged. Others can be involved in the
research process, as critical friends, as colleagues and participants to
broaden the scope of perception and experience. While there may not be a
paradigmatic solution to error and bias, there are certainly things that can
be done.
It is not difficult to label a whole range of potential sources of bias in
research. For example:
• the reactivity of researchers with the providers and consumers of information;
• selection biases including the sampling of times, places, events, people, issues, questions and the balance between the dramatic and the mundane;
• the availability and reliability of various sources or kinds of data, either in general or their availability to different researchers;
• the affinity of researchers with certain kinds of people, designs, data, theories, concepts, explanations;
• the ability of researchers, including their knowledge, skills, methodological strengths, capacity for imagination;
• the value preferences and commitments of researchers and their knowledge or otherwise of these;
• the personal qualities of researchers, including, for example, their capacity for concentration and patience; tolerance of boredom and ambiguity; their need for resolution, conclusion and certainty.
The problem is that while it is easy to label potential sources of bias it is not
possible to construct rules for judging the validity of particular studies or
domains of inquiry. Nor is it possible to specify procedures which if followed
will systematically eliminate bias and error. We need, therefore, to think of
the social processes that might keep research honest and fair and enhance
its quality.
A consideration of self as a researcher and self in relation to the topic
of research is a precondition for coping with bias. How this can be realised
varies from individual to individual. For some, it involves a deliberate effort
at voicing their prejudices and assumptions so that they can be considered
openly and challenged. For others, it happens through introspection and
analysis. The task, if you like, is seeing what frames our interpretations of
the world.
Such efforts do not have to rely only on self-criticism and judgement.
Data can be reviewed by others to indicate something of the personal style of
the researcher. The views of participants in the research can be elicited to
learn how they see the researcher, the process of research and the accounts
it has generated (participant validation). Their judgements as to the quality
of research can be used to improve both methodological and substantive
accounts.
Accounts can be analysed for the implicit assumptions about
significance and style. Critical friends and colleagues can help the
researcher explore their preferences for certain kinds of evidence,
interpretations and explanations and consider alternatives, locate blind
spots and omissions, assess sampling procedures to highlight selection
biases, examine judgements and make the processes of research more
public.
Validity enhancing practices do not ensure that research is accurate,
correct, certain, trustworthy, objective or any of the other surrogates we use
for truth. There are no guarantees, no bedrock from which verities can be
derived. It is in the nature of research that knowledge can always be revised.
In the words of Patti Lather (1993, p. 697) validity is “multiple, partial,
endlessly deferred”. This does not mean, however, that anything goes.

Correspondence
Nigel Norris, Centre for Applied Research in Education, School of Education,
University of East Anglia, Norwich NR4 7TJ, United Kingdom.

Note
[1] Descriptive validity refers to the factual accuracy of accounts and all other
categories of validity depend in some way on this primary aspect of validity.
Descriptive validity concerns acts, physical and behavioural events, things that
are in principle observable or can be apprehended. “Descriptive understanding”,
says Maxwell, “pertains ... to matters for which we have a framework for
resolving ... disagreements” (p. 287). Interpretive validity concerns the intentions,
beliefs, thoughts, feelings, understandings of the people whose lives are
represented in an account. “Interpretive accounts are grounded in the language
of the people studied and rely as much as possible on their own words and
concepts” (p. 289). Unlike descriptive validity, for interpretive validity there is no
“in principle access to data that would unequivocally address threats to validity”
(p. 290). “Accounts of participants’ meanings are never a matter of direct access,
but are always constructed by the researcher(s) on the basis of the participants’
accounts and other evidence” (p. 290). Like descriptive validity, interpretive
validity depends on consensus with the relevant community and the concepts
and terms used are close to experience. Theoretical validity refers “to an account’s
validity as a theory of some phenomenon” (p. 291). There are two aspects to
theoretical validity; the validity of the concepts and categories applied to the
phenomena, and the validity of the postulated relations among the concepts.
“The first of these aspects of theoretical validity closely matches what is generally
known as construct validity ... The second aspect includes, but is not limited to,
what is commonly called internal or causal validity (Cook & Campbell, 1979)”
(p. 291). Theoretical validity is concerned with problems that do not disappear
with agreements on the “facts” of the situation. Generalisability “refers to the
extent to which one can extend the account of a particular situation or
population to other persons, times or settings than those directly studied” (p.
293). In qualitative research generalisation is usually premised on the
assumption that an account or theory may be useful in making sense of similar
persons or situations. It is a process that depends on comparison. In qualitative
research there are two aspects to generalisability: (i) generalising within the
community studied to persons, events and settings that were not directly
encountered, and (ii) generalising to other communities. Evaluative validity raises
issues about the application of ethical or moral frameworks and judgements in
an account.


References
Cook, T. & Campbell, D. (1979) Quasi-experimentation: design and analysis issues for
field settings. Boston: Houghton Mifflin.
Elliott, J. (1990) Validating case studies, Westminster Studies in Education, 13,
pp. 47-60.
Evans, J. (1983) Criteria of Validity in Social Research: exploring the relationship
between ethnographic and quantitative approaches, in M. Hammersley (Ed.)
The Ethnography of Schooling. Driffield: Nafferton Books.
Lather, P. (1993) Fertile obsession: validity after poststructuralism, Sociological
Quarterly, 34, pp. 673-693.
LeCompte, M. & Goetz, J. (1982) Problems of reliability and validity in ethnographic
research, Review of Educational Research, 52, pp. 31-60.
Maxwell, J. (1992) Understanding and validity in qualitative research, Harvard
Educational Review, 62, pp. 279-300.
Miles, M. & Huberman, M. (1984) Qualitative Data Analysis. London: Sage.
Phelan, P. & Reynolds, P. (1996) Evidence and Argument: critical analysis for the social
sciences. London: Routledge.
Phillips, D.C. (1987) Validity in qualitative research: why the worry about warrant will
not wane, Education and Urban Society, 20, pp. 9-24.
Scheurich, J.J. (1994) The masks of validity: a deconstructive interrogation. Paper
presented to the AERA annual meeting, New Orleans. Mimeo, Department of
Educational Administration, University of Texas, Austin.
Scriven, M. (1981) Evaluation Thesaurus, 3rd edn. Inverness: Edgepress.
Yin, R.K. (1984) Case Study Research: designs and methods. London: Sage.
