TRIANGULATED RESEARCH DESIGNS -- A JUSTIFICATION?

Author(s): Gayle M. Rhineberger, David J. Hartmann and Thomas L. Van Valey


Source: Journal of Applied Sociology, Vol. 22, No. 1, Special Joint Issue with "Sociological
Practice" (Spring 2005), pp. 56-66
Published by: Sage Publications, Inc.
Stable URL: http://www.jstor.org/stable/43736102
Accessed: 01-08-2016 05:57 UTC

This content downloaded from 202.43.95.117 on Mon, 01 Aug 2016 05:57:42 UTC
All use subject to http://about.jstor.org/terms

TRIANGULATED RESEARCH DESIGNS - A JUSTIFICATION?

Gayle M. Rhineberger, University of Northern Iowa
David J. Hartmann, Western Michigan University
Thomas L. Van Valey, Western Michigan University

ABSTRACT

The use of triangulated research designs is becoming increasingly popular, particularly in applied sociology and evaluation research. There is a substantial amount of literature on triangulated research methods, particularly in the fields of social research methods and nursing. This paper examines the uses of the concept of triangulation in applied sociological research. It does so first by reviewing uses of the term in various applied contexts. We then turn to whether and how the information derived from multiple methods is actually integrated by the applied researcher. Finally, we discuss the importance of triangulation for the quality of work in the field of applied sociology.

According to Creswell (1994), the term "triangulation" was first used by Denzin (1970a) to advocate the use of multiple methods in a single study. Creswell is certainly not correct, for Denzin's own reader, published in the same year, includes an article from 1966 by Webb that used the term. Webb, in turn, cited several others, including a 1958 article by Feigl. Webb's presentation of triangulation was quite consistent with more recent versions. For example, he pointed out that,

Every data-gathering class - interviews, questionnaires, observation, performance records, physical evidence - is potentially biased and has specific to it certain validity threats. Ideally, we should like to converge data from several data classes, as well as converge with multiple variants from within a single class (Webb, 1970: 450).


Journal of Applied Sociology/Sociological Practice, Vol. 22 No. 1/ Vol. 7 No. 1, Spring 2005: 56-66. Society for Applied Sociology and the Sociological Practice Association

There is no question that Denzin's reader and methods book brought focus and insight to the discussion of the concept of triangulation in sociology. That was a significant advance. Without doing justice to the range of that discussion, we may indicate what, for Denzin, is the purpose of methodological triangulation.1 It is to accommodate biases that are implicit in different data sources or methods, such that bias in one source or method would be reduced or even eliminated by the use of another data source or method (1970b: 471; Creswell, 1994, citing Jick, 1979). Caracelli and Greene (1997: 22) further suggest that triangulation designs are those in which "different methods are used to assess the same phenomenon toward convergence and increased validity." This logic suggests that combinations of methods actually gain strength when the components are somewhat dissimilar. This is often the rationale used for the increasingly common advice to combine qualitative and quantitative methods, for example.
TRIANGULATION AS VALIDATION AND EXTENSION

A fundamental underlying point is that different approaches are more likely to offset bias only when the nature of the biases is clearly understood. Similarly biased methods, for example, might actually reinforce the misleading (biased) component of results as much as or more than a single method. The language and logic of bias reduction and compensation in pursuit of validity are, of course, most consistent with the realist ontologies underlying positivist social science. Despite the protests of Guba and Lincoln (1989) and others, however, the pursuit of better (e.g., more "credible") understanding in interpretivist and constructionist work is also reasonable through triangulation. One substitutes credible constructions for valid results as the goal (of triangulation). Indeed, it is quite straightforward (see Curtis and Petras, 1970: Introduction) to conceive of triangulation as resting on an intersubjective basis rather than an absolute one.
Again, therefore, methods should not only be different enough to provide real difference in perspective but must also be of quality in themselves. Finally, these properties of quality must be presentable in metrics that reach across the methods/results (we would then say that the results are "commensurable") to support the productive combination of findings. How seriously we take this notion of common metrics may largely determine the utility of the use of multiple methods in applied sociology.

One extreme is certainly provided by the physical science model of triangulation that has served as the guiding metaphor for this entire enterprise. Recall that triangulation, as an approach to the measurement of distance (in any
practical situation, from the surveying of a field to the estimation of astral distances in astronomy), is not a combination of methods at all. Rather, it is simply the same approach to measurement made from two different points along the baseline. In such situations, neither measurement has merit by itself. It is the method of combining them that produces a useful result - that of "triangulation" in the measurement of distance or "trigonometric parallax" as it is called in the calculation of the distance to a star. Moreover, the precise and reproducible measurement supported by that application of trigonometry may not be as clearly applicable in the realm of social science where different measurements of somewhat lesser and unknown quality are typically used.
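The geometry behind this metaphor is compact enough to state exactly. The sketch below is our illustration, not part of the original article; the function name and the equilateral example are ours. It computes the perpendicular distance to a target from two angle sightings taken at the ends of a baseline:

```python
import math

def triangulated_distance(baseline, alpha, beta):
    """Perpendicular distance to a target sighted from both ends of a
    baseline. alpha and beta are the angles (in radians) between the
    baseline and the two lines of sight."""
    ta, tb = math.tan(alpha), math.tan(beta)
    return baseline * ta * tb / (ta + tb)

# Two sightings at 60 degrees across a 100 m baseline place the target
# about 86.6 m away -- neither sighting alone determines the distance.
d = triangulated_distance(100.0, math.radians(60), math.radians(60))
```

Note how each angle alone has no merit as a distance estimate; only the trigonometric combination yields one, and an error in either sighting propagates directly into the result.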

Often it is not even clear what it means to have a common metric or scale across different methods, like interviews and document analysis for example. It is apparent that Webb and his colleagues at Northwestern were engaged in a largely quantitative exploration of compensating biases through careful selection of methods in the late 1960s (Webb, 1966; Webb et al., 1966). However, this thread seems to have become less clear and traceable in more recent literature.

Certainly, simple injunctions over the past two decades that using different methods increases the validity of our research findings do not tell the whole story. Lincoln and Guba (1985) and Caudle (1994) present a similar understanding of triangulation as validity enhancement but use somewhat different terms. They discuss triangulation as a technique for "verifying" information that is obtained with other methods. Lincoln and Guba (1985) also stress that triangulation is crucial in qualitative (naturalistic) approaches. They assert that when new information is uncovered, different methods should then be used to validate that information. They further state, "No single item of information (unless coming from an elite and unimpeachable source) should ever be given serious consideration unless it can be triangulated" (Lincoln and Guba, 1985: 283).

This verification rationale is really a derivative of the validity argument discussed above. For what is added by verification with a second method is increased confidence in an underlying commonality of result. Moreover, this could not be obtained from repeated applications of the first method even if reliability of that method were not at issue. We may record highly reliable findings on repeated measures of a single method but still retain serious concern that what has been measured is either a partial or misleading view of what is important. If the measures from one or more methods have reliability problems as well, the challenge of recognizing important congruence in the results is compounded.

Caudle (1994) makes a parallel argument in claiming that triangulation can be used to increase the "credibility" of qualitative designs in particular. According to Caudle (1994: 85), "Qualitative evaluation measurements generally are very
personal and reflect the evaluator's perceptions, values, and professional training." Thus, she recommends the use of triangulation as a means of improving the credibility of qualitative research. She defines triangulation even more broadly, as "... the combining of methods, data sources, and other factors in examining what is under study" (Caudle, 1994: 89). She continues with a statement about what the goal of triangulation should be.

Congruence and/or complementarity of results from each method is the goal of triangulation. Congruence is defined as similarity, consistency, or convergence of results, while complementarity refers to one set of results expanding upon, clarifying, or illustrating the other. If done properly, triangulation should rely on independent assessments with offsetting kinds of bias and measurement errors (Caudle, 1994: 89-90).

In this view, using triangulation in research with qualitative results may therefore improve the qualitative proxies for validity and reliability, but only if several different methods yield similar and consistent results (congruency).
Of course, the congruence of results from multiple methods does not establish that more credible understanding is achieved, just as it does not establish that validity is actually enhanced. Simply, if the question is the credibility or validity of a result, identifying another congruent result does not prove anything. In both cases, one requires outside support in the form of theory. Fortunately, that is unavoidable; and the fact that theory is interactive with, rather than strictly outside of, empirical investigation is not fatal to a pragmatic standard of theory (following Rorty, 1982). For example, Blakie (1991: 23) seems to agree with this stance but goes too far in expressing concern that this cheapens social science results, rendering them a "matter of judgment." But using judgment to make sense of data is hardly a condemnation of social science - or physical science for that matter. Absent an unambiguous formulaic protocol for combining the results of two methods, one may still combine them productively.
We have argued that the terms "verification" and "credibility" are forms of the validity argument for triangulation. Nevertheless, the terms are not strictly synonymous and point out again the imprecision with which calls for triangulation are made. If we return to the root metaphor, that of triangulation in the calculation of distance, we note that the second measurement does not in any way verify or make more credible the first. Indeed, any error in either measurement completely invalidates the end result. One might argue that in some sense the two perspectives (positions of measurement) adjust for the inadequacy of either one alone. However, this is the looser spirit with which triangulation is often used in social science.

Perhaps a better physical science metaphor is found in the wonderful description James Gleick uses to open his book, Faster (1999). Gleick reflects on a visit to the Directorate of Time, a division of the U.S. Department of Defense. Since the telling of official time is too important to be left to a single clock, no matter how advanced, a worldwide network of clocks combines to do the job. Gleick reports, "Out-of-sync clocks reveal themselves quickly." The director of the operations gives an analogy: "It's like a court of law, where you have many slightly different stories and one widely different story." Gleick (1999: 4) continues, "... the plausible witnesses are chosen and assembled, their output is statistically merged ... the result is exact time." This is a fairly clear model for the congruence standard we saw to be so central in much of the discussion of triangulation in social science.
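The ensemble logic Gleick describes - screen out the "widely different story," then statistically merge the plausible witnesses - can be sketched in a few lines. This is a toy illustration of ours, not the Directorate's actual algorithm; the function name and the tolerance cutoff are assumptions:

```python
import statistics

def merged_time(readings, tolerance=0.5):
    """Drop readings far from the median (the out-of-sync clocks),
    then average the remaining, mutually plausible ones."""
    med = statistics.median(readings)
    plausible = [r for r in readings if abs(r - med) <= tolerance]
    return statistics.fmean(plausible)

# Four clocks telling slightly different stories and one telling a
# widely different one; the outlier is excluded before merging.
clocks = [12.01, 11.99, 12.02, 12.00, 15.70]
t = merged_time(clocks)  # close to 12.005
```

The point of the metaphor survives the simplification: congruence among many imperfect sources, not any single source, is what produces the trusted result.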
Science, even social science, is supposed to work in this iterative way. We observe something as best we can. We then observe it again and again in different contexts, perhaps with slightly different tools or protocol, and repeat the iterative expansion of data collection as much as possible. The results are then compared and a conclusion is drawn. So triangulation in the validity/verification/credibility sense is after all the essence of the cumulative congruence that lies at the heart of the scientific method as it has been understood since the Enlightenment.

With this as our model, the call for triangulation becomes clearer, but its limitations also come into clearer focus. We cannot allow just any old clock into our network. Nor can we expand the network indefinitely. Leaving aside the overly quantitative and statistical flavor of the metaphor, what is needed are standards - if not a Directorate of Triangulation - that must lie at the heart of this necessary endeavor. What does it mean for results from different methods to agree? Perhaps more importantly, what does it mean if they disagree, and in that case, which, or how much of which gets thrown out?
There is a final use of the term "triangulation" in the social science literature - that of multiple methods used to produce new and alternative information rather than to validate or affirm existing information. Many applied researchers, including Steele, Hauser, and Hauser (1999), Yin (1998), Patton (1986), and Hedrick et al. (1993), describe triangulation as a combination of multiple research methods used to obtain more or less independent information from different perspectives. Whereas this could be construed simply as another way of increasing validity, it appears to be a means to broaden research agendas and extend the scope of the data obtained. In their discussion of the issues surrounding the conduct of research in an organization, Steele, Hauser, and Hauser (1999: 68-69) suggest that individual research techniques,

... may be limiting in that they focus on only one area or view of the

organization. For this reason, it is best that the researcher use a combination of techniques whenever possible. This process is called triangulation and, as the name suggests, it presents a number of different views of the phenomenon in order to get a more accurate picture of it.

They further suggest that such triangulation is desirable particularly when studying other cultures, or when culture is a key variable in the study. They indicate that using only one method will not provide enough information to allow the researcher to obtain an accurate picture of that culture.

In addition to the study of cultures, Steele, Hauser, and Hauser (1999) advocate triangulation for studying social change processes, as well as case study research. Along similar lines, Yin (1998) suggests that case studies require the researcher to use several different data collection methods in order to obtain the depth and breadth of information needed to understand fully the phenomenon under study. When teased out, this view of triangulation as additional information is clearly distinct from the validation idea and probably deserves a different name.

The distinction among the various views of triangulation is most easily seen in what we have taken to be the central practical tension in such calls for triangulation, that is, how seriously one is concerned about integrating the results of the different methods used. In validation, the concern is with the quality of each result, and therefore with the ability of additional methods to produce findings that address deficiencies inherent in other methods. The various findings must therefore be commensurable - that is, there must be a common scale along which they can be measured so that what is worthwhile may be discerned and integrated. In the alternative view of triangulation, rooted in complementarity or the need for additional information, there is less interest in the quality of each measure, and there is less emphasis on integration. The notion is rather akin to that of heaping stones to form a wall. Each stone is self-contained and is not improved by the collection and aggregation of others. The wall, of course, is literally another thing, and its quality depends precisely on the care with which it is put together. Still, some walls are really no more than heaps of stones.

For example, let us say, following Steele, that we wish to understand an organization or a culture. One method gives us insight to a part of that complex. Another method gives insight to another part, and so on. No critique of the individual methods and results is necessary. It seems quite reasonable that a better (perhaps "fuller" is more appropriate) understanding is produced in this way, but only because our focus was on the larger entity. The issue of commensurability, for example, is avoided. In contrast, with respect to validation, we must separate the
valid from the invalid in every finding, and in a way that allows findings to be
combined. Metrics - dimensions and scales - matter. In aggregation, additional
information is an end in itself.

In sum, triangulation is an imprecise term applied in a variety of ways to the use of multiple methods to improve the quality of understanding. What is meant by improved quality includes at least two things: 1) enhancing the quality of what are thought to be the potentially flawed or biased results of single methods through integration, and 2) enhancing the range or amount of information through the aggregation of methods. Finally, it is certainly common in both the practice and the teaching of social science methods to blur these distinctions. The remainder of this paper explores the utility of these distinctions.
IMPLICATIONS

The most obvious implication of this discussion is that calls for triangulation are most powerful when they are made in terms of the congruence of results and of correcting bias through the integration of measures. Indeed, they should be explicit enough to justify the increased resources required by adding methods. Furthermore, the payoff must be documented. This, in turn, requires that standards of commensurability be built into the various methods. It does us no good to conduct interviews and also to do text analysis, for example, if we have not thought about what congruent and incongruent findings would look like. We must remember, following Denzin, that there are likely to be differences among data sources and investigators as well as among the methods involved. Triangulation requires attention to the commonality that exists along all these dimensions. Moreover, it must be done in advance. It is far too tempting to find congruence after the fact, especially because we are likely to be predisposed in that direction by the decision to include multiple methods. Tighter standards are only possible beforehand.
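One way to make "in advance" concrete is to write the congruence standard down as part of the design, before any data arrive. The fragment below is purely illustrative - the construct, indicators, midpoints, and rule are invented for the example, not drawn from the article:

```python
# A minimal pre-registered comparison plan, declared before any data are
# collected. All names here are hypothetical illustrations.
comparison_plan = {
    "staff_morale": {
        "survey_indicator": "mean morale scale score (1-5)",
        "interview_indicator": "share of interviewees coded 'positive'",
        "congruent_if": "both fall on the same side of their midpoints",
        "if_incongruent": "report both results; do not average them",
    },
}

def is_congruent(survey_score, interview_share,
                 survey_midpoint=3.0, interview_midpoint=0.5):
    """Apply the declared rule: the two methods agree when both results
    fall on the same side of their respective midpoints."""
    return (survey_score > survey_midpoint) == (interview_share > interview_midpoint)
```

Declaring the rule beforehand removes the temptation, noted above, to discover congruence after the fact.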

It is telling that recent discussions of commensurability have tended to focus on more underlying and abstract notions than operationalization and measurement. Concerns about commensurability in ontological and epistemological terms are no doubt important (see Greene and Caracelli, 1997). But they are not substitutes for greater specification of the commensurability that may be present at more concrete levels.
CONCLUSION

Regardless of how we conceptualize triangulation or our reasons for using it, we have to decide what to do with the data we get from this process. According to the existing literature, triangulation, by any definition, is of value in the collection
of valid and reliable data. Its use will strengthen our research and result in better, more usable recommendations. However, there is virtually no mention made of how to organize and present the results of the research when triangulation is used. Furthermore, although some authors believe that triangulation can be used to verify information obtained from other sources, they provide little advice for the all-too-common situation in which multiple methods produce not consistent but conflicting findings.
The missing piece of the puzzle about triangulation is a discussion of how to integrate the findings from a triangulated research project. This becomes an even more difficult endeavor when the methods produce results in different scales of measurement (i.e., they are technically incommensurable) as is typically the case in the mix of qualitative and quantitative data sources. Patton (1986) acknowledges that the use of multiple methods, especially when quantitative and qualitative are used together, will be challenging. Evaluators will have to ensure that they do not privilege some data over others, and that the merits of all data are considered. What he seems to be implying is that when using multiple methods, qualitative methods should not be perceived as less useful and less valid in comparison to quantitative data, simply because quantitative data are presumed to be more accurate and reliable. Unfortunately, he offers little in the way of strategy for achieving this goal.


One possible approach to combination, of course, is a retreat from explicit standards of comparison and combination. In the spirit of what might be called "methodological pluralism," all results are created equal as a matter of faith. Combinations of results, therefore, tend to take on aspects of the aggregation model (the heaping of stones) rather than the validation model with its reliance on statistical combinations according to complex formulas. Such an approach potentially offers a broader understanding of the phenomenon under study. But it would certainly be subject to researcher and researcher/client biases regarding the various forms of data when the results are inconsistent. The concern, of course, is that inaccurate conclusions could be drawn and inappropriate recommendations could be made.

The baseline example, the use of multiple operationalizations within a particular method, is, indeed, a form of triangulation but is less controversial precisely because issues of commensurability tend to be less serious. That such issues are generally present nevertheless is not surprising to anyone who has tried to combine different scales of self-esteem or locus of control or sense of coherence. Nor, for that matter, should it surprise those who have attempted to understand what is common and different in a locus-of-control measure and a sense of
coherence measure. Is divergence of results the effect of different bias and error or of differences in what is measured? Regardless, in such situations one can often rely on some comparability in metric and even use quantitative methods like factor analysis and discriminant function analysis to combine and contrast different combinations (see Hartmann, 2001). Tests of reliability and common variance are potentially available. Cross-cultural comparative methods are of an intermediate order of complexity. In this case, the translation of an instrument, for example, involves language as well as conceptual or context translations. Some of these, again, are at least potentially testable (see Kohn et al., 1990). But when one turns to results from multiple methods - between-method triangulation - the comparison and contrast become fuzzy indeed.
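At the within-method level, a "common variance" check of the kind mentioned above has a simple concrete form. The sketch below is our illustration; the scale data and function names are invented. It standardizes two measures onto a common metric and reports their shared variance as a squared correlation:

```python
import statistics

def zscores(xs):
    """Put a measure on a common metric: mean 0, standard deviation 1."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

def shared_variance(a, b):
    """Squared Pearson correlation: a rough index of how much two
    scales (say, two locus-of-control measures) overlap."""
    r = statistics.fmean(x * y for x, y in zip(zscores(a), zscores(b)))
    return r * r

# Hypothetical scores from two scales given to the same respondents.
scale_a = [10, 12, 15, 11, 18, 14]
scale_b = [21, 25, 29, 23, 35, 27]
overlap = shared_variance(scale_a, scale_b)  # between 0 and 1
```

Between methods, by contrast, no such ready-made metric exists, which is precisely why between-method comparison is so much fuzzier.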


We joked above about a Directorate of Triangulation to mimic the Directorate of Time (itself, supposedly, a metaphor for an overly quantitative artificial bureaucracy). Obviously, we are not proposing any such entity, or a body of members that would be given the decision-making power. Apart from such meta-standards, however, we are proposing that any study employing multiple methods must state its own explicit expectations about how the findings from those methods will be compared, evaluated, and combined.


NOTE

1. As distinguished from theoretical, data, and investigator triangulation (Denzin, 1970b: 472).


REFERENCES

Blakie, Norman W. H. 1991. "A Critique of the Use of Triangulation in Social Research." Quality and Quantity 25: 115-36.

Caracelli, Valerie J. and Jennifer C. Greene. 1997. "Crafting Mixed-Method Evaluation Designs." New Directions for Evaluation 74: 19-32.

Caudle, Sharon L. 1994. "Using Qualitative Approaches." In Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer, eds., Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass (pp. 69-95).

Creswell, John W. 1994. Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage.

Curtis, James E. and John W. Petras. 1970. "Introduction." In The Sociology of Knowledge: A Reader. London: Praeger.

Denzin, Norman K. 1970a. The Research Act. Chicago: Aldine.

Denzin, Norman K. 1970b. Sociological Methods: A Sourcebook. Chicago: Aldine.

Gleick, James. 1999. Faster. New York: Pantheon Books.

Guba, Egon and Yvonna Lincoln. 1989. Fourth Generation Evaluation. Newbury Park, CA: Sage.

Guba, Egon and Yvonna Lincoln. 1994. "Competing Paradigms in Qualitative Research." In Norman K. Denzin and Yvonna S. Lincoln, eds., Handbook of Qualitative Research. Thousand Oaks, CA: Sage (pp. 105-117).

Hartmann, David J. 2001. "Replication and Extension: Analyzing the Factor Structure of Locus of Control Scales for Substance Abusing Behaviors." Psychological Reports 84: 277-87.

Hedrick, Terry E., Leonard Bickman, and Debra J. Rog. 1993. Applied Research Design: A Practical Guide. Newbury Park, CA: Sage.

Jick, T. D. 1979. "Mixing Qualitative and Quantitative Methods: Triangulation in Action." Administrative Science Quarterly 24: 602-611.

Kohn, Melvin L., Atsushi Naoi, Carrie Schoenbach, Carmi Schooler, and Kazimierz M. Slomczynski. 1990. "Position in Class Structure and Psychological Functioning in the United States, Japan, and Poland." American Journal of Sociology 95: 964-1008.

Lincoln, Yvonna S. and Egon Guba. 1985. Naturalistic Inquiry. Beverly Hills: Sage.

Patton, Michael Quinn. 1986. Utilization-Focused Evaluation. Beverly Hills: Sage.

Rorty, Richard. 1982. "Method, Social Science, and Social Hope." In Consequences of Pragmatism. Minneapolis: University of Minnesota Press (pp. 191-210).

Shotland, R. Lance and Melvin M. Mark. 1987. "Improving Inferences from Multiple Methods." New Directions for Program Evaluation 35: 77-94.

Steele, Stephen F., Anne Marie Scarisbrick-Hauser, and William J. Hauser. 1999. Solution-Centered Sociology: Addressing Problems through Applied Sociology. Thousand Oaks, CA: Sage.

Webb, Eugene J. 1966. "Unconventionality, Triangulation, and Inference." Proceedings of the Invitational Conference on Testing Problems, October 29 (pp. 34-43). Reprinted in Norman K. Denzin, ed., Sociological Methods: A Sourcebook. Chicago: Aldine, 1970.

Webb, E. J., D. T. Campbell, R. D. Schwartz, and L. Sechrest. 1966. Unobtrusive Measures. Chicago: Rand McNally.

Yin, Robert K. 1998. "The Abridged Version of Case Study Research: Design and Method." In Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer, eds., Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass (pp. 229-260).