
Effective use of mixed-method evaluation designs employing quantitative and qualitative methods requires clarification of important design and analysis issues. Design needs include assessments of the relative costs and benefits of alternative mixed-method designs, which can be differentiated by the independence of the different methods and their sequential or concurrent implementation. The evaluation reported herein illustrates an independent, concurrent mixed-method design and highlights its significant triangulation benefits. Strategies for analyzing quantitative and qualitative results are further needed. Underlying this analysis challenge is the issue of cross-paradigm triangulation. A comment on this issue is provided, in conjunction with several triangulation analysis strategies.




Triangulation in Evaluation: Design and Analysis Issues

Cornell University


Debate has accompanied the emergence of qualitative methodology and the naturalistic paradigm of inquiry within the evaluation arena. The intensity and persistence of this debate attest to its importance for both the theory and practice of evaluation. Though originally focused on the relative merits of quantitative versus qualitative methods and of positivist versus naturalistic paradigms, the debate has shifted to questions about the complementarity of these alternative methods and the degree of cross-perspective integration possible. This shift signals a greater acceptance of the naturalistic perspective, or at least of qualitative methods, within the evaluation community. There is also an emerging consensus that inquiry methods themselves are not inherently linked to one or the other paradigm
AUTHORS' NOTE: An earlier version of this article was presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, San Francisco, 1984.
© 1985 Sage Publications, Inc.

No. 5, October 1985 523-545


Downloaded from erx.sagepub.com by Banar Suharjanto on September 17, 2015


(Bednarz, 1983; Patton, 1980; Reichardt and Cook, 1979). Rather,

"methods are neutral in the sense that a hammer is neutral to its use for building fine furniture or smashing ants, that is, they serve the purposes of the researcher" (Bednarz, 1983: 4).
This consensus about neutrality of methods, along with the
widespread acceptance of qualitative methods, has afforded the
evaluation community a vastly increased repertoire of methodological
tools and has renewed interest in the time-honored methodological
strategy of triangulation (Denzin, 1978; Jick, 1983; Webb et al., 1966,
1980). Broadly defined, triangulation is "the multiple employment of sources of data, observers, methods, or theories" (Bednarz, 1983: 38)
in investigations of the same phenomenon. Between-method
triangulation is the use of two or more different methods to measure
the same phenomenon. The goal of triangulating methods is to
strengthen the validity of the overall findings through congruence
and/or complementarity of the results from each method. Congruence here means similarity, consistency, or convergence of results,
whereas complementarity refers to one set of results enriching, expanding upon, clarifying, or illustrating the other. Thus, the essence of
the triangulation logic is that the methods represent independent
assessments of the same phenomenon and contain offsetting kinds of
bias and measurement error (Campbell and Fiske, 1959).
Despite widespread advocacy of mixed-method evaluation designs
with triangulation of quantitative and qualitative methods, several
major obstacles inhibit their use. First, there is insufficient guidance
regarding the implementation of different mixed-methods designs,
which leads to confusion about the comparative costs and benefits of
design choices (Mark and Shotland, 1984). Similarly, there are too
few examples of data analysis in mixed-methods research, either in
terms of comparing or integrating results, and even fewer that meaningfully attend to the underlying issue of cross-paradigm triangulation. Both concerns, design and analysis, will be addressed in this
review of a two-part evaluation of program development processes in
an educational organization. Our focus in this discussion is on
triangulation in mixed-method designs employing quantitative and
qualitative methods that are linked to contrasting positivist and
naturalistic paradigms, respectively.




Mixed-method evaluation designs can be differentiated along two
dimensions: (a) the degree of independence of the quantitative and
qualitative data collection and analysis activities and (b) the degree to
which the implementation of both methods is sequential and iterative
versus concurrent. Sieber's (1973) often-cited discussion of "the integration of fieldwork and survey methods" in social research emphasizes the benefits accrued from the sequential, iterative use of both
methods by a single (i.e., not independent) researcher or research
team. Madey (1982) provides a similar discussion for evaluation contexts, specifically an evaluation of the federally funded educational
State Capacity Building program. In both examples, the authors
highlight the multiple benefits of interactive mixed-methods inquiry in
terms of design, data collection, analysis, and interpretation of
results. However, these benefits notwithstanding, a nonindependent,
sequential mixed-method strategy loses the capacity for triangulation.
In this strategy, the methods are deliberately interactive, not independent, and they are applied singly over time so that they may or may
not be measuring the same phenomenon.
Trend (1979) and Knapp (1979) illustrate the concurrent use of
survey and ethnographic methods by different members of a project
team for large-scale evaluations of federally funded demonstration
programs in the areas of low-income housing and experimental education, respectively. This strategy can be labelled "semi-independent,"
in that the quantitative and qualitative methods were implemented
separately by different individuals, but these individuals had some
degree of interaction and communication during the inquiry process
as members of a common project team. Both authors highlight the
tensions experienced between the quantitative and qualitative components of the evaluation. Trend focuses on tensions incurred in
resolving highly conflicting results, whereas Knapp includes problem
definition, evaluator role conflict, and policy relevance of results in
his discussion of tensions. These examples suggest that a semi-independent, concurrent mixed-method design requires a significant
increase in resources and invokes a variety of tensions. The lack of
methodological independence in this strategy also limits its capacity
for triangulation.



Examples of relatively independent and concurrent use of quantitative and qualitative methods to study the same phenomenon are more rare. (Note that an independent, sequential mixed-method strategy approaches existing practice in the profession at large, with different researchers/evaluators building on each other's work.)
However, there is a clear need for multiple "competing" evaluations, smaller in scope than the single "blockbuster" study, differentiated
by the designs and methods of separate project teams (Cronbach and
associates, 1980) or by orientation to a single stakeholder group
(Cohen, 1983; Weiss, 1983). The benefits cited for this strategy are
substantial, including improved project manageability and increased
opportunities for true triangulation.
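In present-day terms, the two-dimensional design typology reviewed above can be sketched as a small classification; the capacity labels merely paraphrase this review's assessments and are illustrative, not a formal taxonomy.

```python
# Sketch of the two-dimensional mixed-method design typology discussed above.
# The triangulation assessments paraphrase the review; they are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class MixedMethodDesign:
    independence: str   # "nonindependent" | "semi-independent" | "independent"
    timing: str         # "sequential" | "concurrent"

    def triangulation_capacity(self) -> str:
        # Only independent, concurrent implementation supports true triangulation.
        if self.independence == "independent" and self.timing == "concurrent":
            return "true triangulation possible"
        if self.independence == "nonindependent":
            return "capacity for triangulation lost"
        return "capacity for triangulation limited"

designs = [
    MixedMethodDesign("nonindependent", "sequential"),    # e.g., Sieber (1973)
    MixedMethodDesign("semi-independent", "concurrent"),  # e.g., Trend (1979)
    MixedMethodDesign("independent", "concurrent"),       # the present study
]
for d in designs:
    print(d.independence, d.timing, "->", d.triangulation_capacity())
```

The two dimensions are deliberately kept as independent attributes, mirroring the review's point that independence and sequencing vary separately across designs.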
This brief review underscores the need for comparative assessments
of various mixed-method designs to help dispel current confusion
about their interchangeability. These assessments should clarify the
relative costs and benefits of different designs on such criteria as project management, validity, and utilization of results. As illustrated,
different designs pose different logistical requirements, and the logic
of triangulation requires not just multiple methods but their independent, concurrent implementation as well (McClintock and Greene).


The literature is even more sparse regarding the actual processes of

mixed-method data analysis. As suggested by Trend (1979: 83), "the tendency is to relegate one type of analysis or the other to a secondary role, according to the nature of the research and the predilections of the investigators." Jick (1983: 142) offered the following reflection on
this issue:
It is a delicate exercise to decide whether or not results have converged. In theory, multiple confirmation of findings may appear routine. If there is congruence it presumably is apparent. In practice, though, there are few guidelines for systematically ordering eclectic data in order to determine congruence or validity.... Given the differing nature of multimethod results, the determination is likely to be subjective.

Subjectivity, however, accompanies all data interpretation. In short, with the renewed interest in mixed-method designs comes the need for systematic strategies for jointly analyzing and triangulating the results of quantitative and qualitative methods.
The underlying issue here is the possibility of integrating the different paradigms guiding the different methods. That is, can mixed-method data analysis strategies achieve between-paradigm integration, or must one triangulate results within a single paradigm, relegating one set of data, either the quantitative or the qualitative, as subsidiary to the other? The essence of the debate on this issue is well captured by its participants, first the "yes" position, followed by the "no":
In fact, all of the attributes which are said to make up the paradigms are logically independent. Just as the methods are not logically linked to any of the paradigmatic attributes, the attributes themselves are not logically linked to each other.... There is nothing to stop the researcher, except perhaps tradition, from mixing and matching the attributes from the two paradigms to achieve that combination which is most appropriate for the research problem and setting at hand (Reichardt and Cook, 1979: 18).

Cross-philosophy triangulation is not possible because of the necessity of subsuming one approach to another. There are conflicting requisites from the totality of the perspective, i.e., the location of causality and its derivatives regarding validity, reliability, the limits of social science, and its mission.... There have been calls for the selective use of parts of the "qualitative and quantitative paradigms" in the belief that the researcher can somehow stand outside a perspective when choosing the ways to conduct social research. I have argued that even for individuals who can see the differences of alternative perspectives it is not possible to simultaneously work within them because at certain points in the research process to adhere to the tenets of one is to violate those of the other. Put differently, the requirements of differing perspectives are at odds (Bednarz, 1983: 39, 41; emphasis in the original).

(Additional perspectives on this debate are found in Guba and Lincoln, 1981: 76-77; Ianni and Orr, 1979; Patton, 1980; and Smith,
1983a, 1983b.)

To illustrate these mixed-method design and analysis issues, we will
review a two-part evaluation of a structured program development
process used by an adult and community education organization. The
evaluation included both a mail questionnaire administered to a sample and on-site, open-ended interviews conducted with purposively selected state and local staff. Both methods addressed the
same phenomena, but each was designed and implemented by separate
evaluation teams. The evaluation thus represents an independent, concurrent mixed-method design. The evaluation also illustrates several
analysis strategies for comparing quantitative and qualitative results.
Further, the deliberate linkage of the questionnaire with a positivist
perspective and the interview with a naturalistic perspective allowed us
to comment on the issue of cross-perspective triangulation.


The questionnaire and interview components were implemented concurrently during the spring of 1984 by two separate evaluation
teams who started with a common conceptual framework, kept each
other informed of activities and progress during the study, but otherwise functioned independently.3 Each set of data was analyzed
separately and summarized in separate reports, from which an integrated summary of major findings and recommendations was
prepared and disseminated within the client organization. The questionnaire component was expected to identify specific changes needed
in the program development process. The emphasis in the interview
component was largely descriptive. In both components, data collection focused on the nature and role of information used in program development, including the degree to which current practices of information gathering, exchange, interpretation, and reporting met needs for program decision making and accountability.
More specifically, the study's conceptual framework focused on information needs in program development and was developed from
literatures on evaluation utilization and organizational decision making. Utilization issues centered on the importance of identifying
evaluation questions that are of priority interest to program
stakeholders (Gold, 1981; Guba and Lincoln, 1981; Patton, 1978).
Evaluation studies that address important stakeholder information
needs are more likely to produce results perceived as useful and actually used. However, understanding these information needs-or more
broadly, the role of information in program decision making-is
complicated considerably by organizational and political factors
(Lotto, 1983; Thompson and King, 1981; Weiss, 1975). For example,



stakeholder information needs can be influenced by (a) the perceived and actual models of decision making operating in the organization
(Allison, 1971); (b) the different uses made of information (e.g., instrumental, conceptual and/or symbolic uses, Feldman and March,
1982; Leviton and Hughes, 1981); and (c) the degree of uncertainty
surrounding program goals and which inputs and program processes
would lead to goal attainment (Maynard-Moody and McClintock,
1981).4 These and other aspects of the organizational context were included to provide a broader understanding of the role of information
in program development and decision making.
Further, the use of two different methods in the design reflects
theoretical concerns about meaningful assessment of information
needs. In a recent review of initial trials of the stakeholder approach
to evaluation, Weiss (1983) questioned the assumption that decision
makers can articulate in advance their information needs, given the instability and lack of predictability inherent in most organizational
milieus. In the same review, the original architect of this approach
observed that &dquo;effective assessment of stakeholder needs remains a
serious concern&dquo; (Gold, 1983: 68). Evaluation theorists who have addressed this concern have consistently argued for such naturalistic
methods as open-ended interviewing (Cronbach, 1982; Gold, 1983;
Guba and Lincoln, 1981; Patton, 1978). The dual methodology allowed for a test of this argument and provided a basis for assessments of
triangulation design and analysis procedures.
The questionnaire contained 28 sets of questions. Eighteen close-ended sets assessed perceived needs for various kinds of information
(e.g., about clients, program resources, management, and outcomes),
the perceived usefulness of various methods of gathering information
for these same information needs, the perceived usefulness of existing
reporting methods for program development and accountability purposes, attitudes toward and inservice needs regarding long-range planning and evaluation, and uncertainties about programs and program
development processes. The 10 open-ended questions generally gave
respondents space to elaborate or comment further on these same
areas (e.g., kinds of information needed but not available). The questionnaire was mailed to selected populations of four stakeholder
groups (county staff and volunteers, campus faculty, and statewide
administrators). Usable questionnaires were returned by 233
respondents, representing a 70% response rate. Analyses were primarily descriptive, with many comparisons among the stakeholder groups.



The on-site, open-ended interviews focused on themes identified in the interview guide, specifically, respondents' perceptions of current program development processes, information uses and needs vis-à-vis their own program development responsibilities, and reasons for these needs or anticipated uses of this information. A total of 27 interviews was conducted with representatives of the same four stakeholder groups (10 county staff, 11 volunteers, 2 campus faculty, and 4 statewide administrators), all representing two counties that had been purposively selected with a set of sampling criteria. Interviews lasted 45 to 60 minutes and were conducted on-site (county or campus) by trained interviewers. Data were analyzed via an inductive, iterative analysis (Greene et al., 1984).

Assumptive and methodological characteristics of the questionnaire and the interview are contrasted in Table 1. This table portrays
the deliberate linkage of the two methods with their respective
paradigms. That is, the positivist nature of the questionnaire component is reflected in its intent to derive prescriptions for change from a
deductive analysis of responses on a predetermined set of specific
variables. Criteria of technical rigor guided questionnaire development (e.g., minimum measurement error), data collection (e.g., maximum response rate), and analysis (e.g., statistical significance)
toward a reductionistic prioritizing of major findings. In contrast, the
naturalistic nature of the interview component is reflected in its intent
to describe and understand inductively the domain of inquiry from the
multiple perspectives of respondents. Criteria of relevance and emic
meaning guided interview development (e.g., open-ended, unstructured), data collection (e.g., emergent, on-site), and analysis (e.g., inductive, thematic) toward an expansionistic, holistic description of
patterns of meaning in context.



Given the independent, concurrent mixed-method design, the triangulation analysis is conducted on separately written reports, not
on the raw data. The client also wrote an integrated summary of findings and recommendations for change based on the two reports,
though it was much less detailed than the analysis reported here. Our
efforts to compare, analyze, and integrate the two reports focus on




TABLE 1
Comparison of the Questionnaire and Interview Along Paradigmatic and Methodological Dimensions
a. Dimensions from Reichardt and Cook (1979) and Guba and Lincoln (1981). Not applicable to this study and thus excluded are dimensions relevant to issues of causality and to the design and implementation of a treatment.


three levels: descriptive results, major findings, and recommendations for change.

Descriptive results. As shown in Table 2, the descriptive results are compared by constructing a matrix reflecting the organization and
major substantive areas of each report. The column and row headings
of the matrix represent the major section headings of the questionnaire and interview reports, respectively, and matrix entries represent
examples of results from within one or both report sections. This
strategy allows for comparisons of both specific results and broader patterns of results.

Specific results can be reviewed for between-method congruence and complementarity. For example, the first section of the questionnaire report presents results on respondent characteristics, and the
first section of the interview report describes characteristics of the
people involved in the organization. Both of these sections contain
similar information on the longevity of volunteers and staff with the
organization, an instance of congruent findings. This questionnaire
report section also notes the role changes of many members during
their tenure with the organization, whereas results in this interview
report section further highlight the strong, positive interpersonal
perceptions among organization members, an instance of complementary findings.
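The matrix strategy described above can be expressed, in present-day terms, as a small data-structure sketch; the section names, entries, and tags below are hypothetical stand-ins, not the study's actual report headings or findings.

```python
# Illustrative sketch of the matrix strategy for comparing two reports.
# Section names and entries are hypothetical stand-ins, not the study's data.
from collections import defaultdict

# Matrix cells keyed by (questionnaire_section, interview_section);
# each entry is a result tagged as "congruent" or "complementary".
matrix = defaultdict(list)
matrix[("Respondent characteristics", "People in the organization")] += [
    ("member longevity reported by both methods", "congruent"),
    ("interviews add interpersonal perceptions", "complementary"),
]
matrix[("Information needs", "Program development process")] += [
    ("existing reports lack utility for county-level work", "congruent"),
]

def tally(matrix, tag):
    """Count entries carrying a given tag across all matrix cells."""
    return sum(1 for entries in matrix.values()
               for _, t in entries if t == tag)

def diagonal_share(matrix, pairing):
    """Fraction of entries falling in cells whose row and column sections
    correspond, per an analyst-supplied pairing of section headings."""
    total = sum(len(e) for e in matrix.values())
    diag = sum(len(e) for (q, i), e in matrix.items() if pairing.get(q) == i)
    return diag / total if total else 0.0

pairing = {"Respondent characteristics": "People in the organization",
           "Information needs": "Program development process"}
print(tally(matrix, "congruent"))        # count of congruent entries
print(diagonal_share(matrix, pairing))   # 1.0 when all entries fall on the diagonal
```

A diagonal share near 1.0 would correspond to the parallel report organizations noted below; off-diagonal entries would flag results that the two reports locate in quite different substantive areas.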
For this particular study, Table 2 illustrates a high degree of
between-method congruence and complementarity of specific results.
The matrix entries include several instances of similar findings from
the two different methods, for example the member longevity noted
above and the lack of utility of information in existing reports for
county level program development efforts. There are also many instances of complementary findings that enrich and expand upon each
other: from the interviews, a description of the informal network of
ongoing communication and support that characterizes the program
development process, and from the questionnaire, perceived needs for
strengthening some communication linkages in this program development network.
This matrix display also allows for a review of broader patterns of
between-method congruence and complementarity of results. Such
patterns can be assessed by comparing the content and organization of
the major sections of each report and by analyzing the pattern of
matrix entries. In this particular study, these patterns reveal a strong



parallelism in the two sets of results, providing additional convergent support for the overall validity of the findings. The report section headings show many similarities, and most of the matrix entries fall along
the diagonal (a pattern that would hold if the matrix included all
descriptive results). Although part of this parallelism is attributable to
the common purpose guiding both inquiries, very different organizational frameworks could have been expected and accepted from the
two methods.


Major findings. Despite this convergence of descriptive results, the major findings of the two reports bear little resemblance to one
another either in substance or in form. (See Tables 3 and 4 for these
questionnaire and interview findings, respectively, presented in condensed form.) Substantively, the questionnaire conclusions are
prescriptive, whereas the interview summary themes remain descriptive. In form, the questionnaire conclusions are focused, selective, and
specific (reductionist); the interview themes are broad and general (expansionist) and also incorporate contextual and affective information
obtained during the interview process (tacit knowledge). Further, the
questionnaire conclusions represent derived relationships among
discrete variables (particularistic), whereas interview themes represent
patterns of meaning in context (holistic). In short, the summary findings of each component are highly consistent with the purpose,
assumptions, and characteristics of the differing methodologies used
(refer to Table 1).
This within-method consistency was further pursued by analyzing
the links between the descriptive results and major findings of each
report. The results, illustrated in Tables 3 and 4, reveal the markedly
different analytic processes guiding the two forms of inquiry. For the
questionnaire, the general pattern is a one-to-one mapping of selected
descriptive results on major findings. Overall, this pattern represents
the analytic guidance provided by an a priori set of questions in the
questionnaire component. For example, because the context-relevant
descriptive results-for instance, on respondent characteristics and
perceived strengths of programs-are not related to these a priori
questions, they are not represented in the prescriptive conclusions.
Further, each of the other sets of descriptive results contributes to only one conclusion, and all conclusions are based on only one set of
results. This pattern well reflects the deductive, reductionistic, linear
nature of the quantitative questionnaire data analysis.
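The two analytic patterns just described, a one-to-one mapping versus a web of interconnections, can be contrasted with a small present-day sketch; the finding and result labels are invented for illustration and do not reproduce the study's items.

```python
# Hypothetical sketch contrasting the two results-to-findings patterns.
# All labels are invented for illustration; the study's actual items differ.

# Questionnaire: each conclusion draws on exactly one set of results (one-to-one).
questionnaire_map = {"finding A": ["results 1"],
                     "finding B": ["results 2"],
                     "finding C": ["results 3"]}

# Interview: themes weave together multiple, overlapping sets of results (a web).
interview_map = {"theme X": ["results 1", "results 2", "results 4"],
                 "theme Y": ["results 2", "results 3"],
                 "theme Z": ["results 1", "results 3", "results 4"]}

def is_one_to_one(mapping):
    """True when every finding rests on a single result set and no
    result set feeds more than one finding."""
    used = [r for results in mapping.values() for r in results]
    return (all(len(v) == 1 for v in mapping.values())
            and len(used) == len(set(used)))

print(is_one_to_one(questionnaire_map))  # linear, reductionist pattern
print(is_one_to_one(interview_map))      # expansionist web of interconnections
```

Treated as a bipartite graph of results and findings, the questionnaire pattern is a matching while the interview pattern is a denser many-to-many graph, which is one way to make the reductionist/expansionist contrast concrete.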



In contrast, the interview pattern is like a web of interconnections, weaving the threads of the descriptive results into the fabric of major themes. Again, this overall pattern reflects the fact that the interview analysis was guided only by broad domains of inquiry. More specifically, (a) nearly all of the interview descriptive results, contextual and substantive, are incorporated into the major thematic findings; (b) many of these results contribute to more than one theme; and
(c) the summary themes clearly are based on multiple sets of results.
This pattern well reflects the emergent, expansionistic, holistic nature
of the qualitative interview data analysis.
Moreover, we believe that this within-method consistency for both
study components, revealed in the differing substance, form, and
derivation of their summary findings, is largely attributable to the independence of the two efforts. In our view, this independence preserved the assumptive and methodological integrity of each component.
But, what are the implications of these different summary findings for
between-method triangulation? For answers to this, we now turn to an
analysis of recommendations for change.
Recommendations for change. The questionnaire data identify improvements needed in the program development process, whereas the
interview data describe the complexities and details of this process as
conducted in the two selected counties. These results are consistent
with the client's expectation that the detail and depth of the interview
findings would make questionnaire recommendations more meaningful and easier to interpret.
Implementation of the questionnaire-based recommendations by
themselves is constrained by two limitations of the methodology, one
obvious and the other less apparent. First, a mail questionnaire, even
with pilot testing and open-ended questions, is limited in its capacity
for representing details of description, nuances of meaning, and patterns of interaction, a limitation especially problematic for crosssectional designs. With the addition of the contextual data from the
interviews, this limitation is at least partially countered. As shown in
Tables 2, 3, and 4, the interview data highlight the complexity of information sharing both inside and outside the organization. There are
many formal actors in this process: county and state staff, volunteers,
campus faculty, outside agency personnel, and community leaders.
Interview analyses identify the strength of relations among individual
actors, the network of interconnections among groups of actors, and



the types of information exchange that occur. Interview data also portray the feelings and values of the various actors for each other's contributions and for the organization as a whole.
The second limitation of the mail questionnaire is, ironically, also
one of its major strengths. With a mail questionnaire, it is possible to
collect data from a larger cross section of the population for a given
cost and, therefore, to attempt to make the recommendations for
change representative of the respondent groups sampled. In an action-oriented study, however, it is often necessary to ask the kinds of specific, detailed questions that render the questionnaire too specialized for some respondent groups. This is the case in the present study, in which the response rate for the volunteers' stratum (53%) is much
lower than that for the rest of the sample (86%). Thus, although
the questionnaire is more representative of the organization on a
statewide basis than is the interview, it systematically excludes
respondents who feel marginal to the formalized aspects of program
development represented in the questionnaire. The interview format
more successfully captures the perceptions and understandings of all
participants in the program development process. The interview's
more integrated portrayal of this process thereby strengthens the final questionnaire-based recommendations for change by (a) reducing the likelihood that recommendations will be ignored by grounding them
in an existing strong link or useful exchange within the network; (b)
focusing recommendations on specific types of actors or groups that
are connected to others by weak links; (c) incorporating into recommendations the perceptions and values that are important to the actors; and (d) reducing the risk of recommended new activities replacing useful existing ones.
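As a present-day aside, the stratum response rates reported above (53% for volunteers, 86% for the rest) can be checked for arithmetic consistency with the overall figures of 233 returns and a 70% rate; the stratum sample sizes below are assumptions chosen to fit, not values reported in the study.

```python
# Back-of-envelope consistency check. The stratum sizes are ASSUMPTIONS,
# not reported figures; they merely show that rates of 53% and 86% can
# reproduce the overall totals of 233 returns and a ~70% response rate.
sampled = {"volunteers": 162, "others": 171}   # hypothetical sizes
rates = {"volunteers": 0.53, "others": 0.86}   # reported stratum rates

returned = {g: round(sampled[g] * rates[g]) for g in sampled}
overall = sum(returned.values()) / sum(sampled.values())
print(returned)           # per-stratum returns under the assumed sizes
print(round(overall, 2))  # overall rate, close to the reported 70%
```

The point of the check is the one the text makes: a markedly lower rate in one stratum can coexist with a respectable overall rate, masking the systematic exclusion of that stratum's perspective.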
What emerges from the results of the two components is a set of
recommendations for change that has structure, substance, and
strength. If one imagines the entire set of change recommendations as
a tent, the questionnaire data help to determine the shape of the structure and the number and placement of the stakes. The interview findings provide the connecting ropes and the degree of tension or slack
that must be applied to maintain the integrity of the structure under
varying conditions. This imagery also aptly describes the successful process and product of complementary between-method triangulation, in that the results of one method (the interview in this case) serve to complement, enrich, and thereby strengthen the implications of the other.

Summary. This effort at between-method triangulation of results reveals congruence and complementarity at the level of specific
results, significant substantive and structural differences at the level of
major findings that preclude meaningful integration, and complementarity again at the level of recommendations for change. This effort
also illustrates several strategies for comparing and analyzing results
from quantitative and qualitative methods. Given the deliberate linking of method with paradigm in this study, does this effort also constitute an example of cross-paradigm triangulation?
We think not. Following Bednarz (1983), Guba and Lincoln (1981),
and Smith (1983a, 1983b), we suggest that triangulation is possible only within paradigms, that any effort to compare or integrate findings
from different methods requires the prior adoption of one paradigm
or the other, even when, as was true in this study, the methods themselves are linked to and implemented within alternative paradigms.

The integrity of the method-paradigm linkage in the present study
is illustrated by the differences in the major findings of each component. Each set of findings well represents the contrasting assumptions
of the methodology used. Further, it is precisely these differences that
thwart triangulation efforts at this level. More successful, in terms of
congruence and complementarity of findings, are triangulation efforts
at the levels of specific descriptive findings and discrete recommendations for change. This specificity and discreteness, however, reflect the
particularistic, reductionist stances of the questionnaire paradigm, not
the expansionist, holistic stances of the interview paradigm.
Further, in the client's written summary of recommendations for change, interview results were consciously allocated a secondary, supportive role (Trend, 1979), consistent with the study objectives.
All of this argues that our triangulation effort was conducted not across paradigms, but rather from the perspective represented by the questionnaire. To reinforce this point, we imagined what the effort would have looked like if conducted from the perspective represented by the interview. Our speculations suggest that very little of the questionnaire data would fit with, make sense alongside, or otherwise be of convergent or complementary value to the interview results.



This article uses the results of a mixed-method evaluation to illustrate significant design and analysis issues related to integrating quantitative and qualitative methods, specifically between-method and cross-paradigm triangulation. The strong link between method and paradigm deliberately established in this study significantly facilitated this discussion.
The mixed-method evaluation design involved the independent,
concurrent implementation of a quantitative questionnaire and a
qualitative interview guide, both investigating the same phenomena.
The benefits of this mixed-method strategy appear to be twofold.
First, unique to this strategy, the independence of the two study components preserved the assumptive and methodological integrity of
each, thus maximizing the intended value of each set of results and
avoiding the kinds of between-method tensions reported by Trend
(1979) and Knapp (1979). Second, opportunities for triangulation of
results were significantly aided by the independent and concurrent implementation of both components. As illustrated, between-method
triangulation of results can enhance fulfillment of study objectives
beyond that provided by a single method (though increased costs,
notably evaluator time, must also be noted). This illustration also offered several specific strategies for triangulated analysis of quantitative and qualitative results.
However, even with the method-paradigm linkage, we have argued
that the triangulation effort in this study was conducted from the
perspective represented by the questionnaire and thus does not constitute an instance of cross-paradigm triangulation. Different
epistemological origins and assumptions preclude the possibility or
sensibility of cross-paradigm triangulation.

NOTES
1. For simplicity, the terms "paradigm" and "perspective" will be used interchangeably, and the labels positivist and naturalistic will be used respectively to refer to (a) the traditional, dominant perspectives of logical positivism and postpositivism, realism, and experimentalism and (b) the emergent perspectives of idealism, phenomenology, symbolic interactionism, and ethnomethodology in the evaluation field.



Differences among members of these two camps of perspectives will not be addressed (see Bednarz, 1983; Norris, 1983; Phillips, 1983; and Smith, 1983a, 1983b). The
quantitative and qualitative labels will be reserved for types of methods and data.
2. For discussions of the substantive findings of the questionnaire and interview components of this study, see McClintock and Nocera (1984) and Greene (1984).

3. Three individuals were members of both teams.
4. Other major influences on information needs are the cognitive processes and
personality styles of the individual users of information. See Nisbett and Ross (1980)
and Kilmann (1979) for two different approaches to understanding these individual difference factors.
5. In one of the counties, the interviews preceded the questionnaire, whereas the
order was reversed in the other. The interview data from both counties were similar, and
the questionnaire data consistent with responses statewide. Thus, the double assessment
in these two counties did not seem to affect their results.
6. Both the questionnaire and the interviews also served instructional purposes, providing field experiences for graduate courses in evaluation methods.

REFERENCES
ALLISON, G. T. (1971) Essence of Decision: Explaining the Cuban Missile Crisis.
Boston: Little, Brown.
BEDNARZ, D. (1983) "Quantity and quality in evaluation research: A divergent
view." Revised version of paper presented at the Joint Meeting of the Evaluation
Network and the Evaluation Research Society, Chicago.
CAMPBELL, D. T. and D. W. FISKE (1959) "Convergent and discriminant validation
by the multitrait-multimethod matrix." Psychological Bulletin 56: 81-106.
COHEN, D. K. (1983) "Evaluation and reform," in A. S. Bryk (ed.) Stakeholder-Based Evaluation. Beverly Hills, CA: Sage.
CRONBACH, L. J. (1982) Designing Evaluations of Educational and Social Programs.
San Francisco: Jossey-Bass.
&mdash;&mdash;&mdash;and associates (1980) Toward Reform of Program Evaluation. San Francisco: Jossey-Bass.
DENZIN, N. K. (1978) The Research Act: An Introduction to Sociological Methods.
New York: McGraw-Hill.

FELDMAN, M. S. and J. G. MARCH (1982) "Information in organizations as signal and symbol." Administrative Science Quarterly 26: 171-186.
GOLD, N. (1983) "Stakeholders and program evaluation: characterizations and reflections," in A. S. Bryk (ed.) Stakeholder-Based Evaluation. San Francisco: Jossey-Bass.
&mdash;&mdash;&mdash;(1981) The Stakeholder Process in Educational Program Evaluation. Washington, DC: National Institute of Education.



GREENE, J. C. (1984) "Toward enhancing evaluation use: organizational and methodological perspectives." Presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, San Francisco.
&mdash;&mdash;&mdash;J. L. COMPTON, B. RUIZ, and H. SAPPINGTON (1984) "Successful strategies for implementing qualitative methods." Unpublished manuscript.
GUBA, E. G. and Y. S. LINCOLN (1981) Effective Evaluation. San Francisco: Jossey-Bass.
IANNI, F. A. and M. T. ORR (1979) "Toward a rapprochement of quantitative and qualitative methodologies," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.
JICK, T. D. (1983) "Mixing qualitative and quantitative methods: Triangulation in
action," in J. Van Maanen (ed.) Qualitative Methodology. Beverly Hills, CA: Sage.
KILMANN, R. H. (1979) Social Systems Design. New York: North-Holland.
KNAPP, M. S. (1979) "Ethnographic contributions to evaluation research," in T. D.
Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.

LEVITON, L. C. and E. F. HUGHES (1981) "Research on the utilization of evaluations: a review and synthesis." Evaluation Review 5: 525-548.

LOTTO, L. S. (1983) "Revisiting the role of organizational effectiveness in education
evaluation." Educational Evaluation and Policy Analysis 5: 367-378.
MADEY, D. L. (1982) "Some benefits of integrating qualitative and quantitative
methods in program evaluation, with illustrations." Educational Evaluation and
Policy Analysis 4: 223-236.
MARK, M. M. and R. L. SHOTLAND (1984) "Problems in drawing inferences from
multiple methodologies." Presented at the Joint Meeting of the Evaluation
Network and the Evaluation Research Society, San Francisco.
MAYNARD-MOODY, S. and C. McCLINTOCK (1981) "Square pegs in round holes:
program evaluation and organizational uncertainty." Policy Studies Journal
9: 644-666.
McCLINTOCK, C. and J. C. GREENE (forthcoming) "Triangulation in practice."
Evaluation and Program Planning.
&mdash;&mdash;&mdash;(1984) "Conceptual and methodological considerations in assessing
information needs for planning and evaluation." Program Evaluation Studies
Paper Series 6, Department of Human Service Studies, Cornell University.
McCLINTOCK, C. and C. NOCERA (1984) "Information management in program planning and evaluation: a study of program development in Extension home economics." Program Evaluation Studies Paper Series 8, Department of Human Service Studies, Cornell University.
NISBETT, R. and L. ROSS (1980) Human Inference: Strategies and Shortcomings of
Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.
NORRIS, S. P. (1983) "The inconsistencies at the foundation of construct validation theory," in E. R. House (ed.) Philosophy of Evaluation, New Directions for Program Evaluation 19. Beverly Hills, CA: Sage.
PATTON, M. Q. (1980) Qualitative Evaluation Methods. Beverly Hills, CA: Sage.
&mdash;&mdash;&mdash;(1978) Utilization-Focused Evaluation. Beverly Hills, CA: Sage.
PHILLIPS, D.C. (1983) "After the wake: postpositivist educational thought."
Educational Researcher 12: 4-12.




REICHARDT, C. S. and T. D. COOK (1979) "Beyond qualitative versus quantitative methods," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.
SIEBER, S. D. (1973) "The integration of field work and survey methods." American Journal of Sociology 78: 1335-1359.

SMITH, J. K. (1983a) "Quantitative versus qualitative research: an attempt to clarify the issue." Educational Researcher 12: 6-13.
&mdash;&mdash;&mdash;(1983b) "Quantitative versus interpretive: The problem of conducting social
inquiry," in E. R. House (ed.) Philosophy of Evaluation, New Directions for
Program Evaluation 19. Beverly Hills, CA: Sage.
THOMPSON, B. and J. A. KING (1981) "Evaluation utilization: a literature review and research agenda." Presented at the Annual Meeting of the American Educational Research Association, Los Angeles.
TREND, M. G. (1979) "On the reconciliation of qualitative and quantitative analyses:
a case study," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and
Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.
WEBB, E. J., D. T. CAMPBELL, R. D. SCHWARTZ, and L. SECHREST (1980) Nonreactive Measures in the Social Sciences. Chicago: Rand McNally.
&mdash;&mdash;&mdash;(1966) Unobtrusive Measures: Nonreactive Research in the Social Sciences. Chicago: Rand McNally.
WEISS, C. H. (1983) "Toward the future of stakeholder approaches in evaluation,"
in A. S. Bryk (ed.) Stakeholder-Based Evaluation. San Francisco: Jossey-Bass.
&mdash;&mdash;&mdash;(1975) "Evaluation research in the political context," in E. L. Struening and M. Guttentag (eds.) Handbook of Evaluation Research. Beverly Hills, CA: Sage.

Jennifer Greene is Assistant Professor of Human Service Studies at Cornell University. Her extensive field experience in educational and other human service program evaluation has led to research interests in the areas of evaluation methodology and information utilization. Her current research efforts focus on the balance between user-responsiveness and technical quality in evaluation via case study investigation of the stakeholder approach to evaluation.

Charles McClintock is Assistant Dean for Educational Programs and Policy in the College of Human Ecology at Cornell University. He is also Associate Professor in the Department of Human Service Studies, where his teaching and research interests focus on evaluation methods and information management in organizations.
