The purpose of this article is to discuss the conceptual framework and strategies used in theory-driven evaluations in
relation to mixed methods research and to explore the opportunities and challenges emerging from theory-driven
applications. Theory-driven evaluations have frequently applied mixed methods in the past, and these experiences
provide some insightful information for future development of mixed methods. In theory-driven evaluations, the
application of mixed methods is justified and applied under a conceptual framework called program theory. The
conceptual framework of program theory provides a plan and agenda for mixed methods to work collaboratively and
de-emphasizes their differences and incompatibilities. Based upon the conceptual framework of program theory, this
article provides several strategies for combining qualitative and quantitative methods in theory-driven evaluations.
Procedures in applying these strategies are systematically illustrated. Finally, this article discusses challenging issues
related to the future development of mixed methods, such as implications of the use of pure versus modified forms of
mixed methods and the advocacy of mixed methods research as a method paradigm versus a method use
paradigm.
Spring 2006
HUEY T. CHEN
An analogy for the use of mixed methods in theory-driven
evaluation is that of two experts who, while equally
skilled in their crafts, have very different skills and
incompatible views and preferences. Because they have
incompatible priorities and standards, it is difficult for
them to appreciate the other person's skills and
potential contributions, making it challenging to work
together to accomplish a task. Working together would
be possible, however, if there were a master plan that
carefully specified their roles and responsibilities for
accomplishing the task. The division of labor provided
by the plan would allow the two experts to contribute their
skills effectively without changing their own
world-views or preferences. The conceptual framework
of program theory offers such a plan for theory-driven
evaluations to use mixed methods. The conceptual
framework of program theory serves as a superordinate
goal for quantitative and qualitative methods to pursue
jointly (Chen, 1997a). The superordinate goal provides
opportunities for qualitative and quantitative methods
to use their strengths to make contributions to
achieving the common goal and de-emphasizes their
differences and incompatibilities. By focusing on the
superordinate goal, potential conflicts and tensions of
qualitative and quantitative methods are minimized.
The Conceptual Framework of Program Theory
When key stakeholders design or implement an
intervention program, they usually have some ideas
about how the program should be constructed and why
the program is supposed to work. Program theory is
defined as a set of explicit and/or implicit assumptions
held by stakeholders about what actions are required to
solve a social problem and why the problem will
respond to these actions (Chen, 2005). A program
theory is the stakeholders' theory. However,
stakeholders usually do not clearly and systematically
document their program theories. In conducting theory-driven
evaluations, evaluators need to facilitate
stakeholders' clarification of their program theories.
Chen (2005) provides a conceptual framework for
program theory that is useful in guiding evaluators in
facilitating stakeholders' clarification of their program
theories. This conceptual framework is illustrated in
Figure 1.
Figure 1 indicates that a program theory consists
of two models: an action model and a change model.
The change model at the bottom of Figure 1 depicts the
causal process generated by the program. The change
model consists of the following three components: (a)
intervention, which refers to a set of program activities
that focus on changing the determinants and outcomes;
(b) determinants, which refer to the leverages or
mechanisms that mediate between the intervention and
outcomes; and (c) outcomes, which refer to the
Switch Strategy
In this switch strategy, one first applies qualitative
methods to clarify stakeholders' program theory and
then uses quantitative methods to assess the program
theory. The switch strategy (qualitative then
quantitative sequence) also is frequently used in
theory-driven outcome evaluation to assess the change
model (Chen, 1990). The procedures related to such an
application are: (a) intensive interviews, and (b)
working groups.
Intensive interviews. In this research mode, an
evaluator carries out one-to-one interviews with key
stakeholders to help them make explicit their
assumptions about the major program components,
processes, and outcomes. This assists them in
formulating their program theories explicitly. This method is
often used with small-scale programs.
Working groups. Large-scale programs tend to
have many diverse and often vocal stakeholders. In
such programs, program theory needs to be developed
Table 1
Strategies for Applying Mixed Methods in Theory-Driven Evaluation

Program Theory     Empirical       Strategies
Clarification      Assessment
Qualitative        Quantitative    Switch, Complementarity,
                                   Contextual overlaying, Triangulation
Quantitative       Qualitative     Switch, Complementarity,
                                   Contextual overlaying, Triangulation
Summary
This article describes the conceptual framework of
program theory used in theory-driven evaluations
employing mixed methods. Under this framework, the
following strategies have been used to systematically
combine qualitative and quantitative methods in
an evaluation: (a) switch, (b) complementarity, (c)
contextual overlaying, and (d) triangulation. Benefits
and challenges in applying these strategies were
systematically illustrated. Two types of mixed methods
were also distinguished: pure and adapted. Because the
mixed methods tradition still lacks its own body of
unique methods that distinguish it from qualitative and
quantitative methods, the article advocates for mixed
methods as a method use paradigm rather than a
method paradigm at the current time.
Chen, H. T. (1997b). Normative evaluation of an anti-drug
abuse program. Evaluation and Program
Planning, 12, 195-204.
Chen, H. T. (2005). Practical program evaluation:
Assessing and improving planning,
implementation, and effectiveness. Thousand
Oaks, CA: Sage.
Chen, H. T., Grimley, D., Aban, C., Waithaka, Y., &
Bachmann, L. (2006). An evaluation of the
implementation of a computer-based,
provider-delivered HIV prevention intervention
in the primary care setting. Manuscript in
preparation.
Chen, H. T., Quane, J., Garland, N., & Mracin, P.
(1988). Evaluating an anti-smoking program:
Diagnostics of underlying causal mechanism.
Evaluation and the Health Professions, 11,
441-464.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation:
Design and analysis issues
for field settings. Boston: Houghton Mifflin.
Denzin, N. K. (1970). The research act in sociology.
London: Butterworth.
Gottlieb, N. H., Lovato, C. Y., Weinstein, R., Green, L.
W., & Eriksen, M. P. (1992). The
implementation of a restrictive worksite
smoking policy in a large decentralized
organization. Health Education Quarterly,
19(1), 77-100.
Greene, J., & Caracelli, V. (1997). Advances in mixed
methods evaluation: The challenge and
benefits of integrating diverse paradigms. San
Francisco: Jossey-Bass.
Guba, E. G. (1990). The alternative paradigm dialog.
In E. G. Guba (Ed.), The paradigm dialog (pp.
17-27). Newbury Park, CA: Sage.
Howe, K. R. (1988). Against the quantitative-qualitative
incompatibility thesis or dogmas
die hard. Educational Researcher, 17(8), 10-16.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed
methods research: A research paradigm whose
time has come. Educational Researcher,
33(7), 14-26.
Maruyama, G. M. (1998). Basics of structural equation
modeling. Thousand Oaks, CA: Sage.
Sechrest, L., & Sidani, S. (1995). Quantitative and
qualitative methods: Is there an alternative?
Evaluation and Program Planning, 18, 77-87.
Shadish, W. R., Cook, T. D., & Campbell, D. T.
(2002). Experimental and quasi-experimental
designs for generalized causal inference.
Boston: Houghton Mifflin.