See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/304490736
All content following this page was uploaded by Fons Van de Vijver on 27 June 2016.
enhance methodological rigor. Bias and equivalence are generic terms referring to
measurement issues that challenge inferences from research studies. The concepts equally
apply to qualitative and quantitative studies, but, as they originate in quantitative cross-
cultural studies, the following discussion mainly derives from the quantitative tradition.
Bias refers to systematic errors that threaten the validity of measures administered in different
cultures. The existence of bias implies that differences in observed scores may not
correspond to genuine differences in the target construct. If not taken into account, bias can
challenge the comparability of scores across cultures. Minimizing bias and maximizing
equivalence are the prerequisites for valid comparison of cultures. Three types of bias and
three levels of equivalence are considered, after which we describe ways of dealing with bias
and enhancing equivalence.
equivalence.
Construct bias, method bias, and item bias are distinguished on the basis of the sources of
incomparability. Construct bias occurs when, in a cross-cultural study, the target construct
varies in meaning across the cultures. For example, filial piety in most cultures is defined as
respect for, love of, and obedience to parents, whereas, in East Asia, the concept broadens to
include a larger responsibility toward parents, such as caring for them when they grow old and
need help.
CROSS-CULTURAL METHODOLOGY 2
Method bias involves incomparability resulting from sampling, assessment instruments, and
administration procedures. Sample bias refers to differences in sample characteristics, such
as age, gender, and socioeconomic status, that have a bearing on observed score
differences. Instrument bias refers to artifacts of instruments, such as response styles when
Likert-scale responses are used, and stimulus familiarity (the different levels of familiarity of
respondents with the assessment instrument). Administration bias can arise from cross-cultural
variations in the impact of the conditions under which an assessment is administered.
Item bias, also known as Differential Item Functioning (DIF), occurs if persons with the same
trait (or ability) level, but coming from different cultures, are not equally likely to endorse the
item (or solve it correctly). As a result, an item may have a different psychological meaning
across cultures. The sources of item bias can be both linguistic (e.g., lack of translatability of
a word or phrase) and cultural (e.g., item content that is differentially familiar or appropriate
across cultures). Whereas bias refers to threats to score comparability, equivalence refers to
the level of comparability that is actually attained; three levels of equivalence have been
established. Construct equivalence means that the same theoretical
construct is measured across all cultures. Without construct equivalence, there is no basis for
cross-cultural comparison. Metric equivalence implies that different measures at the interval
or ratio level have the same measurement unit across cultures but different origins. An analog
is temperature measurement using degrees Celsius and Kelvin; the measurement unit (the
degree) of the two measures is the same, yet their origins are 273.15 degrees different
(degrees Kelvin = degrees Celsius + 273.15). Scores may be compared within cultural groups
(e.g., male and female differences can be tested in each culture) but cannot be compared
directly across cultures unless the measurement scales have the same measurement unit and
origin (i.e., scalar equivalence). In this case, statistical procedures that compare means across
cultures (e.g., t tests or analyses of variance) can be applied. Procedures for dealing with bias
exist in both qualitative and quantitative studies. Dealing with bias can be split up into
approaches that focus on
design (a priori procedures) and approaches that focus on analysis (a posteriori procedures).
The former aims to change the design (sampling, assessment instruments, and their
administration) in such a way that bias can be eliminated or at least controlled as much as
possible. For example, items that are difficult to translate can be adapted to increase their
applicability in the target culture. Design changes to avoid bias issues are often equally
applicable in qualitative and quantitative studies.
By contrast, the procedures to deal with bias after the data have been collected are often
different for qualitative and quantitative studies. Quantitative researchers have a large set of
statistical procedures available to deal with bias in data, as outlined below. However, various
other procedures, such as in-depth interviews and cognitive interviews (a probing technique
that reveals the thought processes of respondents who are presented with test items), can be
used in both qualitative and quantitative studies to reduce bias. For example, follow-up interviews
with participants or subgroups that manifested an extreme position on the target construct can
be useful for understanding the extent to which the extreme position is a consequence of bias
rather than a genuine cultural difference. Quantitative approaches to construct bias examine the
cultural invariance (identity) of the structure of the construct and the adequacy of items used
for assessment. Both exploratory factor analysis (EFA) and confirmatory factor analysis
(CFA) can be used to detect construct bias or to ensure construct equivalence. When an
instrument is lengthy or does not have a clearly defined expected structure, EFA is preferred.
The rationale of this approach is that the identity of factors (or dimensions) in an instrument
used to assess the target construct across cultures is sufficient evidence for equivalence. The
identity of factors is tested through target rotations of the factor structure across cultures.
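The target-rotation check described above can be sketched numerically. The following is a minimal illustration, not any published implementation: it rotates one group's loading matrix toward the other's with an orthogonal Procrustes rotation and then computes Tucker's congruence coefficient per factor. The loading matrices are made up for the example.

```python
import numpy as np

def target_rotation(loadings, target):
    """Orthogonal Procrustes rotation of `loadings` toward `target`."""
    u, _, vt = np.linalg.svd(target.T @ loadings)
    rotation = vt.T @ u.T          # orthogonal matrix minimizing ||loadings @ R - target||
    return loadings @ rotation

def tucker_phi(a, b):
    """Tucker's congruence coefficient between two factor-loading columns."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical loadings (items x factors) from two cultural groups;
# the second group recovers the same factors, but in a different order.
group_a = np.array([[.80, .10], [.70, .00], [.10, .90], [.20, .80]])
group_b = np.array([[.10, .75], [.00, .72], [.85, .20], [.80, .10]])

rotated = target_rotation(group_b, group_a)
phis = [tucker_phi(rotated[:, k], group_a[:, k]) for k in range(group_a.shape[1])]
print(phis)  # values close to 1 suggest factorial similarity (.95 is a common rule of thumb)
```

Because the rotation only reorients the factor axes, a high congruence after rotation indicates that the two groups share the same factor structure, not merely similarly scaled loadings.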
When the expected structure of the target construct can be derived from theory or previous
research, CFA is more appropriate for confirming or disconfirming equivalence. CFA can test
a hierarchy of increasingly constrained models, in which identical factor structures, factor
loadings, and intercepts across cultures correspond to construct, metric, and scalar
equivalence, respectively. Given the high level of detail and statistical rigor that exist in CFA,
it is perhaps the most advanced procedure for establishing
quantitative equivalence. When there is construct bias, researchers should acknowledge the
incompleteness of the construct and may compare equivalent subfacets, such as the
comparison of the common operationalization of filial piety in Western and East Asian cultures.
In qualitative studies, there are fewer established procedures to deal with construct bias. The
definition of the target construct may be clarified through the interactions of investigators
with cultural experts. An in-depth analysis may reveal the common and unique aspects of a
construct across cultures. The need to evaluate existing, usually Western-based theoretical
frameworks before applying them in other cultural contexts is widely acknowledged in this tradition.
The procedures for reducing construct bias in quantitative and qualitative studies have their
own strengths and weaknesses. The power of the quantitative approach is statistical rigor,
which provides a firm basis for drawing conclusions about construct bias. Comparable rigor is
difficult to achieve in qualitative studies. At the same time, statistical
analyses can reveal whether, say, depression, as measured by the Beck Depression Inventory,
has the same underlying factors in all cultures that a particular study involves, but these
analyses cannot demonstrate that the instrument addresses salient aspects of depression in
specific cultures. This latter aspect, namely, exhaustively identifying the culturally relevant
aspects of a construct, is a strength of qualitative approaches. Common
sources of method bias are small, biased samples, especially when target respondents are
difficult to recruit (thus challenging the generalizability of findings to new samples from the
same population); response styles (such as social desirability); and the obtrusiveness of the
measurement procedure (e.g., the reactivity of respondents who know they are under
observation). These bias sources can be addressed by changes in the research design that
minimize the impact of the method used. An example is the administration of a social-desirability
scale alongside the target instrument so that its influence on the scores can be assessed. In
qualitative studies, it is common for the research team to develop a coding scheme, provided
they are involved in every stage
of the qualitative study. If necessary, external cultural experts, as well as interviewees, can be
invited to scrutinize the interpretation of qualitative data, and a second interview can be
conducted to verify or refine interpretations.
Problems with the cultural appropriateness of specific words in items or of entire items or the
poor translatability of words often underlie item bias. For example, an item that asks for the
number of times a woman has been pregnant may be perceived as an indirect question about
the number of abortions she has had, notably in countries where abortion is illegal. Numerous
quantitative procedures have been developed to analyze item bias, such as structural equation
modeling, item response theory, the Mantel-Haenszel approach, and logistic regression.
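As an illustration of the Mantel-Haenszel approach mentioned above, the sketch below simulates binary item responses for two groups under a simple Rasch-type model (all parameters are invented for the example) and estimates the common odds ratio for one studied item, stratifying respondents on their rest score (total score excluding the studied item). A common odds ratio far from 1 flags uniform DIF.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, dif, n_items=10):
    """Simulate binary responses under a Rasch-type model.
    `dif` shifts the difficulty of item 0 for this group only (illustrative)."""
    theta = rng.normal(0.0, 1.0, n)               # latent trait levels
    b = np.linspace(-1.5, 1.5, n_items)           # item difficulties
    b[0] += dif
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
    return (rng.random((n, n_items)) < p).astype(int)

def mantel_haenszel_or(ref, foc, item=0):
    """Mantel-Haenszel common odds ratio for `item`, stratified on rest score."""
    rest_ref = ref.sum(1) - ref[:, item]
    rest_foc = foc.sum(1) - foc[:, item]
    num = den = 0.0
    for s in range(ref.shape[1]):                 # one 2x2 table per rest-score stratum
        r = ref[rest_ref == s, item]
        f = foc[rest_foc == s, item]
        n = len(r) + len(f)
        if len(r) == 0 or len(f) == 0:
            continue
        a, b_ = r.sum(), len(r) - r.sum()         # reference group: right / wrong
        c, d = f.sum(), len(f) - f.sum()          # focal group: right / wrong
        num += a * d / n
        den += b_ * c / n
    return num / den

ref = simulate(2000, dif=0.0)   # reference group: item 0 behaves normally
foc = simulate(2000, dif=1.0)   # focal group: item 0 is made harder (uniform DIF)
print(mantel_haenszel_or(ref, foc))  # well above 1: the item favors the reference group
```

Conditioning on the rest score is what separates item bias from a genuine trait difference: respondents are compared only with others at the same overall level, so any remaining group difference on the item is attributable to the item itself.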