
UNIT II: Research Design and Measurement

Research Design Definition: Research design expresses both the structure of the research problem (the framework, organization, or configuration of the relationships among the variables of a study) and the plan of investigation used to obtain empirical evidence on those relationships. Essentials of Research Design: An activity- and time-based plan; a plan always based on the research question; a guide for selecting sources and types of information; a framework for specifying the relationships among the study's variables; a procedural outline for every research activity.
Types of Research Design: 1) Descriptive: detailed descriptions of specific situations using interviews, observations, document review, and numerical descriptions; e.g., case study, naturalistic observation, survey. 2) Correlational: quantitative analyses of the strength of relationships between two or more variables (a minimal sketch follows this list); e.g., case-control study, observational study. 3) Semi-experimental: comparing a group that receives a particular intervention with another group that is similar in characteristics but did not receive the intervention; e.g., field experiment, quasi-experiment. 4) Experimental: assigning an intervention to selected groups by random assignment; e.g., experiment with random assignment. 5) Meta-analytic: e.g., meta-analysis.
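To make the correlational idea concrete, here is a minimal sketch (in Python, with hypothetical study-hours and exam-score data) that computes Pearson's r, the usual index of the strength of a relationship between two variables:

```python
# A minimal sketch of a correlational analysis: Pearson's r between two
# observed variables. The data values are hypothetical illustrations.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_studied = [2, 4, 6, 8, 10]      # hypothetical variable A
exam_scores   = [55, 60, 70, 78, 85]  # hypothetical variable B
print(f"r = {pearson_r(hours_studied, exam_scores):.3f}")  # close to +1
```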
Descriptors of research design: 1) The degree to which the research question has been crystallized: exploratory study, formal study. 2) The method of data collection: monitoring, communication study. 3) The power of the researcher to produce effects in the variables under study: experimental, ex post facto. 4) The purpose of the study: reporting, descriptive, causal-explanatory, predictive. 5) The time dimension: cross-sectional, longitudinal. 6) The topic scope (breadth and depth) of the study: case study, statistical study. 7) The research environment: field setting, laboratory research, simulation. 8) The participants' perceptions of the research activity: actual routine, modified routine.
Definitions: Exploratory studies: tend toward loose structures with the objective
of discovering future research tasks. The immediate purpose of exploration is
usually to develop hypotheses or questions for further research. Formal Study: it
begins with a hypothesis or research question and involves precise procedures and
data source specifications. The goal of a formal research design is to test the
hypotheses or answer the research questions posed. Monitoring: includes studies
in which the researcher inspects the activities of a subject or the nature of some
material without attempting to elicit responses from anyone. Communication
study: the researcher questions the subjects and collects their responses by
personal or impersonal means. Experiment: the researcher attempts to control
and/or manipulate the variables in the study. Ex post facto design: investigators
have no control over the variables in the sense of being able to manipulate them.
They can only report what has happened or what is happening. Reporting study
provides a summation of data, often recasting data to achieve a deeper
understanding or to generate statistics for comparison. Causal-Explanatory: a study concerned with learning why, that is, how one variable produces changes in another. Causal-Predictive: attempts to predict an effect on one variable by manipulating another variable while holding all other variables constant. Cross-sectional studies: carried out once, representing a snapshot of one point in time. Longitudinal studies: repeated over an extended period. Statistical studies: designed for breadth rather than depth. They attempt to capture a population's characteristics by making inferences from a sample's characteristics.
Hypotheses are tested quantitatively. Generalizations about findings are
presented based on the representativeness of the sample and the validity of the
design. Case studies: place more emphasis on a full contextual analysis of fewer
events or conditions and their interrelations. Although hypotheses are often used,
the reliance on qualitative data makes support or rejection more difficult. An
emphasis on detail provides valuable insight for problem solving, evaluation, and
strategy. This detail is secured from multiple sources of information. It allows
evidence to be verified and avoids missing data. Designs also differ as to whether
they occur under actual environmental conditions (Field conditions) or under
staged or manipulated conditions (Laboratory conditions). Simulation: To
replicate the essence of a system or process. Participants' perceptual awareness: when people in a disguised study perceive that research is being conducted. Participants' perceptual awareness can influence the outcomes of the research in subtle ways or more dramatically.
Exploratory and Causal Research Design: Exploratory research design relies heavily on qualitative techniques; the four exploratory techniques are: a) Secondary data analysis: studying the studies made by others for their own purposes; b) Experience surveys: seeking interviewees' ideas about important issues or aspects of the subject and discovering what is important across the subject's range of knowledge; c) Focus groups: a group of people and a moderator meet, and the moderator uses group-dynamics principles to focus or guide the group in an exchange of ideas, feelings, and experiences on a specific topic; d) Two-stage designs: (i) clearly defining the research question and (ii) developing the research design.
Causal Research Design: The essential element of causation is that A produces B or A forces B to occur. The ideal standard of causation requires that one variable always causes another and that no other variable has the same causal effect.
Method of Agreement (John Stuart Mill): When two or more cases of a given phenomenon have one and only one condition in common, then that condition may be regarded as the cause (or effect) of the phenomenon. Method of Difference (John Stuart Mill): If there are two or more cases in which observations can be made, and variable C occurs when observation Z is made but does not occur when observation Z is not made, then it can be asserted that there is a causal relationship between C and Z. Causal Hypothesis Testing: 1. Covariation between A and B; 2. Time order of events moving in the hypothesized direction; 3. No other possible causes of B. Random Assignment: every subject must have an equal chance of being placed in each group, so that all factors other than the independent variable are held constant on average across groups (see the sketch after this list). Relationships between two variables: 1) Symmetrical: two variables fluctuate together; 2) Reciprocal: two variables mutually influence or reinforce each other; 3) Asymmetrical: changes in one variable (IV) are responsible for changes in another variable (DV). Types of asymmetrical relationships: Stimulus-Response; Property-Disposition; Disposition-Behavior; Property-Behavior.
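A minimal sketch of random assignment, using Python's standard library and hypothetical subject labels; shuffling gives every subject an equal chance of landing in either group:

```python
# A minimal sketch of random assignment: each subject has an equal chance
# of landing in any group, so extraneous factors are balanced on average.
# Subject names and group labels are hypothetical.
import random

subjects = [f"S{i:02d}" for i in range(1, 13)]  # 12 hypothetical subjects
random.shuffle(subjects)                        # equal chance for every subject

groups = {"treatment": subjects[:6], "control": subjects[6:]}
for name, members in groups.items():
    print(name, members)
```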
Descriptive and Experimental Design: A descriptive study is a more formalized study, and its objectives are: 1. Descriptions of phenomena or characteristics associated with a subject population (the who, what, when, where, and how of a topic). 2. Estimates of the proportions of a population that have these characteristics (see the sketch below). 3. Discovery of associations among different variables.
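As a worked illustration of objective 2, a minimal sketch estimating a population proportion from a sample, with a normal-approximation 95% interval; the survey counts are hypothetical:

```python
# A minimal sketch of estimating a population proportion from a sample.
# The counts below are hypothetical.
from math import sqrt

n = 400          # hypothetical sample size
successes = 252  # hypothetical respondents with the characteristic

p_hat = successes / n
se = sqrt(p_hat * (1 - p_hat) / n)           # standard error of the proportion
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimate = {p_hat:.2%}, 95% CI = [{lo:.2%}, {hi:.2%}]")
```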
Experimental Design: Experiments are studies involving intervention by the
researcher beyond that required for measurement. The usual intervention is to
manipulate some variable in a setting and observe how it affects the subjects being
studied (e.g., people or physical entities). The researcher manipulates the
independent or explanatory variable and then observes whether the hypothesized
dependent variable is affected by the intervention. Advantages: 1) the researcher's ability to manipulate the independent variable; 2) contamination from extraneous variables can be controlled; 3) the convenience and cost of experimentation are superior to other methods; 4) replication (repeating an experiment with different subject groups and conditions). Disadvantages: 1) the artificiality of the laboratory; 2) generalization from nonprobability samples; 3) costs can sometimes outrun the budget; 4) it is only effectively targeted at problems of the present or immediate future; 5) the study may raise ethical concerns. Steps for
conducting an experiment: 1. Select relevant variables. 2. Specify the treatment
levels. 3. Control the experimental environment. 4. Choose the experimental
design. 5. Select and assign the subjects. 6. Pilot test, revise, and test. 7. Analyze
the data.
Different types of experimental design: 1) Repeated measures design (or within-subjects design): requires one group of samples or participants; this same group is exposed to all of the levels of the independent variable of interest. 2) Independent samples design (or between-subjects design): the samples or participants are assigned to equally sized groups, and each group receives a different treatment. 3) Matched pairs design: the samples or participants are matched into pairs that are most similar to each other, and each member of a pair is randomly assigned to a different experimental condition. 4) Factorial design: used where there are several independent variables and the researcher is interested in their combined effect on the dependent variable (a sketch follows this list).
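A minimal sketch of a 2x2 factorial design in Python: itertools.product enumerates every combination of two hypothetical independent variables, and subjects are randomly spread across the resulting cells:

```python
# A minimal sketch of a 2x2 factorial design: every combination of two
# independent variables becomes one experimental condition, and subjects
# are randomly assigned across all of them. Factor names are hypothetical.
import itertools
import random

dose = ["low", "high"]           # hypothetical IV 1
schedule = ["daily", "weekly"]   # hypothetical IV 2
conditions = list(itertools.product(dose, schedule))  # 4 cells

subjects = [f"S{i:02d}" for i in range(1, 13)]
random.shuffle(subjects)
# deal the shuffled subjects round-robin across the four cells
assignment = {cond: subjects[i::len(conditions)] for i, cond in enumerate(conditions)}
for cond, members in assignment.items():
    print(cond, members)
```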
The many experimental designs vary widely in their power to control contamination of the relationship between the independent and dependent variables. The most widely accepted designs are based on this characteristic of control: (1) preexperiments: after-only study, one-group pretest-posttest design, static group comparison; (2) true experiments: pretest-posttest control group design, posttest-only control group design; (3) field experiments (quasi- or semi-experiments): nonequivalent control group design, separate-sample pretest-posttest design, group time series design (a sketch of the pretest-posttest control group analysis follows).
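As an illustration of how a pretest-posttest control group design can be analyzed, a minimal sketch (with hypothetical scores) that estimates the treatment effect as the treatment group's pre-to-post change minus the control group's change:

```python
# A minimal sketch of analyzing a pretest-posttest control group design:
# the treatment effect is estimated as the treatment group's pre-to-post
# change minus the control group's change. All scores are hypothetical.
def mean(xs):
    return sum(xs) / len(xs)

treat_pre, treat_post = [50, 52, 48, 51], [63, 66, 60, 65]
ctrl_pre,  ctrl_post  = [49, 51, 50, 52], [52, 54, 53, 55]

treat_change = mean(treat_post) - mean(treat_pre)
ctrl_change  = mean(ctrl_post) - mean(ctrl_pre)
print(f"estimated treatment effect = {treat_change - ctrl_change:.2f}")
```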
Validity of findings: a mechanism to check whether results are true and whether a measure accomplishes its claims. Internal validity: checking whether the conclusions we draw about a demonstrated experimental relationship truly imply cause. Threats to internal validity: history, maturation, testing, instrumentation, selection, statistical regression, experimental mortality. External validity: does an observed causal relationship generalize across persons, settings, and times? Threats to external validity: reactivity of testing on X, interaction of selection and X, other reactive factors.
Variables in Research: Refer to the Unit I cheatsheet.
Measurement and scaling: Measurement in research consists of assigning numbers to empirical events, objects or properties, or activities in compliance with a set of rules. 3-Step Process of Measurement: 1. Selecting observable empirical events. 2. Developing a set of mapping rules: a scheme for assigning numbers or symbols to represent aspects of the event being measured. 3. Applying the mapping rule(s) to each observation of that event. Variables being studied in research may be classified as objects or as properties. Objects include the concepts of ordinary experience, such as tangible items like furniture. Properties are the characteristics of the object. Mapping rule assumptions for measurement scales: 1. Numbers are used to classify, group, or sort responses. 2. Numbers are ordered. 3. Differences between numbers are ordered. 4. The number series has a unique origin indicated by the number zero. Each successive scale type satisfies one more of these assumptions than the one before it: 1) Nominal: classification only, with no order, distance, or natural origin (e.g., gender). 2) Ordinal: classification and order, but no distance or natural origin (e.g., rice variety). 3) Interval: classification, order, and distance, but no natural origin (e.g., temperature). 4) Ratio: classification, order, distance, and natural origin (e.g., age in years). (A mapping-rule sketch follows below.) Construction of Measurement: based on the following questions: 1) Is the distribution expected to be normal? 2) What is my expected sample size? 3) How many groups will be compared? 4) Are the groups related or independent?
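A minimal sketch of steps 2 and 3 of the measurement process: a hypothetical mapping rule codes Likert labels as numbers, producing an ordinal scale (the codes carry order but no guaranteed equal distance):

```python
# A minimal sketch of a mapping rule: assigning numbers to observed
# responses. The Likert labels and codes are hypothetical; the codes
# carry order (ordinal) but not a guaranteed equal distance between them.
mapping_rule = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                "agree": 4, "strongly agree": 5}

responses = ["agree", "neutral", "strongly agree", "agree"]
codes = [mapping_rule[r] for r in responses]  # step 3: apply the rule
print(codes)  # [4, 3, 5, 4]
```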
Sources of Error in Measurement: 1) the respondent; 2) situational factors; 3) the measurer; 4) the instrument. Characteristics of good measurement: validity, reliability, practicality. Validity and reliability of an instrument: Validity: content; criterion-related (concurrent, predictive); construct. Reliability: stability, equivalence, internal consistency (see the sketch below).
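One common internal-consistency estimate is Cronbach's alpha; a minimal sketch with hypothetical item scores, using the formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals):

```python
# A minimal sketch of one common internal-consistency estimate,
# Cronbach's alpha. The item scores are hypothetical.
from statistics import pvariance

items = [  # rows = items, columns = respondents (hypothetical 3-item scale)
    [4, 3, 5, 4, 2],
    [4, 2, 5, 3, 3],
    [5, 3, 4, 4, 2],
]
k = len(items)
totals = [sum(col) for col in zip(*items)]         # each respondent's total
item_var = sum(pvariance(item) for item in items)  # sum of item variances
alpha = k / (k - 1) * (1 - item_var / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.3f}")
```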
