
Group 3 | 12-Maxwell

Mesa, Allan
Paguia, Marion
Dionaldo, Marinelle
Hermosa, Samantha
Villamer, Louise
The learner must be able to:
• Adequately describe quantitative research designs, samples,
instruments used, interventions, data collection and analysis
• Be equipped with sampling techniques and procedures
• Gain knowledge in planning the initial processes in the
methodology
• Identify the possible instruments to be utilized
• Improve skills in writing the methodology
QUANTITATIVE RESEARCH DESIGNS
• Descriptive Design
• Correlational Design
• Quasi-Experimental Design
• Experimental Design
Descriptive Design
seeks to describe the current status
of a variable or phenomenon. The
researcher does not begin with a
hypothesis, but typically develops
one after the data is collected. Data
collection is mostly observational
in nature.
Correlational Design
explores the relationship
between variables using
statistical analyses. However, it
does not look for cause and
effect and therefore, is also
mostly observational in terms of
data collection.
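To make this concrete, a correlational analysis often reduces to
computing a correlation coefficient between paired measurements. The
sketch below is illustrative only; the variables study_hours and
exam_scores are hypothetical.

# Compute Pearson's r between two hypothetical variables measured
# on the same respondents. No causal claim is made.
from statistics import correlation  # available in Python 3.10+

study_hours = [2, 4, 5, 7, 8, 10, 12]
exam_scores = [55, 60, 62, 70, 75, 82, 90]

r = correlation(study_hours, exam_scores)
print(f"Pearson correlation coefficient: r = {r:.2f}")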
Quasi-Experimental Design
(often referred to as Causal-Comparative) seeks to establish a
cause-effect relationship between two or more variables. The
researcher does not assign groups and does not manipulate the
independent variable. Control groups are identified and exposed to
the variable. Results are compared with results from groups not
exposed to the variable.

Experimental Design
often called true experimentation, uses the scientific method to
establish a cause-effect relationship among a group of variables in a
research study.
Researchers make an effort to control for all variables except the one being
manipulated (the independent variable). The effects of the independent
variable on the dependent variable are collected and analyzed for a
relationship.
OTHER RESEARCH DESIGNS
Historical Research Design
• The purpose of this research is to collect, verify and synthesize
evidence from the past to establish facts that defend or refute your
hypothesis.
SAMPLING
Sampling is the selection of a subset of individuals from within a
statistical population to estimate characteristics of the whole
population.
Theory on sampling
1. Researchers want to gather information about a whole
group of people (the population).
2. Researchers can only observe a part of the population (the sample).
3. The findings from the sample are generalized, or
extended, back to the population.
WHY USE A SAMPLE?
1. Time
2. Cost
3. Coding
4. Availability
Considerations in selecting a sample
1. Select the unit of analysis.
2. Determine how many units need to be sampled.
In general, descriptive designs require at least 100 participants,
correlational designs require at least 30 participants, and
experimental, quasi-experimental, and causal-comparative designs
require at least 15 participants per group.
SAMPLING PROCEDURES
• Simple random
• Stratified random
• Proportionate random
• Purposive
• Multi-stage
Simple random
Every individual in the target population has an equal chance of
being part of the sample. This requires two steps (sketched in code
below):
1. Obtain a complete list of the population.
2. Randomly select individuals from that list for the sample.
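As an illustration only, the two steps above might look like the
following Python sketch; the population of student IDs and the sample
size of 50 are hypothetical.

# Simple random sampling: every listed member of the population has
# an equal chance of being selected.
import random

# Step 1: obtain a complete list of the population (hypothetical IDs).
population = [f"student_{i:03d}" for i in range(1, 501)]

# Step 2: randomly select individuals from that list for the sample.
random.seed(42)                            # fixed seed for reproducibility
sample = random.sample(population, k=50)   # drawn without replacement
print(sample[:5])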
Stratified random
The researcher divides the population into different subgroups. Each
subgroup is called a stratum (pl. strata). The researcher then selects
the same number of people from each stratum.
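A minimal sketch of equal-allocation stratified sampling, assuming a
hypothetical dictionary that maps each stratum (here, grade levels) to
its members and an arbitrary draw of 20 per stratum.

# Stratified random sampling with equal allocation: draw the same
# number of people from every stratum.
import random

random.seed(7)

# Hypothetical strata: grade level -> list of member IDs.
strata = {
    "grade_11": [f"g11_{i}" for i in range(200)],
    "grade_12": [f"g12_{i}" for i in range(150)],
}

PER_STRATUM = 20  # the same number is drawn from each stratum

sample = {name: random.sample(members, k=PER_STRATUM)
          for name, members in strata.items()}
print({name: len(picked) for name, picked in sample.items()})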
Proportionate random
The researcher divides the population into different subgroups. Each
subgroup is called a stratum (pl. strata). The researcher then selects
a certain number of people from each stratum, based on the proportion
of that stratum's size to the whole population.
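Building on the previous sketch, proportionate allocation sizes each
stratum's draw by its share of the population; the 10% sampling
fraction used here is an arbitrary assumption.

# Proportionate stratified sampling: each stratum contributes a
# number of participants proportional to its size.
import random

random.seed(7)

strata = {
    "grade_11": [f"g11_{i}" for i in range(200)],  # 200 of 350 members
    "grade_12": [f"g12_{i}" for i in range(150)],  # 150 of 350 members
}

SAMPLING_FRACTION = 0.10  # sample 10% of every stratum

sample = {name: random.sample(members,
                              k=round(len(members) * SAMPLING_FRACTION))
          for name, members in strata.items()}
# grade_11 contributes 20 participants, grade_12 contributes 15.
print({name: len(picked) for name, picked in sample.items()})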
Purposive
The researcher uses their expert judgment to select
participants that are representative of the population. To do
this, the researcher should consider factors that might
influence the population: perhaps socio-economic status,
intelligence, access to education, etc. Then the researcher
purposefully selects a sample that adequately represents the
target population on these variables.
MULTI-STAGE
Instruments
- are the data gathering devices that will be used in the study. An
instrument is a device for measuring a given phenomenon, such as a
paper and pencil test, a questionnaire, an interview, a research tool,
or a set of guidelines for observation.
Categories of Instruments

Researcher-Completed Instruments        Subject-Completed Instruments
Rating Scales                           Questionnaires
Interview schedules/guides              Self-checklists
Tally sheets                            Attitude scales
Flowcharts                              Personality inventories
Performance checklists                  Achievement/aptitude tests
Time and motion logs                    Projective devices
Observation forms                       Sociometric devices

Validity
- refers to the extent to which the
instrument measures what it intends to
measure and performs as it is designed to
perform.
Types of Validity
• Content Validity – the extent to which a research instrument
accurately measures all aspects of a construct.
• Construct Validity – the extent to which a research instrument or
tool measures the intended construct.
• Criterion Validity – the extent to which a research instrument is
related to other instruments that measure the same variables.
Reliability
- relates to the extent to which the instrument
is consistent. The instrument should be able to
obtain approximately the same response when
applied to respondents who are similarly situated.
Attributes of Reliability
• Internal Consistency/Homogeneity – the extent to which all the
items on a scale measure one construct (illustrated in the sketch
after this list).
• Stability or Test-Retest Correlation – the consistency of results
using an instrument with repeated testing.
• Equivalence – consistency among responses of multiple users of an
instrument, or among alternate forms of an instrument.
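The sketch below illustrates two of these attributes with
hypothetical questionnaire data: Cronbach's alpha for internal
consistency and a Pearson correlation for test-retest stability. The
scores are placeholders, not data from this module.

# Illustrative reliability checks on hypothetical score data.
from statistics import correlation, variance  # Python 3.10+ for correlation

# Internal consistency: Cronbach's alpha for a 3-item scale,
# answered by 5 hypothetical respondents (rows = respondents).
items = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]
k = len(items[0])                                   # number of items
item_vars = [variance(col) for col in zip(*items)]  # variance of each item
total_var = variance([sum(row) for row in items])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

# Stability (test-retest): correlate scores from two administrations
# of the same instrument to the same hypothetical respondents.
test_1 = [12, 15, 9, 18, 14]
test_2 = [13, 14, 10, 17, 15]
print(f"Test-retest correlation = {correlation(test_1, test_2):.2f}")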
Sources of Data
• Primary Sources – known as primary data or raw data. These are data
obtained from your own research: surveys, observations, and
interviews.
• Secondary Sources – known as secondary data. These are data
obtained from existing sources such as reports, books, journals,
documents, magazines, the internet, and more.
Describing an Intervention
• What Is Intervention Research?
• Intervention research is the systematic study of
purposive change strategies. It is characterized by
both the design and development of interventions.
Steps in Intervention Research
(Fraser, Richman, Galinsky, & Day, 2009)
• The first step in the intervention research process involves
defining the problem and developing a program theory.
• Step 2 is devoted to the design of the intervention. The
intervention may derive from new and creative work by practitioners.
• At Step 3, different components of the intervention are tested.
• The core idea of effectiveness studies is to estimate a treatment
effect when a program is implemented.
• Practitioners may conclude that some parts of an intervention are
useful whereas other parts are not.
Data Collection
• Data Collection is an important aspect of any type of
research study. Inaccurate data collection can
impact the results of a study and ultimately lead to
invalid results.
Steps in Data Collection
• Step 1: Identify issues and/or opportunities for collecting
data
• Step 2: Select issue(s) and/or opportunity(ies) and set goals
• Step 3: Plan an approach and methods
Data Collection
• Quantitative data collection methods rely on random sampling and
structured data collection instruments that fit diverse experiences
into predetermined response categories. They produce results that are
easy to summarize, compare, and generalize.
Typical quantitative data gathering strategies include:
• Experiments/clinical trials.
• Observing and recording well-defined events (e.g., counting the
number of patients waiting in emergency at specified times of the
day).
• Obtaining relevant data from management information systems.
• Administering surveys with closed-ended questions (e.g., face-to-
face and telephone interviews, questionnaires).
Data Collection
• Qualitative data collection methods play an important role
in impact evaluation by providing information useful to
understand the processes behind observed results and
assess changes in people’s perceptions of their well-being.
• The qualitative methods most commonly used in evaluation can be
classified into three broad categories:
• in-depth interviews
• observation methods
• document review
Steps in Data Collection
• Step 4: Collect data
• Step 5: Analyze and interpret data
• Step 6: Act on results
WRITING OF
METHODOLOGY
Participants
Describe the participants in your research study, including who they
are, how many there are, and how they are selected. Explain how the
samples were gathered, any randomization techniques and how the
samples were prepared.

Examples:
• The researchers randomly selected 100 children from elementary
schools of Cebu City
Materials
Describe the materials, measures, equipment, or stimuli used in your
research study. This may include testing instruments, technical
equipment, books, images or other materials used in the course of
your study.
Examples:
• Two stories from Sullivan et al.’s (1994) second-order false belief
attribution tasks were used to assess children’s understanding of
second-order beliefs.
• 10 kg cigarette butts collected from a certain region
• Oven, bitumen, cigarette butts, compressor
Design
Describe the research design used in your research study. Specify the
variables as well as the levels and measurement of these variables.
Explain whether your research study uses a within groups or
between-groups design. Discuss how the measurements were made
and what calculations were performed upon the raw data. Describe
the statistical techniques used upon the data.
Examples:
• The experiment used a 3x2 between-subjects design. The
independent variables were age and understanding of second-order
beliefs.
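For illustration only, a 3x2 between-subjects design like the one in
the example above is commonly analyzed with a two-way ANOVA. The
sketch below assumes pandas and statsmodels are available; the scores
are made-up placeholders, and the factor names simply mirror the
example's independent variables.

# Two-way ANOVA for a 3x2 between-subjects design (placeholder data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "age":    ["5", "5", "6", "6", "7", "7"] * 2,        # 3 levels
    "belief": ["pass"] * 6 + ["fail"] * 6,               # 2 levels
    "score":  [8, 9, 10, 11, 12, 13, 5, 6, 8, 8, 9, 11], # placeholder DV
})

# Fit a linear model with both factors and their interaction, then
# summarize the main effects and interaction in a Type II ANOVA table.
model = smf.ols("score ~ C(age) * C(belief)", data=df).fit()
print(anova_lm(model, typ=2))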
Procedure
The detail of the research procedures used in your research study
should be properly explained. Explain what your
participants/respondents do, how you collected the data, the order in
which steps occurred. Observe some ethical standards in gathering
your data.
Examples:
• A researcher interviewed children individually in their school in
one session that lasted 20 minutes on average. The researcher
explained to each child that he or she would be told two short
stories and that some questions would be asked after each story. All
sessions were videotaped so the data could later be coded.
TIPS IN WRITING METHODOLOGY
• Always write the method section in the past tense.
• Provide enough details that another researcher could replicate your
experiment, but focus on brevity. Avoid unnecessary detail that is
not relevant to the outcome of the experiment.
• Remember to use proper format.
• Take a rough draft of your method section to your teacher or
research adviser for additional assistance.
• Proofread your paper for typos, grammar problems, and spelling
errors.
