
Module 3

Chapter 6: Key Principles of Quantitative Designs (159-166)


What Is Validity?
Refers to the ability to accept results as logical, reasonable, and justifiable based on the evidence
presented
o Truth or accuracy
Internal Validity
o The degree to which one can conclude that it was the IV (independent variable) not
extraneous variables that produced the change in the DV
o 7 Types:
Selection Bias
When the change in the DV is result of differences in characteristics of
subjects BEFORE they entered the study
Can be minimized by random assignment
History
When the DV may have been influenced by some event other than the IV that
occurred during the study
Can be fixed by including a control group that was exposed to the historic
event (Such as PSA or TV ads) but that did not receive the intervention
Maturation
Subjects may change over course of study either by growing or by becoming
more mature
Affects the DV when the study occurs over time
Control group helps limit this threat
Testing
Pretest influences the way subjects respond to the posttest
Reflect memorization not current knowledge/beliefs
Instrumentation
Changes made in the way variables are measured
Ex: BP originally measured with a manual cuff but later measured with an
automated machine
Or when data collected by observation/interview use different data collectors
To control this: all data collectors are comprehensively trained
Researcher also evaluate interrater reliability to determine consistency among
individuals collecting data
Mortality
Loss of subjects before study is complete
Threat if there is a difference in characteristics of those who dropped out &
those who completed study
Increases longer study lasts
In health-related research emotions influence drop out as well as physical
well-being
Attrition rate = dropout rate
If attrition is high the researcher should provide an analysis & explanation as to why
Statistical conclusion validity
The confidence that results of the statistical analysis accurately reflect true
relationship between the IV & DV
Does not hold when researchers make a Type II error
Type II occur when researchers inaccurately conclude that there is no
relationship between the IV & DV when actual relationship does exist
o More likely to occur in small sample size
In evaluating Statistical Conclusion Validity consider the info presented in
methods section that details instrument reliability
o Low reliability of measures is factor that can interfere in researcher
ability to draw conclusions

2015 Jones & Bartlett Learning

Can control low reliability of measures by using well established & designed
instruments

External Validity
o Refers to the degree to which the results of the study can be generalized to other subjects,
settings, and times
o 5 Threats
Construct validity
Determines whether instruments are measuring the theoretical cause or effect
concepts that are intended to be measured
Can lead to bias or unintentional confounding of results
Bias: systematic error in subject selection, measurement of variables or
analysis
o Ex. Studying effect of anger on blood pressure, but instrument used to
measure anger also measures depression
o Methods section describes how validity of the instrument was established
Confounding means possible error in interpretation of results
o Can occur when experimental controls do not allow the researcher to
eliminate possible alternative explanations for the relationship
between IV & DV. Two types:
Subject reactivity
Subjects are influenced by participating in the study
Changes noted in DV result of subject reactivity
Known as Hawthorne effect
Behavior of subjects may be affected by personal values,
desire to please experimenter, provide the results
experimenter wants, & congruence w/ personal interests
& goals
Experimenter reactivity
When experimenters have expected or desired outcomes
they may inadvertently affect how interventions are
conducted & how they interact w/subjects
Double-blind: controls for threats of reactivity. Neither subjects nor individuals
administering treatments know whether subjects are receiving experimental
interventions or standard of care (placebo pills)
Effects of Selection
Must be representative of the entire population
Effects of selection limit how the study can be generalized
EX. Researcher interviewing mothers. No child care provided & interviews are
during day. So women who work during the day aren't represented. So can
this study be generalized to all women?
Interaction of treatment & selection of subjects
Requires consideration of difference between the accessible population &
target population of interest
Can a study done on one sample be generalized to the population as a whole
o Ex: Condom use study: target population = all sexually active teens.
Accessible population = group of teens researcher could obtain from
Midwestern suburban high school. Can this be generalized to all teens
in the urban west or rural south?
Interaction of treatment and Setting
Concerned with whether results from an intervention conducted in one setting
can be generalized to another setting where the same intervention is used
o EX. Condom use study: can it be generalized to all teens in high school
setting if sample was teens waiting in a family planning clinic
Interaction of treatment & History
Concerned with how the effects from the intervention might be changed by
events occurring in the past or future

Ex: Researcher finds an intervention increases condom use; would that be
generalized in the future if a cure for HIV were found?
Chapter 10: Collecting Evidence (263-282)
Data Collection: Planning and Piloting
Planning for Data Collection
o Plan from consent of subjects obtained to actual completion of data collection period
o Timeline & comprehensive budget (Includes: salaries, mileage, meals, data collection
materials & instrumentation, recruitment fees, etc) included
o Begin by determining data type needed (when, how, who, what, instruments used)
o Factors that affect data collection: availability of instruments, mobile devices, data collection
needed, sample size, personnel needed
o Budget considerations are key
Piloting Data Collection Methods
o Pilot study: scaled-back version of the data collection method
o Helpful in evaluating instruments, devices, and processes to find unexpected problems
o Confirms feasibility & allows revisions before the actual study
Collecting Quantitative Data
Collecting Numbers
o Quantitative methods used to test stated hypotheses & call for researchers to use formal,
objective, and systematic procedures & instruments that produce numerical data
o Highest level of evidence on which clinicians can base EBP decisions
o Meta-analyses = strongest level of evidence vs. studies based on case reports or opinions
are lowest level
o Methods of data collection for quantitative research:
Questionnaires
Inexpensive way to gather numerical data from potentially large # of
respondents
Can be expensive in terms of design time & interpretation
Formatting & length are important
o Only essential questions
o Balance of positive & negative questions to decrease biased responses
Each subject has research ID # for use on questionnaire
Variety of ways to distribute questionnaire
Confidentiality is necessary
Return rates increase w/cover letter or brief description of purpose
More likely to participate if perceive benefit to self or society
Low response rates can lead to bias b/c the sample is not representative
Observation
Quantify an explicit feature of the phenomenon under observation
Researchers = objective observers & follow systematic method
Establishing detailed protocol is extremely important if researchers use
research assistants
Scales
Used to assign a numeric value or score along a continuum
Many developed to measure social & psychological concepts
Ideally better to use previously tested scales
Can measure single or multidimensional concept
Likert scales are commonly used (7 points on continuum)
Visual Analog Scale: measure the intensity of sensations & feelings (pain
scale)
Physiological measures
Wide range of biological, chemical, and microbiological data
o Biological: BP, CO, Weight
o Chemical: electrolytes, hormones, & cholesterol
o Micro: bacterial counts


Accessible in most healthcare settings (easy collection w/minimal or no cost)


Researchers specify measurement protocols
Issues in Quantitative Data Collection
o Must have written plan outlining process for data collection for additional data collectors that
are employed
o Research assistants trained to collect data in consistent manner
o Interrater reliability must be established when more than one person is involved
Interrater reliability: extent to which 2+ individual raters agree
Monitored periodically throughout study to increase degree of confidence in data
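As a rough sketch of how interrater reliability can be quantified, the made-up ratings below compare two data collectors using percent agreement and Cohen's kappa, a common chance-corrected agreement statistic:

```python
from collections import Counter

# Hypothetical ratings: two trained data collectors code the same 10
# observations into categories "A" or "B".
rater1 = ["A", "A", "B", "A", "B", "B", "A", "A", "B", "A"]
rater2 = ["A", "A", "B", "B", "B", "B", "A", "A", "A", "A"]

n = len(rater1)
observed_agreement = sum(a == b for a, b in zip(rater1, rater2)) / n

# Cohen's kappa corrects agreement for chance: kappa = (Po - Pe) / (1 - Pe),
# where Pe is the agreement expected if both raters coded at random.
p1 = Counter(rater1)
p2 = Counter(rater2)
expected_agreement = sum((p1[c] / n) * (p2[c] / n)
                         for c in set(rater1) | set(rater2))
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
```

Monitoring these values periodically during a study is one way to show that multiple data collectors remain consistent.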
o Data collection plans detail a time frame
Usually takes twice as long as planned due to:
Slow enrollment of consented subjects
Heavy workloads
Staff turnover
o Plan should include strategies to manage attrition of subjects as result of death, dropout, or
relocation
o Plan should address decisions about missing data (ex. subjects refusing to respond to a
particular question)
o Studies often funded by federal grant, state grant, or private foundations
Budgeting is important
Time extensions may be requested
But if budget is cut research may be compromised
Levels of Measurement
o Measurement: process of assigning numbers using a set of rules
o Four categories to describe measurements (Aka. Levels of measurement):
Nominal
Weakest level of measurement
To classify or categorize variables
AKA Categorical data
Numbers assigned are just labels & don't indicate any value
EX: yes = 1 & no = 2
Often used in questionnaires to record fixed responses like gender, race, and
diagnosis
Dichotomous: only 2 possible fixed responses (true/false; yes/no;
male/female)
Ordinal
Second lowest level of measurement
Continuum of numeric values where small numbers represent lower levels on
continuum
Although values are ordered & ranked, intervals are not equal
Ex: In a race 1st, 2nd, & 3rd places don't have equal amounts of time between
them
Questionnaires and scales use ordinal measurements
Interval
3rd level
Uses continuum of numeric values
AKA continuous data
Values have meaning & intervals are equal
On this scale zero point is arbitrary & not absolute (does not indicate true
absence of something)
Ex: Celsius scale; 0 does not mean absence of temp
Other examples: intelligence measures, personality measures & manual muscle
testing
Ratio
Another way to collect continuous data


Highest level of measurement


Uses continuum of numeric values w/equal intervals & zero that is absolute
Age, weight, height, and income
VAS also provides ratio measurement, along with other biochemical &
physiological measures
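A quick illustration of why the zero point matters: with interval data (Celsius) ratios are not meaningful, while with ratio data (weight) they are. The conversion factors are standard; the comparison itself is just a sketch:

```python
# Interval data: Celsius zero is arbitrary, so ratios are not meaningful.
# 20 C is not "twice as hot" as 10 C: convert both to Fahrenheit and the
# apparent 2:1 ratio disappears.
def c_to_f(c):
    return c * 9 / 5 + 32

celsius_ratio = 20 / 10                      # looks like "twice as hot"
fahrenheit_ratio = c_to_f(20) / c_to_f(10)   # 68 / 50, not 2:1

# Ratio data: an absolute zero makes ratios meaningful regardless of unit.
# 20 kg really is twice 10 kg, in pounds as well as kilograms.
kg_ratio = 20 / 10
lb_ratio = (20 * 2.2046) / (10 * 2.2046)
```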

Validity and Reliability


Measurement Error
o Researchers spend a significant amount of time designing surveys & instruments to reduce
measurement error so they know measurements provide true reflection of sample
characteristics
o Goal is observed measurement to be as close to true measurement as possible
O=T+E
O = observed score (actual number obtained from instrument)
T = True score (actual amount of characteristic)
O = T means we have a perfect instrument, but that's never the case
E = Error & always present in measurements
o Error can either be random or systematic
Random error: occurs by chance
Difficult for researchers to control b/c it results from transient factors
Can be attributed to subject factors, instrumentation variations or
environmental factors
Ex: accidentally filling in wrong bubble on test (observed score/test score
indicates you didn't know content, but true score would indicate otherwise)
Systematic error: Same kind of error occurs repeatedly
Aka: consistent error
Can result from: subject, instrumentation & environmental factors
EX: measuring temp after an intervention, the researcher assumes it's measuring
accurately, but the thermometer hasn't been calibrated. B/c every temp measure
is affected it is systematic error
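The O = T + E idea and the random/systematic distinction can be sketched in a short simulation (the temperature values and error sizes are made-up):

```python
import random

random.seed(42)

TRUE_TEMP = 37.0   # T: the true score

def measure(true_value, bias=0.0, error_sd=0.2):
    """O = T + E: the observed score is the true score plus error."""
    return true_value + bias + random.gauss(0, error_sd)

# Random error varies by chance, so it averages out over many measurements.
random_only = [measure(TRUE_TEMP) for _ in range(1000)]

# Systematic error: an uncalibrated thermometer adds the SAME bias to
# every reading, so averaging does NOT remove it.
uncalibrated = [measure(TRUE_TEMP, bias=0.5) for _ in range(1000)]

mean_random = sum(random_only) / len(random_only)
mean_biased = sum(uncalibrated) / len(uncalibrated)
```

The mean of the random-error readings converges on T, while the biased readings stay offset by the systematic error.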
o How do researchers know instruments are useful if error occurs in all measurements?
Methodological studies to test instruments
Psychometrics refers to the development of measures for psychological attributes
o Validity
When selecting instrument researchers must ask if it is valid
Validity: the degree an instrument measures what it's supposed to measure
Three types of validity: content, criterion-related, & construct
Content validity: kind of validity to ensure that the instrument measures the
concept; researchers must clearly define concept; tested in 2 ways (face &
content)
o Face validity: a test for content validity in which researchers ask
colleagues or subjects to examine an instrument & state whether it
appears to measure the concept
Less desirable than content validity b/c it uses an intuitive approach
o Content validity: researchers give an instrument to a panel consisting
of experts on the concept & the experts judge the instrument by rating
each item for the degree to which it reflects the concept being
measured
High rated = kept & low rated = altered/eliminated
Criterion-related validity: degree to which the observed score & true score are
related. Tested two ways (concurrent & predictive)
o Concurrent validity: tested when researchers simultaneously
administer 2 different instruments measuring same concept
Usually a new instrument compared to an already valid
instrument
Use correlations to compare scores of two instruments (high
correlations indicate agreement between the two)

o Predictive validity: refers to whether a current score is correlated with a
score obtained in the future
Ex: nursing students complete instrument measuring critical
thinking today & again 1 month from now, if instrument has
good criterion-related validity, scores will be correlated
Construct validity: focuses on theory. Constructs are theoretical concepts that
are tested empirically. 7 ways:
o Hypothesis testing: use theories to make predictions about concept
being measured
Ex. Predicting pain scores will be highest on surgery day &
gradually decrease. Construct validity of childrens pain scale is
supported b/c it coincides with predicted pain pattern
o Convergent testing: Researchers use 2+ instruments to measure same
theoretical component
EX: Comparing Oucher to VAS pain scale, pain ratings were
highly correlated & est. convergent validity
o Divergent testing: involves comparing scores from 2+ instruments
that measure different theoretical constructs
EX: researchers compare depression & happiness. Negative
correlation supports construct validity
o Multitrait-multimethod testing: Convergent & divergent combined
Helpful in reducing systematic error
o Known group testing: Instruments administered to individuals known to
be high or low on characteristic being measured
Researchers expect significantly different scores in the low &
high group
Oucher pain score example: tested known groups by comparing
scores of children with extensive surgeries to those with minor
procedures
o Factor analysis: most concepts have more than 1 dimension.
Dimensions are known as factors
A statistical approach to identify questions that group around
different factors
Items that group together have high correlations
Items that dont fit are altered or eliminated
B/c factor analyses require complex, simultaneous
computations of correlations: computers are needed

Reliability
o Reliability: instruments obtain consistent measurements over time
o Considered in relation to validity
o Instrument can be reliable but not valid (ex. weighing yourself 10 times in a row on a
bathroom scale this morning, the scale shows the same weight each time. The scale is
reliable, but if you are anxious about your weight it doesn't measure anxiety. Not a valid
instrument to measure anxiety)
o Estimates of reliability presented in form of correlation coefficient
+1 = perfect reliability
0 = absence of reliability
>.8 are acceptable for well-established instruments
>.7 are accepted for newly developed instruments
o Researchers interested in 3 attributes of reliability:
Stability: when the same scores are obtained w/repeated measures under the same
circumstances
Equivalence: agreement between alternate forms or alternate raters
Internal consistency (aka homogeneity): exists when all items on questionnaire
measure the same concepts
o 7 ways are commonly used to test instruments for reliability


Test-retest reliability: New instrument is given at two different times under the same
conditions. Scores are correlated. Strong positive correlations indicate good reliability.
Determines stability
Parallel or Alternate: New instrument is given in two different versions. Scores are
correlated. Strong positive correlations indicate good reliability.
Stability & Equivalence determined
Interrater reliability: Two observers measure the same event. Scores are correlated.
Strong positive correlations indicate good reliability.
Determines equivalence
Split-half: The items are divided to form two instruments. Both instruments are given
and the halves are compared using the Spearman-Brown formula
Determines Internal consistency
Item to total: Each item is correlated to the total score. Reliable items have strong
correlations with the total score.
Determines Internal consistency
Kuder-Richardson coefficient: Used with dichotomous items. A computer is used to
simultaneously compare all items
Determines Internal consistency
Cronbach's alpha: Used with interval or ratio items. A computer is used to
simultaneously compare all items
Determines internal consistency
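As an example of an internal-consistency statistic, Cronbach's alpha can be computed by hand from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the questionnaire scores below are made-up:

```python
from statistics import variance

# Hypothetical questionnaire data: 5 subjects answer 4 Likert-type items.
# Rows = subjects, columns = items.
scores = [
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
]

k = len(scores[0])                       # number of items
items = list(zip(*scores))               # transpose: one tuple per item
item_variances = [variance(item) for item in items]
total_scores = [sum(row) for row in scores]

# Cronbach's alpha: closer to 1 means the items measure the same concept.
alpha = (k / (k - 1)) * (1 - sum(item_variances) / variance(total_scores))
```

Here alpha is well above the .8 threshold noted above for well-established instruments, though with made-up data that is only an illustration.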
Appraising Data Collection in Quantitative Studies
o When reading methods determine that each instrument is described & the reliability &
validity are reported
o Level of measurement should be noted for each variable measured
o Appraise whether instruments represent the concepts & variables being operationalized
o Know details of pilot if done
o Many quantitative studies fall short of significant findings if there are holes in methods
section
o Study may be flawed if they lack validity & reliability

Module 4
Chapter 11: Using Samples to Provide Evidence
Fundamentals of Sampling
o Learning the Terms
Population: the entire group of elements that meet study inclusion criteria
Elements: basic unit of the population such as individuals, events, experiences, or
behaviors
Subjects: individuals who participate in studies, typically studies using quantitative
design
Sampling plan: plan to determine how the sample will be selected and recruited
Sample: select group of subjects that is representative of all eligible subjects
Target population: all elements that meet the study inclusion criteria
Accessible population: the group of elements to which the researcher has reasonable
access
o The Hallmark of a Sample: Representativeness
Representativeness: the degree to which elements in the sample are like elements in
the population
AKA: external validity
Important to ensure the results of a study can be generalized to the entire population
Greater concern in quantitative than qualitative
Inclusion criteria: characteristics that each element must possess to be included in
the sample
Exclusion criteria: characteristics of elements that will not be included in the sample
Use of exclusion criteria may decrease the risk of certain characteristics
influencing the results of a study


Must be clearly delineated, and have valid explanations as to why


Sampling error: occurs when subjects in a study do not accurately represent the
population
Commonly results from a small sample size
Sampling bias: a threat to external validity when a sample includes elements that
over or underrepresent characteristics when compared to elements in the target
population
Sampling Methods
o Probability Sampling Methods
Probability sampling: sampling method in which elements in the accessible
population have an equal chance of being selected for inclusion in the study
3 conditions:
1) accessible population must be identifiable
2) researcher must create a sampling frame; a list of all possible elements in
the accessible population
3) random selection must be used to select elements from the sampling frame
o Reduces threat of selection bias
4 types of probability sampling:
Simple random: randomly selecting elements from the accessible population
o Widely considered the best method to obtain a representative sample
Stratified random: selecting elements from a population that has been divided
into groups or strata
o Frequently based on what is already known about the phenomenon
being studied
o Strata must be mutually exclusive, and have a sufficient number of
elements
o Advantage: sampling error can be reduced by selecting strata that are
known to represent the population
o Decrease data collection time and costs
Cluster sampling (multistaging): random sampling method of selecting
elements from larger to smaller clusters or subsets of the population
o More effective for large populations
o Select a percentage of each group, not a set number to prevent over or
undersampling
Systematic sampling: every kth element is selected from a numbered list of
all elements in the selected population; the starting point is randomly selected
o Sampling interval (k): interval between each element when using
systematic sampling; remains constant between each element
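A sketch of three of the probability sampling methods above, using a hypothetical numbered sampling frame of 100 elements:

```python
import random

random.seed(7)

# Hypothetical sampling frame: a numbered list of all 100 eligible elements.
sampling_frame = list(range(1, 101))

# Simple random sampling: every element has an equal chance of selection.
simple_random = random.sample(sampling_frame, k=10)

# Systematic sampling: every kth element from a random starting point.
interval = len(sampling_frame) // 10          # sampling interval k
start = random.randrange(interval)
systematic = sampling_frame[start::interval]

# Stratified random sampling: sample the same percentage of each stratum
# so no group is over- or under-represented.
strata = {"unit_a": sampling_frame[:60], "unit_b": sampling_frame[60:]}
stratified = [e for group in strata.values()
              for e in random.sample(group, k=len(group) // 10)]
```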


o Nonprobability Sampling Methods


Nonprobability sampling: sampling methods that do not require random selection of
elements
Less likely to be representative of the population
Used when sampling frame cannot be determined
4 types of non-probability sampling

Sampling Size: Does It Matter?


o Determining Sample Size
o Recruitment and Retention of Subjects
o Considerations for EBP
Keeping It Ethical
Chapter 13: What Do the Quantitative Data Mean? (367-373)
Reducing Error When Deciding About Hypotheses
o Type I and Type II Errors
Type I: rejecting null when it should have been accepted; saying a relationship exists
when it does not


TYPE I IS WORSE! It does more damage to say that a treatment works when it
doesn't
When interventions are complex, expensive, invasive, or have many side
effects, researchers are usually less willing to risk a Type I error
When interventions are simple, inexpensive, or noninvasive, the researchers
tolerance for type I errors increases
Type II: accepting the null when it should have been rejected; saying there is not a
relationship when there is

Level of Significance: Adjusting the Risk of Making Type I and Type II Errors
Alpha (α): probability of a Type I error
.05 is the most commonly used in nursing research; 5 times out of 100 the
researcher would make a type I error
Adjusting the alpha level is how researchers change the probability of a Type I
and therefore Type II error (they have an inverse relationship)
Designated at the tail end of a distribution
Beta (β): probability of a Type II error
Type I has an inverse relationship to Type II, and vice versa
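The meaning of alpha = .05 can be checked by simulation: when the null is actually true, about 5 times out of 100 the test still rejects it. A minimal sketch using a two-group z-test with known variance and made-up data:

```python
import math
import random
from statistics import mean

random.seed(1)

N = 30                 # subjects per group
CRITICAL_Z = 1.96      # two-tailed cutoff for alpha = .05

def false_positive():
    """Run one 'experiment' where the null is TRUE (both groups come from
    the same population) and report whether we wrongly reject it."""
    g1 = [random.gauss(0, 1) for _ in range(N)]
    g2 = [random.gauss(0, 1) for _ in range(N)]
    z = (mean(g1) - mean(g2)) / math.sqrt(2 / N)   # known sigma = 1
    return abs(z) > CRITICAL_Z

# Over many repetitions the Type I error rate converges toward alpha = .05.
type1_rate = sum(false_positive() for _ in range(5000)) / 5000
```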

Module 5
Chapter 12: Other Sources of Evidence (329)
Meta-analysis
A scholarly paper that combines results of studies, both published & unpublished into a measurable
format and statistically estimates the effects of proposed interventions
Can be conducted if body of reports is large and homogeneous
A statistical procedure that involves quantitatively pooling data from a group of independent
studies that have studied the same or similar clinical problems using the same or similar
research methods
A pooled estimate of effect (called effect size (ES)) & a confidence interval (CI) are calculated
ES: estimates the strength of the relationship between two variables
CI: shows reliability of the estimate, in this case, effect size
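A minimal sketch of how a pooled effect size and its confidence interval are computed under a fixed-effect (inverse-variance) model; the study effect sizes and standard errors below are made-up:

```python
import math

# Hypothetical results from 3 independent studies: (effect size, standard error).
studies = [(0.40, 0.15), (0.55, 0.20), (0.30, 0.10)]

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled_es = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled estimate.
ci = (pooled_es - 1.96 * pooled_se, pooled_es + 1.96 * pooled_se)
```

If the CI excludes zero (as it does here), the pooled effect is statistically significant at the .05 level.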
Chapter 13: What Do the Quantitative Data Mean? (344-345)
Using Statistics to Describe the Sample
o Statistics: the branch of mathematics that collects, analyzes, interprets, and presents
numerical data in terms of samples and populations
o statistics: the numerical outcomes and probabilities derived from calculations on raw data
o descriptive statistics: collection and presentation of data that explain characteristics of
variables found in the sample
o inferential statistics: analysis of data as the basis for prediction related to the phenomenon
of interest
o population parameters: characteristics of a population that are inferred from the
characteristics of a sample
o sample statistics: numerical data describing the characteristics of a sample
o univariate analysis: the use of statistical tests to provide information about one variable
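The sample statistics defined above map directly onto Python's statistics module; the ages below are made-up illustration data:

```python
from statistics import mean, median, stdev

# Hypothetical sample data: ages of 8 subjects (N = 8).
ages = [34, 29, 41, 38, 29, 45, 33, 29]

N = len(ages)             # N: total number in sample
M = mean(ages)            # M: mean
Mdn = median(ages)        # Mdn: median
SD = stdev(ages)          # SD: standard deviation
f = ages.count(29)        # f: frequency of the value 29
pct = f / N * 100         # %: percentage of subjects aged 29
z = (ages[0] - M) / SD    # z: standard score for the first subject
```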
Symbol/abbreviation: Definition
f: Frequency
M: Mean
Mdn: Median
n: Number in subsample
N: Total number in sample
%: Percentage
SD: Standard deviation
z: A standard score
