
WHAT ARE EXAMPLES OF VARIABLES IN RESEARCH?

OCTOBER 22, 2012 | REGONIEL, PATRICK A.

In the course of writing your thesis, one of the first terms that you encounter
is the word variable. Failure to understand the meaning and the usefulness
of variables in your study will prevent you from doing good research. What
then are variables and how do you use variables in your study? I explain the
concept below with examples of variables commonly used in research.
You may find it difficult to understand just what variables are in the context
of research, especially in studies that deal with quantitative data analysis.
This initial difficulty becomes even more confusing when you encounter the
phrases dependent variable and independent variable as you go deeper into
studying this important concept of research and statistics.
Understanding what variables mean is crucial in writing your thesis
proposal because you will need them in constructing your conceptual
framework and in analyzing the data that you have gathered. It is therefore
a must that you thoroughly grasp the meaning of variables and how to
measure them. Yes, the variables should be measurable so that you can use
your data for statistical analysis.
I will strengthen your understanding by providing examples of phenomena
and their corresponding variables below.

Definition of Variables and Examples


Variables are simplified portions of the complex phenomena that you
intend to study. The word variable is derived from the root word vary,
meaning to change in amount, volume, number, form, nature, or type. These
variables should be measurable, i.e., they can be counted or subjected to a
scale.
The following examples present phenomena from a global down to a local
perspective. A corresponding list of variables is given for each to illustrate
how complex phenomena can be broken down into manageable pieces for
better understanding and subjected to research.

Phenomenon: climate change

Examples of variables related to climate change:


1. sea level
2. temperature
3. the amount of carbon emission
4. the amount of rainfall

Phenomenon: Crime and violence in the streets


Examples of variables related to crime and violence:
1. number of robberies
2. number of attempted murders
3. number of prisoners
4. number of crime victims
5. number of law enforcers
6. number of convictions
7. number of carnapping incidents

Phenomenon: poor performance of students in college entrance exams


Examples of variables related to poor academic performance:
1. entrance exam score
2. number of hours devoted to studying
3. student-teacher ratio
4. number of students in the class
5. educational attainment of teachers
6. teaching style
7. distance of school from home
8. number of hours devoted by parents to tutorial support

Phenomenon: fish kill

Examples of variables related to fish kill:
1. dissolved oxygen
2. water salinity
3. temperature
4. age of fish
5. presence or absence of parasites
6. presence or absence of heavy metals
7. stocking density

Phenomenon: Poor crop growth

Examples of variables related to poor crop growth:


1. the amount of nitrogen in the soil
2. the amount of phosphorus in the soil
3. the amount of potassium in the soil
4. the amount of rainfall
5. frequency of weeding
6. type of soil
7. temperature

Student Learning Outcomes


At the completion of this unit of instruction students will be able to:
1. Distinguish between examples of the four basic types of measurement validity
(Logical, content, criterion, and construct.)
2. Distinguish between reliability and validity (Accuracy of the measurement
versus the repeatability.)
3. Identify examples of data that are typically found on nominal, ordinal, interval,
and ratio scales and which types of data are used in parametric and nonparametric statistics (names, rank orders, etc.)
4. Identify examples of data that are typically found on Likert, Semantic
Differential, Thurstone, and Rating scales (Likert = agreement, Semantic
Differential = bipolar adjectives, etc.)
5. Understand two ways in which it is possible to objectively assess the validity of
measurements of knowledge (Item difficulty and item discrimination.)
Measurement is a research tool and also a research area.
Remember that scientific problem solving involves four steps (1. Developing the
problem; 2. Formulating the hypotheses; 3. Gathering the data; 4. Analyzing and
interpreting results.)
Step 3 necessitates an understanding of measurement.

Remember we discussed concerns related to internal and external validity. We have
similar concerns about the validity of our measurements. Specifically, "Does the test
or instrument measure what it is supposed to measure?"
Q: What is reliability?
A: The consistency or repeatability of a measure
For example, if I use the measurement twice (e.g. take a test twice) would my scores
be the same?
Returning to the different types of validity distinguished in the text...
Four basic types of measurement validity
1. Logical validity
2. Content validity
3. Criterion validity (concurrent and predictive)
4. Construct validity
Logical Validity is also referred to as face validity. Does the measure obviously
measure the intended performance? A pull-up test obviously measures pull-ups, but is it
a valid measure of strength? Is the frequently used maximum bench press a valid
measurement of strength? Does my tennis skills test measure tennis skill? Often
measurements are difficult to justify on the basis of logical validity.
Content Validity is of great interest to you as students in this class. You are obviously
concerned that the assignments given to you in this class, and the midterm, and final
examinations cover the content of the class and are a valid representation of your
learning.
Criterion Validity involves measurements that can be validated against some criterion.
Concurrent validity exists when a test that can easily be administered is validated by a
high correlation with another (often difficult to administer) test that is known to be
valid. For example, in running, the shift in the body's principal fuel toward
carbohydrates is associated with the anaerobic threshold. This particular change could
probably be measured most accurately with blood samples. However, taking blood from
athletes is not very convenient, so instead we measure the heart rate. We know as the
result of measurement research that HR is a valid way of predicting the change in
energy sources.
The same would be true with a tennis test. The most valid way of assessing playing
ability would perhaps be to have skilled observers watch players and give ratings. A
tennis test that any coach could administer would be much more convenient.
However, in creating this test we would be wise to be sure that the results compared
similarly with the observers' ratings. Having found that concurrent validity exists we
could claim that our test was valid.
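A sketch of how such concurrent validation might be checked numerically: we correlate scores on the convenient tennis test with the expert observers' ratings. The data below are invented for illustration, and the Pearson correlation is computed from scratch; a high r supports concurrent validity.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: scores on an easy-to-administer tennis skills test and
# ratings of the same eight players by skilled observers (the criterion).
test_scores = [12, 15, 9, 20, 17, 11, 18, 14]
expert_ratings = [6, 7, 4, 9, 8, 5, 9, 6]

r = pearson_r(test_scores, expert_ratings)
print(round(r, 2))  # a value near 1 supports concurrent validity
```

With these made-up numbers r comes out close to 1, which is the pattern we would hope to see before claiming the convenient test is valid.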
Predictive validity refers to the validity of a measurement to be used for the prediction
of future performance. The GRE, for example, is used as a measure for predicting
future college success. Maybe health educators would like to predict the likelihood of
future drug dependency of elementary-aged children. To find a valid measure we would
need to select several criteria and then, through the use of correlational statistics,
examine the relationship. The major question (that statistics help answer) is whether
our measure has much predictive value.
If we are successful in finding measures that are proved to be valid in terms of
prediction, we would like to think that our measure could be used elsewhere. What
happens, however, is that the validity tends to decrease when the measure is used with
a different sample. This phenomenon is known as shrinkage.
Construct validity is a concern when we attempt to measure something that is not
observable but which we attempt to infer. We do this all the time with concepts such
as intelligence, anxiety, arousal, learning, attitude, etc. To validate tests of these
variables we again need some type of comparison and usually in relation to an
observable behavior. For example, past assessments of a person's teaching
effectiveness have often been made by observers simply watching a teaching episode,
taking notes, and writing up a critique. Such assessments may or may not be valid.
They tend to be very subjective and often two observers will focus on different
aspects thereby producing sometimes contradictory evidence. More recently, lots of
assessment tools have been developed that direct observers to record specific,
observable behaviors, e.g. time spent organizing, time spent managing, # of feedback
statements, # of student names used. Based on these types of observations we attempt
to infer "teaching effectiveness" even though teaching effectiveness is not any clearly
observable behavior.
Measurement Reliability
Be sure you can distinguish between validity and reliability. Validity is whether or not
a measurement is really measuring the item of interest. In contrast, reliability focuses
on the consistency of the measurement. If a measurement is reliable you should get
the same results if you repeat it.
With any measurement the score you get is the observed score. This score is a
combination of the true score and error score. As researchers we would of course like
to eliminate or at least minimize the error score.
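A quick simulated sketch of this decomposition (all numbers invented): each observed score is the true score plus a random error, and across many repeated measurements the errors tend to cancel, leaving the average observed score near the true score.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

true_score = 75  # the subject's actual, unobservable ability
# Each administration adds random measurement error (from subjects,
# testing, scoring, and instrumentation) to the true score.
observed = [true_score + random.gauss(0, 3) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
print(round(mean_observed, 1))  # close to the true score of 75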
Four sources of measurement error include:
1. Subjects - variations in their mood, physical condition, mental state, motivation
2. Testing - poor directions, different expressions of interest or attempts to motivate
3. Scoring - use of inexperienced scorers, errors in recording
4. Instrumentation - inaccuracies, poor tests, calibration
We can establish the reliability of a measurement by various statistical techniques all
of which attempt to see the extent to which similar results can be obtained twice. One
obvious method is the same day test-retest in which the same subjects are tested twice
on the same day and the results compared. This method works best when the quality
being measured is unlikely to be influenced by exposure to the test. In other words,
there is a question whether any learning might occur that would influence a person's
response the second time around.
We need to remember that all tests (measurements) are likely to include some degree
of error and when appropriate calculate data expressing the error. We also need to be
careful to minimize this error to the greatest extent possible.
Four Types of Measurement Scales
When researchers construct tests, they have to first decide on the appropriate scale of
measurement. In the text four scales are presented.
1. Nominal - classification by name (males/females, teenagers/adults are examples of
categories). Can also be formed on the basis of some measurement criterion (high/low
achievers, high/low skilled, although these are to some extent also of an ordinal
nature). The purpose of the scale is just for identification.
2. Ordinal - provides a rank order, e.g. percentile or a numbered list of students in
order of achievement on some measure. However, knowing the ranking of a score
doesn't provide information of differences between scores. In other words, knowing

that Sally was first in a quiz and Bill was second gives the rank order but not any
measurement of the difference between Sally and Bill's scores.
3. Interval - this measure provides both an order and the size of the difference
between scores, but has no true zero. For example, the difference between quiz scores
of 85 and 90 is the same as the difference between 90 and 95.
4. Ratio - these scores have the qualities of the other scales but also have a true
zero, which makes statements such as "2 pull-ups is twice as many as 1" meaningful.
Force, time, and distance all have true zero points.
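To make the ordinal/interval distinction concrete, here is a small sketch echoing the Sally and Bill quiz example (the scores and the third student are invented). Ranking the scores keeps the order but throws away the size of the gaps, which only the interval-level data retains.

```python
# Invented quiz scores (interval-level data: order AND size of differences)
scores = {"Sally": 98, "Bill": 95, "Ana": 60}

# Ordinal view: rank order only. The 35-point gap between Bill and Ana
# collapses to "one place apart", the same as the 3-point gap above it.
ranked = sorted(scores, key=scores.get, reverse=True)
ranks = {name: i + 1 for i, name in enumerate(ranked)}
print(ranks)

print(scores["Sally"] - scores["Bill"])  # recoverable only from interval data
```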
Standard Scores
Often we want to compare scores on one measure to scores on a different measure.
This is impossible to do unless we can convert the two scores to a similar scale. In
other words you can't compare apples and oranges unless you convert the apple to the
orange - that sounds more confusing than I anticipated!
Anyway, there are ways to convert scores and the z score and T scale represent
commonly used standard scores.
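As a sketch of how such conversions work (the formulas are the standard ones; the scores are made up): a z score expresses a raw score in standard-deviation units from the group mean, and the T scale rescales z to a mean of 50 and an SD of 10 so the "apple" and the "orange" land on the same scale.

```python
import math

def z_score(x, scores):
    """How many standard deviations x lies from the group mean."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    return (x - mean) / sd

def t_score(x, scores):
    """T scale: z rescaled to a mean of 50 and an SD of 10."""
    return 50 + 10 * z_score(x, scores)

# Invented scores on two otherwise incomparable tests:
pullups = [2, 4, 6, 8, 10]
situps = [20, 30, 40, 50, 60]

# 8 pull-ups and 50 sit-ups occupy the same relative standing in their
# groups, so their standard scores come out identical.
print(t_score(8, pullups), t_score(50, situps))
```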
Measuring Movement
As noted in the text, in PE especially we are often concerned with measuring
movement. The main point to appreciate is that it is important that we should always
be concerned about the validity and reliability of such measures.
Measuring Written Responses
Another common area of PEHL measurement. We always seem to be interested in
items such as attitudes, self-concept, anxiety, stress, motivation, communication and
so on.
The biggest problem faced in this area is defining the behavior we wish to measure
and then producing a valid and reliable measure. The main difficulty is that the quality
described by these words (attitude, self-concept, etc.) is rather intangible. In fact it
only exists to the extent that we define it. If we change our definition we have in
effect changed the quality itself!
It is for these reasons that I strongly advise graduate students to use preexisting
measures that have been proven to be somewhat valid and reliable whenever possible,
rather than attempt to design their own measurements.

Lots of energy has been devoted to research on attitudes and personality. Do athletes
have special personality characteristics in comparison to non-athletes? Does athletics
develop these characteristics or are certain personalities attracted to athletics? Are
there differences between sports? Do sports develop character or characters? You can
apply these and lots more examples to your own area of specialization.
Four commonly used measurement scales include the:
Likert Scale
Semantic Differential
Thurstone-type Scale
Rating Scales
The Likert Scale typically involves a 5-7 point scale on which subjects respond
according to levels of agreement. For example:
"I have enjoyed and learned a lot from participating in my graduate research methods
class."
Strongly Agree / Agree / Undecided / Disagree / Strongly Disagree
This scale gives a wider range of expression than a simple yes/no.
The Semantic Differential Scale uses bipolar adjectives (e.g. beautiful-ugly,
skilled-unskilled, supportive-critical) at the ends of a 7-point scale. Subjects score 7
for the most positive and 1 for the least positive.
In the Thurstone Scale subjects express agreement or disagreement with a written
statement. For example:
"PEHL 557 should be a 4 credit class."
These are harder to construct because they involve the use of judges in weighting
each statement for use in scoring.
Rating Scales are frequently used in research (e.g. Borg's Rating of Perceived Exertion,
ALT-PE scales, and many more). As pointed out in the text, when "experts" are
involved in ratings various types of inconsistencies sometimes emerge.

Leniency = overgenerous
Central tendency = tendency to grade everyone as average
Halo effect = use of prior knowledge about a subject can influence judgment
Proximity errors = concern the location of the rating criteria on the rating
sheet
Observer bias = personal biases the judges may have
Observer expectation = based on a person's knowledge of the experimental
arrangements the rater may exhibit different expectations.
If possible it is better to create evaluation devices that reduce the need for subjective,
value judgments and increase objective measurements. This trend has occurred in
evaluating teaching effectiveness. For example, we now will count specific behaviors
exhibited by teachers rather than try to judge whether the behavior is good or bad. If I
told you that you said "um" forty times during your 5-minute presentation you would
probably conclude a need to improve communication without me having to say that I
think your communication skills rate a 3 on a 5-point rating scale.
Measuring Knowledge
Whenever we take tests or give our students tests we would like to believe that the
questions we are posing are valid measurements of their knowledge. Sometimes
subjects don't have the opportunity to express their concerns to the test creator.
Fortunately, it is possible for test creators to objectively evaluate the validity of their
own measurements.
Item difficulty is a way of assessing the value of a question. If everyone answers a
question correctly, the thought arises as to whether there is any point including the
question as a measure. Think about this...maybe we actually do want everyone to
answer the question correctly...or maybe we want to differentiate between the level of
knowledge of our students.
Anyway, as explained in the text, we can calculate a difficulty index; many test
makers will eliminate questions with a difficulty index below .10 or above .90.
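A small sketch of the difficulty index as described here, computed as the proportion of test takers answering an item correctly (the response data are invented):

```python
def difficulty_index(answers):
    """Proportion of test takers answering the item correctly (0.0 to 1.0)."""
    return sum(answers) / len(answers)

# Invented item responses: 1 = correct, 0 = incorrect
item_a = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # everyone correct
item_b = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # six of ten correct

for item in (item_a, item_b):
    d = difficulty_index(item)
    keep = 0.10 <= d <= 0.90  # the rule of thumb quoted in the notes
    print(round(d, 2), keep)  # item_a (index 1.0) would be eliminated
```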

Item discrimination is a way of learning how well our tests discriminate between high
achievers and low achievers. Many test makers strive for discrimination indexes of .20
or higher for each question.
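The discrimination index can be sketched the same way (data again invented): take the subjects who scored highest and lowest on the whole test, and compare the proportion in each group who got the item right.

```python
def discrimination_index(high_group, low_group):
    """Difference in proportion correct between high and low achievers."""
    p_high = sum(high_group) / len(high_group)
    p_low = sum(low_group) / len(low_group)
    return p_high - p_low

# Invented responses to one item (1 = correct), split by overall test rank
high = [1, 1, 1, 0, 1]  # top scorers: 80% got it right
low = [0, 1, 0, 0, 1]   # bottom scorers: 40% got it right

print(discrimination_index(high, low))  # 0.4, above the .20 guideline
```

An index near zero (or negative) would suggest the item does not separate strong from weak students.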
Selecting the correct type from the different research methods can be a little
daunting, at first. There are so many factors to take into account and evaluate.
The research question, ethics, budget and time are all major considerations in any design.
This is before looking at the statistics required, and studying the preferred methods for the individual
scientific discipline.
Every experimental design must make compromises and generalizations, so the researcher must try
to minimize these, whilst remaining realistic.
For pure sciences, such as chemistry or astrophysics, experiments are quite easy to define and will,
usually, be strictly quantitative.
For biology, psychology and social sciences, there can be a huge variety of methods to choose from,
and a researcher will have to justify their choice. Whilst slightly arbitrary, the best way to look at the
various methods is in terms of strength.

Experimental Research Methods


The first method is the straightforward experiment, involving the standard practice of manipulating
quantitative, independent variables to generate statistically analyzable data.
Generally, the system of scientific measurements is interval or ratio based. When we talk about
scientific research methods, this is what most people immediately think of, because it passes all of
the definitions of true science. The researcher is accepting or refuting the null hypothesis.
The results generated are analyzable and are used to test hypotheses, with statistics giving a clear
and unambiguous picture.
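As a sketch of the kind of statistic such an experiment produces, here is Welch's t for two independent groups (one common choice, not the only one; all data are invented). The null hypothesis of equal group means is refuted when the statistic exceeds a critical value.

```python
import math

def welch_t(a, b):
    """Welch's t statistic comparing the means of two independent groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 in the denominator)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Invented outcome scores for a treatment and a control group
treatment = [14, 16, 15, 18, 17, 15]
control = [11, 12, 13, 12, 11, 13]

t = welch_t(treatment, control)
# Compared against a two-tailed critical value of roughly 2.3 at
# alpha = .05 for these small samples; a larger t refutes the null.
print(round(t, 1))
```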
This research method is one of the most difficult, requiring rigorous design and a great deal of
expense, especially for larger experiments. The other problem, where real life organisms are used, is
that taking something out of its natural environment can seriously affect its behavior.
It is often argued that, in some fields of research, experimental research is too accurate. It is also
the biggest drain on time and resources, and is often impossible to perform for some fields, because
of ethical considerations.
The Tuskegee Syphilis Study was a prime example of experimental research that was fixated on
results, and failed to take into account moral considerations.

In other fields of study, which do not always have the luxury of definable and
quantifiable variables, you need to use different research methods. These should still
attempt to meet the standards of repeatability and falsifiability, although this is not
always feasible.

Opinion Based Research Methods


Opinion based research methods generally involve designing an experiment and collecting
quantitative data. For this type of research, the measurements are usually arbitrary, following the
ordinal or interval type.
Questionnaires are an effective way of quantifying data from a sample group, and testing emotions
or preferences. This method is very cheap and easy, where budget is a problem, and gives an
element of scale to opinion and emotion. These figures are arbitrary, but at least give a directional
method of measuring intensity.
Quantifying behavior is another way of performing this research, with researchers often
applying a numerical scale to the type, or intensity, of behavior. The Bandura Bobo
Doll experiment and the Asch Experiment were examples of opinion based research.
By definition, this experiment method must be used where emotions or behaviors are measured, as
there is no other way of defining the variables.
Whilst not as robust as experimental research, the methods can be replicated and the
results falsified.

Observational Research Methods


Observational research is a group of different research methods where researchers try to observe a
phenomenon without interfering too much.
Observational research methods, such as the case study, are probably the furthest
removed from the established scientific method. This type is looked down upon by many
scientists as quasi-experimental research, although this is usually an unfair
criticism. Observational research tends to use nominal or ordinal scales of
measurement.
Observational research often has no clearly defined research problem, and questions may
arise during the course of the study. For example, a researcher may notice unusual
behavior and ask, "What is happening?" or "Why?"
Observation is heavily used in social sciences, behavioral studies and anthropology, as a way of
studying a group without affecting their behavior. Whilst the experiment cannot be replicated
or falsified, it still offers unique insights, and will advance human knowledge.

Case studies are often used as a precursor to more rigorous methods, and avoid the problem of the
experiment environment affecting the behavior of an organism. Observational research methods are
useful when ethics are a problem.

Conclusion
In an ideal world, experimental research methods would be used for every type of research, fulfilling
all of the requirements of falsifiability and generalization.
However, ethics, time and budget are major factors, so any experimental design must make
compromises. As long as a researcher recognizes and evaluates flaws in the design when choosing
from different research methods, any of the scientific research methods are valid contributors to
scientific knowledge.

July 18

Data Collection Methods

Research can be defined as systematic investigation that contributes to knowledge, and
in this process data collection plays a significant role. Data means information that
helps the researcher achieve the research objective. The quality of research largely
depends on the collected information: more reliable data leads to more trustworthy
research, and this in turn depends on the data collection method the researcher
selects. Data also helps the researcher in decision making. We hope that this post
will be a good dissertation help for our students in assisting them with the
collection of data. There are two types of data, and their collection methods are as
follows:

Primary data: Data collected by the researcher himself or herself is said to be
primary data. This data collection method is more authoritative because the data is
not collected by a third party. It provides raw information that can be tailored to
the researcher's needs. Because the data is collected by the researcher, it is more
expensive and time consuming than secondary methods (Boba, 2005). Primary data
collection includes:

Observation method: This method is concerned with behavior. The researcher
systematically observes and records behavior, and can gather detailed information this
way, but it is also a time consuming method. The researcher does not change the
behavior but records it as it occurs. This method is not as flexible as the survey
method.
Questionnaire: This method is used in surveys. A set of research-related questions is
distributed by mail or over the internet, which allows the researcher to collect data
from a wide geographical area. This method is cost effective and easy to manage, but
it is also time consuming. Questionnaires can be open-ended or close-ended: in an
open-ended questionnaire, alternative responses are not provided, while in a
close-ended questionnaire, alternative responses are given to respondents. This method
requires literate respondents, which creates a barrier to its use.

Interview: In this data collection method, the researcher collects data by
communicating with respondents, either through a personal meeting or via telephone.
The interview format depends on the quality and quantity of information the researcher
requires. The researcher must be clear about the purpose of the research before
designing the interview questions, and each question should relate to the research
problem. Through this method, the researcher is able to observe nonverbal behavior and
gets immediate feedback. On the other hand, only a small number of respondents can be
reached, and it is a time consuming method.
Secondary data: Data not collected or gathered by the researcher himself or herself is
termed secondary data. This type of data has previously been collected by someone else
for some other purpose (Hodges and Videto, 2005). There are two benefits of this data
collection method: it is less expensive and less time consuming. Data can be obtained
easily and quickly, but it is not authoritative. On the other hand, the data may not
fit the researcher's needs, as it was collected by a third party for their own
purposes. Books and periodicals, government sources, regional publications, commercial
sources, media sources, and selected internet sites that provide financial data are
some examples of secondary data sources (Zikmund, 2009).
