
What do psychologists have to think about when designing studies and interpreting results? In this lesson, you'll explore how the scientific method can help with the difficult task of studying behaviors and their potential causes.
We are all capable of speculating about other people's behavior. We do it every day. That
man who took my parking space is clearly arrogant and inconsiderate; that little girl
screaming in the grocery store clearly has bad parents. But as normal as it is for us to make
these kinds of assumptions and explanations, there is nothing reliable or scientific about any
of it. Our spotty observations of little girls in grocery stores can't lead to any larger
conclusion about the relationship between parenting and behavior.

Psychology attempts to apply the scientific method to the study of human thought and behavior in order to reach conclusions that do have real explanatory or predictive power. Psychologists come up with a hypothesis about people--that overly indulgent parenting leads to temper tantrums, for example--and then try to design a study that will show it to be right or wrong. There are many different kinds of studies, but all face the common problem of separating the behavior you want to study from all the other behaviors that might happen along with it, and the potential cause from all the other things that could cause it.

Let's say you want to design a study to see whether parents who give in to their child's every wish are likely to have children who throw tantrums. You might ask parents to fill out a questionnaire to determine their parenting style; you then ask those who were especially indulgent and permissive to bring their child in to participate in a test of their ability to solve math problems. You tell the children they'll get a candy bar as a reward for doing well. You don't really care about the children's math ability--that's just the pretense for getting them into your study. What you really care about is their behavior during the test. When they've finished, you look in your desk and then, in mock surprise, say, 'Well, it looks like we're out of candy! Let me go ask a colleague if he has any.' You leave the room for about five minutes, come back still empty-handed, and apologize for not having any candy.

Some of the children seem disappointed but okay; many of them, though, start to cry and demand candy. Once you've taken note of the reactions, you have a friend run in breathlessly with 'the last candy bar,' and the children leave your study munching happily.
You take a look at the numbers: 80% of the children exhibited some kind of 'tantrum' behavior when they found out they really weren't getting candy. You conclude that, just as your hypothesis stated, indulgent parents are associated with bratty children.

But if you stopped here, you would have ignored one of the most important ideas in scientific testing: the scientific control. Your study design does seem to distinguish between different potential behavioral reactions to a common situation: some kids cry, and others don't. But your study doesn't distinguish between potential causes; all of these children have indulgent parents, but you can't be sure that it's indulgent parenting and not some other factor that causes 80% of them to cry. To help determine this, you'd have to bring in an equal number of children selected randomly and submit them to the same test; this is your control group.

You find that about 80% of these kids cry as well when they're denied candy. This means that your original conclusion, that indulgent parenting is associated with bad behavior, is actually not supported by the results; all you can really say is that being denied promised candy makes a significant portion of all children cry. There are two possible explanations for this: one is that parenting style has nothing to do with behavior. The other, perhaps more likely, explanation is simply that your study upsets children too much for you to learn anything valuable from it.
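
To see concretely why the control group matters, here is a minimal sketch in Python of how the two groups' tantrum rates might be formally compared with a two-proportion z-test. The counts are invented for illustration (40 criers out of 50 children in each group):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(criers_1, n_1, criers_2, n_2):
    """Two-sided z-test for the difference between two proportions."""
    p_1, p_2 = criers_1 / n_1, criers_2 / n_2
    pooled = (criers_1 + criers_2) / (n_1 + n_2)            # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))  # standard error
    z = (p_1 - p_2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    return z, p_value

# Hypothetical counts: 40 of 50 indulgently parented children cried,
# and 40 of 50 randomly selected control children cried.
z, p = two_proportion_z_test(40, 50, 40, 50)
print(z, p)  # z = 0.0, p = 1.0: no evidence that the groups differ
```

With identical rates in both groups, the test gives no reason to attribute the crying to indulgent parenting rather than to the situation itself.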
Think of some tests you've taken--some are really easy, like a math test in which the only question is 1+1, and everyone who takes it gets 100%. Getting 100% on that test doesn't have very much to do with your preparation or your intelligence--it has more to do with the question being really easy. But if you're given a harder math question, like finding the integral of 2x, your score means more; if only a few people are getting 100%, this probably means that they have prepared better than everyone else. Promising kids candy and then refusing to give it to them is like the really easy test--most of them cry, but that's probably just because it's a really upsetting situation. If you designed a milder scenario that didn't cause as many tantrums, it might be able to tell you more about the differences between children who still did have tantrums and those who didn't.
This parenting style/children's behavior/candy bar test is made-up, but it should give you
some idea of the questions that psychologists have to ask themselves when designing
studies and interpreting the results. The principles of the scientific method are very
important for psychologists, who come up with hypotheses about behavior and then try to
prove them right or wrong. Scientific control is crucial to getting results that sufficiently pull
apart different potential behaviors and causes.

What are the three main research designs, and what are their advantages and
disadvantages? In this lesson, you'll explore the different goals behind descriptive,
correlational and experimental research designs.
Psychological studies begin as questions. 'How does a person with severe brain damage
behave?' 'Do smart parents have smart children?' 'How does reminding someone of their
race or gender change their performance on a test?'

Psychologists turn these questions into hypotheses: 'Do smart parents have smart children?' is changed to 'Parents who have high scores on intelligence tests have children with similarly high scores.' Then they design a study to test the hypothesis in an efficient way that reduces potential confounds, or factors that could explain the results but aren't directly measured or addressed by the study. Depending on the question, and on the hypothesis, psychologists will choose one of three main types of research designs.

A first type of research design is called descriptive. Descriptive studies aim only to gather data to present a complete picture of a given subject. Psychologists might use a survey to assess the state of mental health on college campuses; the results wouldn't tell them anything about the causes of mental illness in college students, but they would give a complete picture of the problem. To answer one of the questions we began with, 'How does a person with severe brain damage behave?' psychologists might use a case study, or a close examination of one person with a particular problem. Phineas Gage, a railroad construction foreman in the nineteenth century, is a classic example of such a case study: in an accident at a railroad construction site, he had a large metal rod driven through his head. He not only survived but was fully functioning and lived for another twelve years. But several people close to him remarked that they noticed his personality had changed, that he'd become irritable and unable to hold a job. Though Gage's case alone could not prove anything definitive about the relationship between his particular brain injury and emotion regulation, it did help psychologists make better hypotheses about that relationship for future studies. Descriptive studies often form the basis for later correlational or experimental research.

Correlational studies try to figure out the relationship between two or more variables, which could be anything you can measure, like behavior, age, or gender. To answer the question, 'Do smart parents have smart children?' two variables psychologists might measure are parents' IQ and children's IQ. Psychologists would then administer IQ tests to a very large number of families and determine statistically how related the parents' scores were to the children's scores--this 'relatedness' is known as the correlation. A correlation is represented by a number called the Pearson correlation coefficient that falls between -1 and 1. If two variables have a correlation of 0, there is no relation between them--for example, something like your birthday and the color of your hair would likely have a correlation very close to 0 because these two things have nothing to do with each other. A correlation between 0 and 1 is called positive, and it means that as the first variable increases, so does the second one. This turns out to be true in the case of parent and child IQ; one study reports a moderately positive correlation of .35. A correlation between -1 and 0 is called negative, and it means that as the first variable increases, the second decreases. Age and memory function are likely to be negatively correlated; as age increases, the ability to remember things clearly tends to decrease.
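
As a rough sketch of how this 'relatedness' is computed, here is the Pearson correlation coefficient calculated directly from its definition in Python, using invented IQ scores for ten parent-child pairs (a real study would use a far larger sample):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    dx = [xi - mean_x for xi in x]
    dy = [yi - mean_y for yi in y]
    # Sum of co-deviations, normalized by the spread of each variable.
    return sum(a * b for a, b in zip(dx, dy)) / sqrt(
        sum(a * a for a in dx) * sum(b * b for b in dy))

# Hypothetical IQ scores for ten parent-child pairs.
parent_iq = [95, 100, 103, 110, 115, 98, 120, 107, 90, 112]
child_iq = [100, 98, 108, 105, 118, 94, 112, 110, 96, 105]

print(round(pearson_r(parent_iq, child_iq), 2))  # always falls between -1 and 1
```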


An important shortcoming of correlational research is the problem of determining causation. As tempting as it might be to assume so, correlation is not causation. Though there is a moderate positive correlation between parents' IQ and children's IQ, this does not mean that a high parental IQ causes a high child's IQ. Though in this particular case it seems unlikely that the parents' high IQs are caused by the child's, this is a possibility that cannot be ruled out by correlational research. It is also possible, even likely, that both high IQs are caused by a third external factor, like high income and socioeconomic status.
To determine causation, psychologists must conduct experimental research. This also
involves variables, but in this case they are distinguished as independent and dependent.
Psychologists change the independent variable and look at what happens to the dependent
variable. Some psychologists hypothesized that women and racial minorities might experience something called stereotype threat when they take tests. Because these groups are stereotyped as not being as good at math as white men, the anxiety of worrying that their performance will confirm this negative stereotype can actually make them do worse. Psychologists have tested this in many situations, using awareness of racial or gender identity as the independent variable and test performance as the dependent variable. One group of researchers administered a difficult section of a standardized test to two groups of African-American and European-American students. The first group was led to believe that
the test measured intelligence; the second was not. In the first group, there was a wide
performance gap between African-American and European-American students; in the
second group, this gap was greatly reduced. The researchers concluded that worrying about
confirming negative stereotypes about intelligence had actually made the African-American
students perform worse. Similar results have been found for women in chess competitions
and entrepreneurship.
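
To make the independent/dependent distinction concrete, here is a minimal sketch of how scores from such an experiment might be compared. The numbers are invented, and the analysis is simplified to a two-sample t-test between the two framings (the actual studies analyzed the interaction between framing and group), assuming SciPy is available:

```python
from scipy import stats  # assumes SciPy is installed

# The framing is the independent variable; scores are the dependent variable.
# Hypothetical scores for students told the test measures intelligence...
told_intelligence = [12, 9, 11, 8, 10, 7, 9, 11]
# ...and for students given no such framing.
told_nothing = [14, 13, 11, 15, 12, 13, 14, 12]

t_stat, p_value = stats.ttest_ind(told_intelligence, told_nothing)
print(t_stat, p_value)  # a small p-value suggests the framing changed performance
```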


Experimental research is powerful but limited in an important way: psychologists simply can't manipulate all the variables they're interested in. Sometimes it's just impossible. If psychologists are interested in the differences between female and male leadership, they can't just instate a female leader in a company formerly led by a man; the company board might have something to say about that. Sometimes experimental research is possible but highly unethical; to experimentally determine what kinds of trauma were most likely to cause post-traumatic stress disorder, for example, psychologists would have to submit large numbers of people to different kinds of trauma and potentially debilitate them with PTSD symptoms for the rest of their lives.
Let's quickly go over the three types of research designs once again. Descriptive studies seek only to document; a case study like that of Phineas Gage is an example of this.
Correlational studies try to establish a relationship between two variables, like parents' IQ
and children's IQ, though correlation does not equal causation. Finally, experimental
research looks to study causation, like with the example of stereotype threat and test
performance, but can't be used in every situation due to practical and ethical concerns.

How do validity and reliability contribute to study design in psychology? In this lesson,
you'll look at how experiments can fail reliability and validity requirements to get an idea
of the challenges behind conducting significant psychological research.
Designing a psychological study isn't really that hard. If you've ever written a survey or taken
a poll among your friends, you've conducted some crude psychological research. But
designing a study that produces valuable and scientific results is really challenging. If you
gave your friends a survey about their political leanings, they might be influenced by the
way you phrased the questions or by knowing your own political opinions; your survey might
not accurately measure what you think it does. Two key concepts for designing scientific
psychological studies are reliability and validity. We'll look at some examples of both to
better understand the importance of careful research design.

Do you have a friend or family member who will always help you out if you ask? You'd probably describe this friend as reliable. Reliability in psychological research isn't really that different - it means that your tools for measuring a given variable measure it accurately and consistently. If you use a rigid ruler to measure the length of your foot, you should always get the same length; this is a measurement that has test-retest reliability. But if instead you measure your foot by holding your fingers about an inch apart and then moving them down the length of your foot and counting as you go, you'd probably come up with different measurements each time. Your fingers are not a particularly reliable measurement tool for the length of your foot.
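
One common way to quantify test-retest reliability is to administer the same measure twice and correlate the two sets of scores; a correlation near 1 suggests a consistent tool. Here is a minimal sketch with invented scores, using statistics.correlation (available in Python 3.10+):

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical scores from the same ten people measured on two occasions.
session_1 = [23, 31, 28, 35, 22, 30, 27, 33, 25, 29]
session_2 = [24, 30, 29, 34, 21, 31, 26, 35, 24, 28]

reliability = correlation(session_1, session_2)  # Pearson's r
print(round(reliability, 2))  # close to 1.0 -> accurate and consistent
```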

Also important to designing and interpreting psychological research is the idea of validity. In general, validity refers to the legitimacy of the research and its conclusions: has the researcher actually produced results that support or refute the hypothesis? It can be easier to understand validity by looking at some of the ways that research can fail to be valid.
A study fails construct validity if what it chooses to measure doesn't actually correspond to
the question it's asking. Let's say you were doing research that required you to know how
intelligent your subjects were. To measure intelligence, you decide to administer a really
difficult physics exam. If you did this, your experiment would lack construct validity because
a score on a physics exam doesn't really measure intelligence; it just measures whether
you've taken physics or not.


Internal validity has to do with confirming that a causal relationship you've found between
your variables is actually real. Even if you think you've found a definite relationship between
changing one variable and observing change in another, you could be inadvertently
changing something else that is actually causing the effect. As an example, let's say you
wanted to test whether certain colors of fonts help people remember information better than
others. You give your subjects two texts, one in green and one in red. The red text is about
celebrity gossip; the green text is about chemistry. You find that your subjects remember the
red text much better and conclude that red font helps memory. But by having two different
texts, one much more easily memorable than the other, you introduced a confound into
your experiment. You don't know whether your effect is caused by the red font or by the
more interesting content.

A third type of research validity is external validity, which has to do with your conclusions applying to more people than just the ones you tested. Though psychologists might like to test everyone, doing so would be absurdly expensive and time-consuming. So instead, psychologists take a sample of the population they want to study. This sample group is usually selected at random, based on who volunteers to participate. Psychologists try to get bigger groups to control for random variations. Usually the results found in the sample are assumed to generalize, unless there are compelling reasons to question it: for example, if a study on attitudes toward aging had only college students for subjects, it might fail external validity.

In general, reliability and validity are principles related to making sure that your
study is actually testing what you think it is. Reliability makes sure that your test
measures its variables accurately. Validity ensures that your measures and variables are
telling you what you think they should; that your questions assess the right variables, that
your experimental results have no confounds and that your results generalize like you think
they will.

What are the two main types of statistics used by psychologists? In this lesson, you'll
start to see what psychologists need to do to analyze their data and test the significance
of their results.
Once psychologists have carefully chosen a study design appropriate for their subjects,
thought carefully about their variables and measurements, selected a sample group and run
their tests, they're typically faced with a mountain of data. It could be anything from survey
results to maps of brain activity. In order to make the experimental process worthwhile,
psychologists must now find ways to interpret and draw conclusions from their data. They
ultimately want to test whether the data supports or rejects their hypothesis.
In order to do this, psychologists use statistical analysis. They make use of two main types
of statistics: descriptive and inferential. Descriptive statistics help psychologists get a
better understanding of the general trends in their data, while inferential statistics help them
draw conclusions about how their variables relate to one another.

Descriptive statistics are basically what they sound like: they describe and summarize a set of data. Descriptive statistics could be things like the average age of participants or how many were men and women. Your GPA is a descriptive statistic; it summarizes how you've done in school. These kinds of statistics generally make use of averages, also known as the central tendency of the data, to summarize the data set.
There are three kinds of averages that you may have learned about in math class: mean,
median, and mode. The mean is what's most commonly associated with average; it's when
you add up a set of numbers and then divide by how many are in the set. Let's say you did a
survey of how many donuts per week your neighbors eat. Only five of your neighbors
respond, giving you a data set that looks like this: {1, 2, 2, 2, 13}. The mean number of
donuts your neighbors eat is (1+2+2+2+13)/5, or four. But since one of your neighbors is an
outlier and eats way more donuts per week than the others combined, the median or mode
might be a better measure of central tendency for this data set. The median of a set is just
the number that divides the set in half if you've ordered it from least to greatest - so in this
case, two, or the number in the middle. The mode is the most frequently repeated number in
the set - in this case, also two. You can remember mode by just replacing the last two
letters--mode is 'most.' Though the mean is often a great tool for measuring central
tendency, in this case two donuts per week is much more realistic than four.
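
Here is the donut survey worked through with Python's built-in statistics module, using the numbers from the example above:

```python
import statistics

donuts = [1, 2, 2, 2, 13]  # weekly donut counts from five neighbors

print(statistics.mean(donuts))    # 4 -> pulled up by the outlier, 13
print(statistics.median(donuts))  # 2 -> middle value of the sorted set
print(statistics.mode(donuts))    # 2 -> most frequently repeated value
```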

Descriptive statistics also address the dispersion of a set, or how widely its elements vary. The standard deviation and variance are related measures that can give psychologists a sense of this for their data. Both tell psychologists how far from the mean an individual data point is likely to be. So while the data set of your donut survey has an outlier (13), the calculations for standard deviation and variance average across every response; because 13 appears only once while two appears three times, the single outlier pulls the measure of spread up less than it would if it were a typical response.
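
Continuing the donut example, here is a short sketch of both dispersion measures with the same statistics module (pvariance and pstdev treat the five responses as a complete population; variance and stdev are the sample versions):

```python
import statistics

donuts = [1, 2, 2, 2, 13]

variance = statistics.pvariance(donuts)  # mean squared deviation from the mean (4)
std_dev = statistics.pstdev(donuts)      # square root of the variance

print(variance)           # 20.4
print(round(std_dev, 2))  # 4.52
```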

Inferential statistics can be used to draw conclusions from the data that
descriptive statistics describe. Researchers can look at their data and determine
how likely it is that changes in one variable caused changes in another or that two variables
seem to be related to one another. These conclusions can help them determine whether the
data supports or rejects their hypothesis. Let's say you conducted a few other surveys of
your neighbors, attempting to relate donut consumption to weight. You get results back that
seem to confirm your hypothesis that higher donut consumption is associated with higher
weight; the neighbor who eats 13 donuts per week is the heaviest of the bunch. But before you condemn donuts, you need to show that your results have statistical significance. When psychologists look at data, they perform a variety of statistical tests to confirm that their correlations aren't just a result of chance. Psychologists have agreed that if a result has a less than five percent chance of occurring by chance alone, it can be called statistically significant. If results are significant, they can be used to support or reject hypotheses.
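
As a sketch of how such a check might look, scipy.stats.pearsonr returns both a correlation and a p-value for it. The survey numbers below are invented, and with only five data points the result may well fall short of significance:

```python
from scipy import stats  # assumes SciPy is installed

# Hypothetical survey results for the five neighbors.
donuts_per_week = [1, 2, 2, 2, 13]
weight_lbs = [150, 180, 165, 170, 250]

r, p_value = stats.pearsonr(donuts_per_week, weight_lbs)
print(r, p_value)

if p_value < 0.05:  # the agreed-upon five percent threshold
    print("statistically significant")
else:
    print("could plausibly be chance - don't condemn donuts yet")
```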

Statistical analysis is a complicated and important part of psychological research. We've introduced some of the major concepts and terms: descriptive statistics that summarize a data set and inferential statistics that help researchers draw conclusions from it. Descriptive statistics make use of central tendency, or averages, and measures of dispersion like the standard deviation. Inferential statistics help to determine the statistical significance of a data set. Only if results are significant can they be used in support of a hypothesis.

What are the ethical principles of psychological research? In this lesson, you'll take a
look at the careful considerations a psychologist must make with respect to her
participants when she designs a test.
Let's say a psychologist wanted to test whether people who are thirsty do more poorly on
math tests than people who are well-hydrated. She puts out an ad for participants which
says that she's conducting a study of math ability that will take an hour. But when her
participants turn up, she divides them into thirsty and non-thirsty groups. The non-thirsty
people are each given two glasses of water and made to wait in a room for an hour and
then take a twenty-minute test. This is a little longer than the psychologist said, but they're not too upset about it. The thirsty people, though, are forced to stay in a room without water for five hours before taking a twenty-minute test. They're justifiably upset; the psychologist
made them uncomfortably thirsty and kept them for far longer than she said. The
psychologist did not conduct her experiment with adequate ethical standards.
The importance of ethics in psychological research has grown as the field has evolved.
Some of the most famous studies in psychology could not be conducted today because they
would violate ethical standards. Philip Zimbardo designed his Stanford Prison
Experiment to look into the causes of conflict between guards and prisoners. Zimbardo
assigned some college students to play guards and others to play prisoners in a 'prison' set
up in the basement of the Stanford Psychology Building. The experiment quickly got out of
hand--the guards soon began abusing the prisoners for the sake of order. Zimbardo let
this go on until his girlfriend visited the 'prison' and was shocked at what she found.
Zimbardo's experiment allowed its participants to hurt each other both physically and
psychologically and would not be approved by today's review boards.

Ethical standards in psychological research are motivated by two main principles: minimized harm and informed consent. The psychologist studying thirst and test performance failed on both of these counts; she made her participants unnecessarily uncomfortable and didn't tell them how long they would really be in the experiment. The experiment would likely not be approved by her university's Institutional Review Board (IRB). The IRB is in charge of determining whether the harm done by an experiment is worth its potential value to science and whether researchers are taking all of the precautions they can to make the research experience pleasant and informative for participants.


Minimized harm and informed consent underlie the entire process of designing and
approving psychological research. When psychologists are designing experiments, they try
to think about the least harmful way to test the hypothesis they're interested in. Harm can be
physical or psychological; deception is considered a form of psychological harm that is
avoided if at all possible. If the psychologist is unable to design the experiment without any
risk of harm, she must give participants a consent form to sign that clearly explains all of the
risks involved in participating in the study. The psychologist conducting the thirst experiment
would have to clearly explain in her consent form that the participants were likely to get
uncomfortably thirsty.

Psychologists who feel they need to deceive their participants run into a unique challenge with regard to consent forms. Deception is quite common in psychological research because it allows researchers to design situations in which participants are more likely to act naturally. In another famous unethical experiment, Stanley Milgram told participants that they were helping him conduct an experiment about learning. He had an actor in another room play the 'learner,' and told the participants to administer electric shocks to the learner if he got a question wrong. Milgram's experiment was actually on obedience - how long would his participants continue to listen to him and shock the learner? But if he had told them his real goals, it would clearly have affected their behavior; they would have been far less likely to be obedient if it were put in their minds that this was what Milgram was testing.

There is a genuine need for deception in psychological research, but ethics now require that
it be minimized and that participants are fully informed of the deception in a debriefing session
once the experiment is over. After every experiment, whether or not deception is involved,
researchers will explain to their participants what they were trying to measure and allow the
participants to ask any questions.
A final consideration in psychological research is the use of animals in experiments. Some psychologists, particularly those who study biological aspects of psychology, feel that they need to conduct experiments on animals. They might want to test a new drug or do brain research that would be clearly unethical to perform on a human. The American Psychological Association allows research to be conducted on animals, though it requires that researchers are careful to - as with their human participants - minimize harm and make sure that the harm they do is worth it for its scientific benefit. Most animal experiments are now conducted on animals like rats, mice and birds - research on primates, like in Harry Harlow's famous experiment on love in neglected monkeys, is far more restricted.
To sum things up, for the sake of ethics, psychologists are expected to make every effort to
minimize harm and get informed consent from participants. Deception is allowed but
must be minimized, and participants must be informed of it after the experiment is over.
Each research organization's Institutional Review Board oversees the process of
approving research. Animal research is allowed, but researchers must treat the animals with
respect and dignity.
