To manipulate the presence/absence of violent stimuli, the researchers used two video games: one with violence and one without.

Positive remarks
Firstly, the participants had no prior experience with playing these video games. Additionally, they were randomly divided into two groups: an experimental group assigned to play the violent video game and a control group assigned to play the non-violent video game.
Secondly, the researchers used a repeated measures procedure
to check the long-term effects of playing a violent video game.
The participants visited the laboratory before the experiment
for baseline measurements, after the first two weeks for the
first measurements and after the end of the fourth week for the
second measurements.
In experiment 2, the task and the questionnaire were the same as those used in the first two measurements.
However, the instructions given to participants about how many hours a week to play the video game and the overall duration of the experiment are described inconsistently. I believe the duration was 4 weeks (4 × 4 = 16).

Critical remarks
The participants were twenty-two right-handed volunteers from local Japanese universities. This creates a high risk of bias, since the results will reflect the characteristics of these particular participants and may overestimate the effects of violent video games.
In experiment 2 the number of participants dropped to 18. This non-response is a source of error in this study, because the missing responses could lead to a systematic over-estimation of the results.
The measurements were carried out in a laboratory and the experiment in a specific setting (the participants' homes), not in natural settings. This means that the observed effect may only hold in these specific settings, which can lead to bias.


Regarding the external validity of the study, the following remarks can be made.
Firstly, there is a setting threat to external validity due to the
artificiality of the research setting, which means the results
cannot be generalized to other environments or situations. A
replication of the study in more natural settings could help
reduce the threat.
There is also a selection threat, due to the overrepresentation of right-handed volunteers from local Japanese universities, that weakens the external validity of this study. What about left-handed, non-Japanese, non-university players of video games? Are they represented in the study on equal terms? If not, then the results are biased.
Again, a replication of the study with different sample elements, or the use of probability sampling, could reduce the selection threat. However, the study does not describe the sampling procedure in enough detail for us to see whether probability sampling was used.
It seems that the sampling method used was convenience sampling, as the most easily accessible elements were selected. As a result, the findings may fail to generalize to all Japanese university students, to other university students, or to people who play video games in general.
The use of only 22 participants also leads to sampling error, which can be reduced by increasing the sample size and keeping the sample variation low.
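
To make the link between sample size and sampling error concrete, here is a minimal simulation sketch in Python; the population values (mean 50, standard deviation 10) are invented purely for illustration and are not taken from the study.

```python
import random
import statistics

# Hypothetical population: a score with mean 50 and standard deviation 10
# (numbers invented purely for illustration).
random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]

def spread_of_sample_means(sample_size, repetitions=2000):
    """Draw many random samples and return the spread (sd) of their means."""
    means = [statistics.mean(random.sample(population, sample_size))
             for _ in range(repetitions)]
    return statistics.stdev(means)

# The sampling error shrinks roughly in proportion to 1/sqrt(n).
for n in (22, 50, 200):
    print(f"n = {n:3d}: spread of sample means ~ {spread_of_sample_means(n):.2f}")
```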

Sampling bias is a systematic form of error: the difference between the sample value and the population value caused by a systematic under- or overrepresentation of certain elements of the population. Sampling bias occurs when some elements have a much smaller or larger chance of being selected than was intended, or when certain elements have no chance of being selected at all.
Suppose we want to estimate the proportion of people that will
vote for candidate A in an election. Sampling bias could occur if
participants were recruited on the street by an interviewer
during working hours. This could lead to an
underrepresentation of people who are employed full-time. If
these people vote for candidate A more often, then we
would systematically underestimate the percentage of votes for
candidate A.
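
To illustrate, here is a rough Python sketch of this street-recruitment scenario; all numbers (group sizes, vote shares, inclusion chances) are hypothetical and chosen only to show the mechanism.

```python
import random

random.seed(42)

# Hypothetical electorate: 60% employed full-time, 40% not.
# Assume full-time workers vote for candidate A more often (70% vs 40%).
population = (
    [("full_time", random.random() < 0.70) for _ in range(60_000)]
    + [("other", random.random() < 0.40) for _ in range(40_000)]
)
true_share = sum(votes_a for _, votes_a in population) / len(population)

def street_sample(pop, size):
    """Recruit on the street during working hours: full-time workers are
    far less likely to be intercepted, so they end up underrepresented."""
    sample = []
    while len(sample) < size:
        group, votes_a = random.choice(pop)
        chance = 0.2 if group == "full_time" else 1.0  # unequal selection chances
        if random.random() < chance:
            sample.append(votes_a)
    return sample

biased = street_sample(population, 1000)
print(f"true share voting for A:  {true_share:.2%}")
print(f"biased street estimate:   {sum(biased) / len(biased):.2%}")  # systematically too low
```
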
The risk of sampling bias is eliminated, at least in the long run, by using a probability sampling method. With non-probability sampling, the risk of sampling bias is high. Sampling bias is
comparable to the systematic error that makes a measurement
instrument less valid, or less accurate.
Non-response is another source of error. Non-response refers
to a lack of response to invitations or the explicit refusal to
participate in a study. Non-response also includes participants
who drop out during the study or participants whose data are
invalid because they did not participate seriously, because something went wrong, or because they did not understand or failed to comply with some aspect of the procedure.
If non-response is random, then you could say that non-
response results in a smaller sample and will thereby slightly
increase the margin of error. But sometimes non-response is
not random. Sometimes specific subgroups in the population
are less likely to participate. If this subgroup has systematically
different values on the property of interest, then non-response
is a source of systematic error.
Suppose people with a lower socioeconomic status are less likely to participate in polls and also prefer other candidates to candidate A. In that case, we are missing responses from people who would not vote for A, which could lead to a systematic overestimation of the percentage of people who will vote for A.
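
A similar rough Python sketch, again with hypothetical numbers, shows how this kind of selective non-response pushes the estimate of candidate A's share upward.

```python
import random

random.seed(7)

# Hypothetical population: 30% lower-SES voters who support A less often
# (25%) than the remaining 70% (55%). All figures are invented.
population = (
    [("lower_ses", random.random() < 0.25) for _ in range(30_000)]
    + [("other", random.random() < 0.55) for _ in range(70_000)]
)
true_share = sum(v for _, v in population) / len(population)

# Everyone has an equal chance of being invited, but lower-SES invitees
# respond only 30% of the time versus 80% for the others.
invited = random.sample(population, 5000)
responses = [v for group, v in invited
             if random.random() < (0.30 if group == "lower_ses" else 0.80)]

print(f"true share voting for A:      {true_share:.2%}")
print(f"estimate after non-response:  {sum(responses) / len(responses):.2%}")  # too high
```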