During the early 1960s, schools in the developed world introduced computers in
the hope that they might assist learning (Kulik & Kulik, 1991). Studies of the effects of computer-based instruction were conducted at the time, but there was not enough consolidation of the data to determine whether the integration was successful (Niemiec & Walberg, 1992). Today,
other technologies such as digital projectors, the Internet, websites, podcasts, software,
SMART Boards, iPods, iPads, laptops, and tablets are in classrooms. All of these
technologies are expensive, and school administrators must decide on purchases with little evidence that they improve learning (Kirkpatrick & Cuban,
1998). In 2009, the United States Congress allocated $650 million for educational
technology (ET) through a program called Enhancing Education Through Technology
(Cheung & Slavin, 2013). By buying ET for schools without knowing exactly how it improves learning, administrators and policy makers could very well have been wasting money that could be put to better use elsewhere in the school. The purpose of this essay is to review seven papers written on the effectiveness of ET in a learning environment, to gain an understanding of how it has been studied to date. There is still a need for a standardized study that successfully measures the effectiveness of ET. Each of the following papers is summarized and followed by a critique. The critique will consider whether the authors clearly established the relevance of their work, how well the research was done, and the value of their findings.
The first study, The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis (Cheung & Slavin, 2013), is the most recent paper on ET and has the most thorough approach of all the articles reviewed. The authors performed a meta-analysis using studies that met high
Walberg (1992) essentially took the average of all 13 reviews and reported that number as
their overall effect size.
Although Niemiec and Walberg (1992) were correct that there was a need for a
compilation of studies, their research methods most likely skewed their final effect size,
making it larger. Other researchers have found that more rigorous studies on the
effectiveness of ET tend to have a lower effect size (Cheung & Slavin, 2013). If Niemiec
and Walberg (1992) had looked at the original studies and calculated effect sizes directly from them, they might have found a smaller overall effect and more accurate results.
Finally, since children learn very differently from adults, separating the studies into different age groups, rather than mixing results from college-age and elementary learners, would have been more effective.
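To make this point concrete, consider a hypothetical calculation (the numbers are invented purely for illustration and are not drawn from any of the reviewed studies). If three original studies report effect sizes of 0.6, 0.3, and 0.2 on samples of 30, 300, and 600 students, a simple unweighted average overstates the pooled effect relative to an average weighted by sample size:

\[
\bar{d}_{\text{unweighted}} = \frac{0.6 + 0.3 + 0.2}{3} \approx 0.37,
\qquad
\bar{d}_{\text{weighted}} = \frac{(30)(0.6) + (300)(0.3) + (600)(0.2)}{30 + 300 + 600} \approx 0.25
\]

Averaging summary figures from 13 reviews, as Niemiec and Walberg (1992) did, behaves like the unweighted calculation: small, less rigorous studies with large effects count as much as large, rigorous studies with small effects, which tends to inflate the final estimate.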
The third study, New Directions in the Evaluation of the Effectiveness of Educational Technology (Heinecke, Milman, Washington & Blasi, 2001), showcases authors with a more holistic view of how to study the effects of ET; it was written to help administrators and policy makers decide how to spend money on ET.
The paper is a discussion of the changing field of studying the effectiveness of ET. The authors begin by examining the evolution of the evaluation of social programs and how it is connected with measuring the effectiveness of ET. The paper concludes with a list of recommendations, including using both qualitative and quantitative measures, conducting longitudinal studies that last several years, conducting studies in the classroom rather than in a lab, observing how teachers use the technology, and focusing on more complex factors that describe the learning process.
The authors establish the relevance of their study by outlining how important ET has become in the learning environment and the need for studies to keep up with current practices. Their research consisted of compiling ideas from 17 other articles, unlike Cheung and Slavin (2013) and Niemiec and Walberg (1992), who performed statistical analyses of different studies.
This paper outlines some valid points, particularly the problems associated with studies that measure improvement in specific skills through computer-based learning (CBL) and computer-based instruction (CBI). These skills are simple, such as basic mathematics (addition or multiplication), and are often attained through drilling. By focusing on the improvement of those skills, researchers do not take into account other factors that contribute to learning, namely variables such as teachers, family, critical thinking, and a student's environment. The authors also repeatedly mention the need for qualitative studies, but other researchers have established that qualitative studies are inferior for determining the effectiveness of ET (Cheung & Slavin, 2013; Niemiec & Walberg, 1992). Although this study presents a good analysis of what needs to be done, it does not provide any concrete solutions. For example, the authors state that there is a need for studies that encompass qualitative and quantitative data and steer away from standardized testing, but they do not explain how a researcher could do that while still showing that the ET is effective.
tried to understand the process happening inside the learner's brain, rather than how the
technology improved a specific skill. Jenkinson (2009) reviews an experiment that she
had previously conducted on whether animation was more effective as a teaching tool
than still pictures. She notes that there was no statistical significance in the results of her
previous study, but when she read the feedback forms from the participants, there was a difference in the students' perception of how they learned. After re-examining her own study, she also reviews several other reports that attempt to determine the effectiveness of multimedia learning. Her review found either no significant difference between standard teaching and multimedia teaching, or mixed results.
Jenkinson (2009) establishes the relevance of her review by demonstrating a lack
of consistency and weak results in previous studies that use a traditional experimental
design model. She points out the need to study multimedia, 3-D and interactive media
because these technologies are already being used in medical schools. Jenkinson (2009)
argues that qualitative data needs to be considered in studies so that the researcher can
gain an understanding of how the learner is thinking, but she also insists that quantitative
data be present to provide validity to the study. This paper takes a detailed look at a few studies, but it does not provide significance statistics; had the author done so, the review would be more valid.
Jenkinson (2009) provides an honest assessment of her own original experiment
by criticizing the evaluation techniques of her previous study: "our research methods were not tightly integrated enough" (Jenkinson, 2009, p. 275). Her evaluation of interactive technologies and multimedia has real-world applications because these are the
technologies that are being installed into classrooms today. Jenkinson (2009) makes some
insightful suggestions such as adopting a flexible evaluative approach and focusing more
on the knowledge process rather than the outcomes, but she does not abandon the use of
statistical significance to determine effectiveness.
The fifth paper reviewed, Computers Make Kids Smarter -- Right? (Kirkpatrick & Cuban, 1998), is relevant because the authors are skeptical of the benefits of ET in the classroom, unlike most other studies. It is halfway between a meta-analysis and a review because the authors analyzed more than 80 studies and provided their opinion on the state of the field, but they did not do any statistical analysis. Their thesis argues that most people are unaware of the effectiveness of ET because studies are either biased or poorly conducted. Kirkpatrick and Cuban (1998) divide the reviewed
papers into their respective categories such as meta-analysis or critical review, and then
give a scorecard for each type of study.
This study addresses real issues in the education system today: the cost of
technology and the reasons why ET is in classrooms. The authors postulate that ET is in classrooms for the wrong reasons, such as advertising and misinformation. Kirkpatrick
and Cuban (1998) seem to have moderately sound research because they are able to
interpret the data in a meaningful way. They even categorize their resources into the
different types of studies. The problem with their paper is that they do not use APA or
MLA style referencing, so it is hard to tell where they got their specific information.
Even though they review over 80 papers, they do not provide any statistical data to back up their claims, unlike Cheung and Slavin (2013).
This study is easy to read and useful to any administrator or policy maker who
needs to make a decision about buying ET for a classroom. The authors provide clear
definitions for anybody who does not know the terms, and the resources are organized in
a way that allows the reader to look up key studies. Finally, the content in the paper
seems believable and realistic, despite the absence of quantitative data.
surviving despite evidence of its effectiveness (Cheung & Slavin, 2013; Kirkpatrick &
Cuban, 1998; Niemiec & Walberg, 1992). The research conducted for this paper was very poor. Molenda (2009) wrote entire sections of his paper without citing any references. His ideas and justifications seem based more on his own opinion than on research. At the beginning of the paper, he stated that he would provide case studies as evidence, but he only provided links to a few case studies in the final paragraphs of the essay. He did not discuss the case studies at all, and he did not provide any kind of statistical data. Many of the ideas discussed do not make practical sense or are not sufficiently elaborated. For example, Molenda (2009) stated that student portfolios would be a good
indicator of success of the implementation of ET into post-secondary institutions. How
would a researcher compare something as complicated and abstract as a portfolio? How
could one rule out the influence of outside factors so that the improvement of portfolios
could be attributed to the ET?
The final paper reviewed, iPods, iPads, and the SMARTBoard: Transforming literacy instruction and student learning (Saine, 2012), is an example of the seriously flawed methodology that other researchers have described (Cheung & Slavin, 2013; Kirkpatrick & Cuban, 1998). This case-based study relies entirely on qualitative information to make its point: that ET is changing and improving the way
students learn. The author interviewed four teachers, one Nigerian and three Americans.
Each teacher described a lesson or unit where they used a specific ET to assist in
learning.
Saine (2012) marginally establishes the relevance of her study because she attempts to understand newer types of technology, ones that are currently being installed
into classrooms. The research in this study was exceptionally poor. The author had a very small sample group, in which each teacher described only one lesson or mini-unit that
utilized technology. Interestingly, one of the teachers was Nigerian, but the author did not
discuss the ramifications of studying culturally different groups.
Saine (2012) did not interview a proper cross-section of teachers; she could have selected classes from various socioeconomic status (SES) levels in one country, or equal numbers of teachers from two different countries. The author also did
not interview any of the students to gain insights on how technology helped them, or on
the learning process as Jenkinson (2009) suggested. The study did not contain any
controls, pre-tests, or post-tests, yet the author claimed the ET to be successful. Instead, Saine (2012) relied on the teachers' opinions of how the ET helped:
Overall, all students were able to retain facts about
their self-selected country over a long period of time because
they continued to revisit, talk about and listen to the podcast
they made. Usually students quickly forget about the content of
previous projects once we move on to a different topic or
content. But not this time! (Saine, 2012, p. 75)
If something is claimed as a success, the author needs to provide some kind of
evidence. Using a teacher's opinion does not constitute evidence. At the very least, there has to be some kind of assessment of the students' performance before and after the technology is applied. By not interviewing the students, the researcher missed some crucial information and subjected the study to the teachers' biases.
References
Cheung, A. C. K., & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review, 9, 88-113. doi:10.1016/j.edurev.2013.01.001
Heinecke, W. F., Milman, N. B., Washington, L. A., & Blasi, L. (2001). New directions in the evaluation of the effectiveness of educational technology. Computers in the Schools, 18(2-3), 97-110. doi:10.1300/J025v18n02_07
Jenkinson, J. (2009). Measuring the effectiveness of educational technology: What are we attempting to measure? Electronic Journal of e-Learning, 7, 273-280. Retrieved from www.ejel.org
Kirkpatrick, H., & Cuban, L. (1998). Computers make kids smarter -- Right? Technos Quarterly, 7, 26-31. Retrieved from http://www.ait.net/technos/tq_07/2cuban.php
Kulik, C. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75-94. doi:10.1016/0747-5632(91)90030-5
Molenda, M. (2009). Instructional technology must contribute to productivity. Journal of Computing in Higher Education, 21, 80-94. doi:10.1007/s12528-009-9012-9
Niemiec, R. P., & Walberg, H. J. (1992). The effects of computers on learning. International Journal of Educational Research, 17, 99-108. doi:10.1016/0883-0355(92)90045-8
Saine, P. (2012). iPods, iPads, and the SMARTBoard: Transforming literacy instruction and student learning. The New England Reading Association Journal, 47, 74-79. Retrieved from http://www.nereading.org/