
Running head: MEASURING THE EFFECTIVENESS OF EDUCATIONAL TECHNOLOGY

Measuring the Effectiveness of Educational Technology: A Critical Review Essay


Stephen Lerch
75310052
ETEC 511 64A
UBC
Dr. Stephen Petrina
November 30th, 2013


During the early 1960s, schools in the developed world introduced computers in
the hopes they might assist learning (Kulik & Kulik, 1991). Studies of the effects of
computer-based instruction were conducted at the time, but there was not enough consolidation
of the data to know if the integration was successful (Niemiec & Walberg, 1992). Today,
other technologies such as digital projectors, the Internet, websites, podcasts, software,
SMART Boards, iPods, iPads, laptops, and tablets are in classrooms. All of these
technologies are expensive and administrators in charge of schools must decide on
purchases with little evidence of an improvement to learning (Kirkpatrick & Cuban,
1998). In 2009, the United States Congress allocated $650 million for educational
technology (ET) through a program called Enhancing Education Through Technology
(Cheung & Slavin, 2013). By buying ET for schools without knowing exactly how it
improved learning, administrators and policy makers could very well have been wasting
money that could be put to better use elsewhere in the school. The purpose of this essay is
to review seven papers written on the effectiveness of ET in a learning environment,
in order to understand how it has been studied to date. There is still a need for a
standardized study design that successfully measures the effectiveness of ET. Each of the
following papers is summarized and then critiqued. The critique addresses
whether the authors clearly established relevance, how well the research was conducted,
and the value of their findings.
The first study, The effectiveness of educational technology applications for
enhancing mathematics achievement in K-12 classrooms: A meta-analysis (Cheung &
Slavin, 2013), is the most recent paper on ET and took the most thorough approach of all
the articles reviewed. The authors performed a meta-analysis using studies that met high
methodological standards and focused on mathematics. They acknowledged that many
previous studies suffered from serious problems, such as "a lack of a control group,
limited evidence of initial equivalence between the treatment and control group, large
pre-test differences, or questionable outcome measures" (Cheung & Slavin, 2013, p. 92),
so they eliminated those studies from their analysis. The authors calculated the
difference between the experimental and control groups in each study, expressing it as an
effect size. They fed those numbers into meta-analysis software to compute an overall
effect size of ES = +0.15 for the improvement in learning from ET over traditional
methods of teaching (Cheung & Slavin, 2013).
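To make the effect size calculation concrete, the following short Python sketch shows how a standardized mean difference is computed for one study and how several studies might be pooled. All figures are invented, and the simple sample-size weighting stands in for the inverse-variance weighting that meta-analysis software, such as that used by Cheung and Slavin (2013), typically applies.

    import math

    def effect_size(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Standardized mean difference between a treatment and a control group."""
        # Pooled standard deviation across the two groups
        sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                              / (n_t + n_c - 2))
        return (mean_t - mean_c) / sd_pooled

    def pooled_effect(studies):
        """Combine per-study effect sizes, weighting each by its sample size
        (a simplification of the inverse-variance weighting used in practice)."""
        total_n = sum(n for _, n in studies)
        return sum(es * n for es, n in studies) / total_n

    # One hypothetical study: treatment mean 78 vs. control mean 74
    print(round(effect_size(78.0, 74.0, 10.0, 10.5, 60, 60), 2))  # prints 0.39

    # Hypothetical studies: (effect size, combined sample size)
    studies = [(0.25, 120), (0.05, 300), (0.18, 80)]
    print(round(pooled_effect(studies), 2))  # prints 0.12

Larger, better-powered studies pull the pooled estimate toward their results, which is one reason a rigorous meta-analysis can report a smaller effect than a simple average of its inputs would.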
Cheung and Slavin (2013) established relevance easily by clearly explaining the
reasons for the study. They observed the amount of money spent on ET and pointed out
that the flaws of various other meta-analyses demonstrated the need for a more rigorous analysis.
The research done by Cheung and Slavin (2013) was very sound: they clearly defined ET,
they had strict criteria for study inclusion, their reasoning was appropriate, their
calculation methods were thorough and well explained, and their results were clear. It was
an excellent study.
One of the criticisms of many studies on ET is that they do not have focus
(Kirkpatrick & Cuban, 1998), but by analyzing the effects of ET in one subject, Cheung
and Slavin (2013) managed to create this focus. They picked mathematics, which is less
abstract than other subjects, making student progress easier to measure. The defining
feature of this study was its comprehensiveness. Not only was the research
excellent, but they also took the care to explain their reasoning, the results, the math
behind their calculations, and the limitations of their study.


I chose The Effects Of Computers On Learning (Niemiec & Walberg, 1992) as my
second paper because it is another meta-analysis that compiles the results of many
studies, but the results of this study vary from what Cheung and Slavin (2013) found.
Niemiec and Walberg (1992) compiled and calculated the results of thirteen reviews,
which together covered 250 individual studies of the effectiveness of computer-based
instruction (CBI) and computer-based learning (CBL). They found a much larger effect
than Cheung and Slavin (2013), at +0.42 standard deviations.
Niemiec and Walberg (1992) established the relevance of their study by arguing
that there are so many studies on the effectiveness of CBI that it was time for an analysis
of the data. CBI and CBL are also technologies that are in many schools around the world
and thus need to be studied. Niemiec and Walberg's (1992) research was not as
comprehensive as that of Cheung and Slavin (2013). The effect size found in this
review was much higher for several possible reasons. First, they did a review of reviews:
Cheung and Slavin (2013) conducted a direct review of 74 studies, whereas Niemiec and
Walberg (1992) reviewed 13 reviews of between 4 and 101 studies each. By not directly
reviewing the actual studies, Niemiec and Walberg (1992) allowed the biases and
mistakes of the original reviewers to become amplified in their own results. Second, the
different reviews used dissimilar methods to arrive at their conclusions. Two
of those reviews used a method called vote counting, which is essentially "the tallying of
the results of the studies" (Niemiec & Walberg, 1992, p. 101). Vote counting has been
deemed ineffective by many researchers (Cheung & Slavin, 2013). The rest of the
meta-analyses used other methods of evaluation to produce their results. Niemiec and
Walberg (1992) essentially took the average of all 13 reviews and reported that number as
their overall effect size.
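The weakness of vote counting, compared with effect-size pooling, can be illustrated with a small hypothetical example: counting only the direction of each result throws away the magnitudes and sample sizes. The effect sizes below are invented.

    # Hypothetical effect sizes from five studies of the same intervention.
    effects = [0.50, 0.02, 0.01, -0.03, 0.04]

    # Vote counting: tally the direction of each result and report the count.
    positive = sum(1 for es in effects if es > 0)
    print(f"{positive} of {len(effects)} studies favor the treatment")

    # Averaging effect sizes keeps the magnitudes that vote counting discards.
    print(f"mean effect: {sum(effects) / len(effects):+.2f}")  # prints +0.11

Here vote counting reports that four of five studies favor the treatment, even though all but one of the individual effects are negligible.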
Although Niemiec and Walberg (1992) were correct that there was a need for a
compilation of studies, their research methods most likely skewed their final effect size,
making it larger. Other researchers have found that more rigorous studies on the
effectiveness of ET tend to have a lower effect size (Cheung & Slavin, 2013). If Niemiec
and Walberg (1992) had looked at the original studies and calculated effect sizes directly
from them, they might have found a smaller effect and more accurate results.
Finally, since children learn in a much different way than adults do, separating the studies
into different age groups, rather than mixing results from college-age and elementary
learners, could have been more effective.

The third study, New Directions in the Evaluation of the Effectiveness of
Educational Technology (Heinecke, Milman, Washington & Blasi, 2001), showcases
authors with a more holistic view of how to study the effects of ET; it was written with
the idea of helping administrators and policy makers decide how to spend money on ET.
The paper is a discussion on the changing field of studying the effectiveness of ET. The
authors began by examining the evolution of the evaluation of social programs and how it
is connected with measuring the effectiveness of ET. The paper concludes with a list of
recommendations: using both qualitative and quantitative measures, running longitudinal
studies that last several years, conducting studies in the classroom rather than in a lab,
observing how teachers use the technology, and focusing on the more complex factors
that describe the learning process.


The authors establish the relevance of their study by outlining how important ET
has become in the learning environment, and the need for studies to keep up with
current practices. Their research consisted of compiling ideas from 17 other articles,
unlike Cheung and Slavin (2013) and Niemiec and Walberg (1992), who performed
statistical analyses of the underlying studies.
This paper outlines some valid points, particularly the problems associated with
studies measuring improvement in specific skills through CBL and CBI. These skills are
simple, such as mathematics (addition or multiplication), and often attained through
drilling. By focusing on the improvement of those skills, researchers do not take into
account other factors that contribute to learning, namely variables such as teachers,
family, critical thinking, and a student's environment. The authors also repeatedly
mention the need for qualitative studies, but other researchers have established that
qualitative studies are inferior for determining the effectiveness of ET (Cheung & Slavin,
2013; Niemiec & Walberg,1992). Although this study presents a good analysis of what
needs to be done, it does not provide any concrete solutions. For example, the authors
state that there is a need for studies that encompass qualitative and quantitative data and
steer away from standardized testing, but they do not explain how a researcher
would do that while still showing that the ET is effective.

The fourth paper, Measuring the Effectiveness of Educational Technology: what
are we Attempting to Measure? (Jenkinson, 2009), was an attempt to understand the
more complex aspects of learning with ET, particularly the effects of virtual 3-D
simulations, multimedia, and interactive media. The value of this study is that the author
tried to understand the process happening inside the learner's brain, rather than how the
technology improved a specific skill. Jenkinson (2009) reviews an experiment that she
had previously conducted on whether animation was more effective as a teaching tool
than still pictures. She notes that there was no statistically significant difference in the
results of her previous study, but when she read the feedback forms from the participants,
there was a difference in the students' perception of how they learned. After re-examining
her own study, she also reviews several other reports that attempt to determine the
effectiveness of multimedia learning. These reports found either no significant difference
between standard teaching and multimedia teaching, or mixed results.
Jenkinson (2009) establishes the relevance of her review by demonstrating the lack
of consistency and the weak results in previous studies that used a traditional experimental
design model. She points out the need to study multimedia, 3-D and interactive media
because these technologies are already being used in medical schools. Jenkinson (2009)
argues that qualitative data needs to be considered in studies so that the researcher can
gain an understanding of how the learner is thinking, but she also insists that quantitative
data be present to provide validity to the study. This paper takes a detailed look at a few
studies, but it does not provide summary statistics; if the author had done so, the review
would be more convincing.
Jenkinson (2009) provides an honest assessment of her own original experiment
by criticizing the evaluation techniques of her previous study: "our research methods
were not tightly integrated enough" (Jenkinson, 2009, p. 275). Her evaluation of
interactive technologies and multimedia has real-world applications because these are the
technologies that are being installed into classrooms today. Jenkinson (2009) makes some
insightful suggestions such as adopting a flexible evaluative approach and focusing more
on the knowledge process rather than the outcomes, but she does not abandon the use of
statistical significance to determine effectiveness.

The fifth paper reviewed, Computers Make Kids Smarter -- Right? (Kirkpatrick
& Cuban, 1998), is relevant because the authors are skeptical of the
benefits of ET in the classroom, unlike most other studies. It is halfway between a
meta-analysis and a review: the authors analyzed more than 80 studies and provided
their opinion on the state of the field, but they did not do any statistical analysis. Their
thesis argues that most people are unaware of the true effectiveness of ET because
studies are either biased or poorly conducted. Kirkpatrick and Cuban (1998) divide the
reviewed papers into categories, such as meta-analysis or critical review, and then
give a scorecard for each type of study.
This study addresses real issues in the education system today: the cost of
technology and the reasons why ET is in classrooms. They postulate that ET is in
classrooms for the wrong reasons, such as advertising and misinformation. Kirkpatrick
and Cuban (1998) seem to have moderately sound research because they are able to
interpret the data in a meaningful way. They even categorize their resources into the
different types of studies. The problem with their paper is that they do not use APA or
MLA style referencing, so it is hard to tell where they got their specific information.
Even though they review over 80 papers, they do not provide any statistical data to back
up their claims, as Cheung and Slavin (2013) did.


This study is easy to read and useful to any administrator or policy maker who
needs to make a decision about buying ET for a classroom. The authors provide clear
definitions for anybody who does not know the terms, and the resources are organized in
a way that allows the reader to look up key studies. Finally, the content of the paper
seems believable and realistic, despite the absence of quantitative data.

Instructional Technology must contribute to productivity (Molenda, 2009) is
a discussion of the survivability of ET. The paper provides some insight into the
need for ET, and it focuses on technology use in post-secondary institutions, something
that none of the other papers reviewed does. The author's thesis argues that if instructional
technology is to survive, it must contribute to the improvement of academic productivity,
defined as having a successful cost-benefit ratio. In his abstract, the author stated that he
would provide documented cases demonstrating how the application of ET has led to
increased productivity. Molenda (2009) spent most of his paper discussing different
aspects of ET in higher education: how important ET is, the need for high productivity,
effectiveness, and efficiency, and how to select and design ET projects. On the last page
of his paper, he provided some links: one is broken, one is the homepage of the Math
Emporium at Virginia Tech, and one is the homepage of the National Center for
Academic Transformation. None of the links leads to the specific case studies that the
author promised to deliver.
Molenda (2009) did not establish the relevance of his study very well. He stated
that technology needs to survive in an educational environment, but he did not explain
why it would survive. In fact, from most of the studies read, it appears that technology is
surviving despite a lack of evidence of its effectiveness (Cheung & Slavin, 2013;
Kirkpatrick & Cuban, 1998; Niemiec & Walberg, 1992). The research conducted for this
paper was very poor. Molenda (2009) wrote entire sections without citing any
references, and his ideas and justifications seem based more on his own opinion than on
research. At the beginning of the paper, he stated that he would provide case studies as
evidence, but he only provided links to some case studies in the last paragraphs
of the essay. He did not discuss the case studies at all, and he did not provide any kind of
statistical data. Many of the ideas discussed either do not make practical sense or need
elaboration. For example, Molenda (2009) stated that student portfolios would be a good
indicator of success of the implementation of ET into post-secondary institutions. How
would a researcher compare something as complicated and abstract as a portfolio? How
could one rule out the influence of outside factors so that the improvement of portfolios
could be attributed to the ET?
The final paper reviewed is iPods, iPads, and the SMARTBoard:
Transforming literacy instruction and student learning (Saine, 2012). It is an example
of a study with the kind of seriously flawed methodology that other researchers have
described (Cheung & Slavin, 2013; Kirkpatrick & Cuban, 1998). This case-based study
relies entirely on qualitative information to make its point: that ET is changing and
improving the way students learn. The author interviewed four teachers, one Nigerian
and three Americans.
Each teacher described a lesson or unit where they used a specific ET to assist in
learning.
Saine (2012) only marginally established the relevance of her study: she attempts
to understand newer types of technology, ones that are currently being installed
into classrooms. The research in this study was exceptionally poor. The author had a very
small sample, in which each teacher described only one lesson or mini-unit that
utilized technology. Interestingly, one of the teachers was Nigerian, but the author did not
discuss the ramifications of studying culturally different groups.
Saine (2012) did not interview a proper cross-section of teachers; she could have
selected classes from different socioeconomic status (SES) levels in one
country, or equal numbers of teachers from two different countries. The author also did
not interview any of the students to gain insight into how the technology helped them, or
into the learning process, as Jenkinson (2009) suggested. The study did not contain any
controls, pre-tests, or post-tests, yet the author claimed the ET to be successful. Instead,
Saine (2012) relied on the teachers' opinions of how the ET helped:
Overall, all students were able to retain facts about
their self-selected country over a long period of time because
they continued to revisit, talk about and listen to the podcast
they made. Usually students quickly forget about the content of
previous projects once we move on to a different topic or
content. But not this time! (Saine, P., 2012, p. 75).
If something is claimed as a success, the author needs to provide some kind of
evidence, and a teacher's opinion alone does not constitute evidence. At the very least,
there has to be some kind of assessment of the students' performance before and after the
technology is applied. By not interviewing the students, the researcher missed
crucial information and subjected the study to the teachers' biases.
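As a hypothetical illustration of the minimum evidence this would require, a paired pre-test/post-test comparison might look like the following Python sketch; the scores, sample size, and any apparent gain are invented.

    from scipy import stats

    # Hypothetical test scores for the same eight students, recorded
    # before and after the technology was introduced.
    pre = [62, 55, 70, 58, 64, 49, 73, 60]
    post = [68, 59, 71, 66, 63, 57, 78, 65]

    gains = [b - a for a, b in zip(pre, post)]
    print(f"mean gain = {sum(gains) / len(gains):.1f} points")

    # Paired t-test: is the within-student improvement statistically reliable?
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Even this minimal design would not rule out outside factors such as ordinary maturation over the school year; a control group taught without the ET would still be needed, which is precisely the standard Cheung and Slavin (2013) enforced.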


After reading a cross-section of different types of studies, it is clear that there is
very little consistency in how to measure the effectiveness of ET. The most
methodologically sound study was by Cheung and Slavin (2013), but they only studied
the effects of ET on specific, drillable skills in mathematics. Although many authors
called for more qualitatively based studies (Jenkinson, 2009; Heinecke, Milman,
Washington & Blasi, 2001; Molenda, 2009), they failed to find a way to demonstrate
how qualitative analysis could show the effectiveness of ET. If researchers could conduct
studies to the quantitative standard that Cheung and Slavin (2013) established, but
apply them to a broader range of subjects, adding a small qualitative component
such as a questionnaire at the end of the study, then there would be more success in
determining the effectiveness of ET in the classroom.


References
Cheung, A. C. K., & Slavin, R. E. (2013). The effectiveness of educational technology
applications for enhancing mathematics achievement in K-12 classrooms: A
meta-analysis. Educational Research Review, 9, 88-113.
doi:10.1016/j.edurev.2013.01.001
Jenkinson, J. (2009). Measuring the Effectiveness of Educational Technology: what are
we Attempting to Measure? Electronic Journal of e-Learning, 7, 273-280.
Retrieved from www.ejel.org
Kirkpatrick, H., & Cuban, L. (1998). Computers Make Kids Smarter -- Right? Technos
Quarterly, 7, 26-31. Retrieved from http://www.ait.net/technos/tq_07/2cuban.php
Kulik, C. C., & Kulik, J. A. (1991). Effectiveness of Computer-Based Instruction: An
Updated Analysis. Computers in Human Behavior, 7, 75-94.
doi:10.1016/0747-5632(91)90030-5
Heinecke, W. F., Milman, N. B., Washington, L. A., & Blasi, L. (2001). New Directions
in the Evaluation of the Effectiveness of Educational Technology. Computers in the
Schools, 18(2-3), 97-110. doi:10.1300/J025v18n02_07
Molenda, M. (2009). Instructional Technology must contribute to productivity. Journal of
Computing in Higher Education, 21, 80-94. doi:10.1007/s12528-009-9012-9
Niemiec, R. P., & Walberg, H. J. (1992). The Effects Of Computers On Learning.
International Journal of Educational Research, 17, 99-108.
doi:10.1016/0883-0355(92)90045-8
Saine, P. (2012). iPods, iPads, and the SMARTBoard: Transforming literacy instruction
and student learning. The New England Reading Association Journal, 47, 74-79.
Retrieved from http://www.nereading.org/
