
Introduction

The first article, entitled “Peer Assessment of Oral Presentation Skills”, was written by De Grez, Valcke and Berings and published on 6 May 2010. The purpose of this research is to investigate the reliability and validity of peer assessments of oral presentation skills. The second article, “The Effect of Peer Assessment on Oral Presentation in an EFL Context”, was written by Saedeh Ahangiri, Behzad Rassekh-Alqol, and Leila Ali Akbari Hamed and published on 1 May 2013.

The objective of this second article is to examine the effect of peer assessment on the oral presentations of Iranian non-English-major students. To sum up, the first article is about the correlation between presentation performance and self-efficacy in assessing oral presentations, whereas the second article is about the potential of peer assessment to improve oral presentations in an EFL context. Both articles therefore address a similar issue: peer assessment of oral presentations.

Methodology

In the first article, questionnaires were used to gather data from 95 students who assessed the oral presentations of their peers. The questionnaire is based on nine criteria: three content-related criteria, five delivery-related criteria and one overall criterion. Most of the participants took on both roles, as assessee and as assessor. A one-way analysis of variance (ANOVA) was used to investigate the structure and reliability of the two components of presentation skills, namely content and delivery. Two scales were constructed, one for self-efficacy related to content and one for self-efficacy related to delivery, in order to understand the relationship between self-efficacy and “given” and “received” peer assessment scores.
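The relationship between a student's self-efficacy and the peer assessment scores they received is typically quantified with a Pearson correlation coefficient. A minimal sketch using only Python's standard library; the score lists are invented for illustration and are not the study's data:

```python
import statistics
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: each assessee's self-efficacy rating and the mean
# peer assessment score they received (illustrative values only).
self_efficacy = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]
received_score = [6.5, 7.8, 6.0, 7.4, 8.2, 6.3]

r = pearson_r(self_efficacy, received_score)
print(round(r, 3))  # a value near 1 indicates a strong positive correlation
```

A significant positive r, as reported in the first article, means students who rated their own presentation ability higher also tended to receive higher peer scores.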

In the second article, two groups, a control group and an experimental group, were set up, each consisting of 26 students. In the control group, the participants were separated into two subgroups and given two topics each week for their presentations. The students’ presentations were assessed by the researcher based on the teacher’s assessment questionnaire. In the experimental group, the participants were divided into groups of five or six, which were also given two topics for their presentations. From week two to week six, the peer assessments were guided by the researcher, and during this time the experimental groups practiced their peer assessment. From week seven to week fourteen, the peer assessments were done entirely by the students, who filled in the questionnaires themselves.

From these two articles, we find that both use questionnaires as their data-gathering method. However, the way the questionnaires are implemented differs: in the first article, peer assessments are done without any prior practice or guidance, while in the second article the control group’s oral presentations were assessed by the researcher, and the experimental group received practice and guidance from the researcher for a few weeks before being given the freedom to carry out peer assessment on its own.

In our opinion, the first article’s methodology benefits from a larger number of participants, which improves the credibility of the results, but the peer assessment is based on the participants’ general views, which may lead to an improper conception of peer assessment. The second article has fewer participants, but its methodology is more suitable because the experimental group was guided in, and practiced, the proper way of conducting peer assessment, which helps the students give more reliable peer assessments.

Therefore, we prefer the methodology of the second article, as two groups were formed to allow a comparison of whether the peer assessment is biased or unreasonable. When answering questionnaires, some respondents find it hard to decide which answer or scale point is most appropriate, so a little guidance from the researcher is needed to ensure that the students can make a more reliable assessment of their peers’ oral presentations.

Results

For the first article, the results show that the two subscales of the assessment instrument, content and delivery, are clearly interrelated but correlate even more strongly with the overall evaluative item called ‘professionalism’. Professionalism is the item with the highest score, and a two-tailed test shows that it correlates more strongly with delivery than with content. The moderate eta² values for the assessor suggest that the evaluation results are at least partially biased by assessor characteristics. From the assessee’s point of view, there is a significant positive correlation between the assessee’s self-efficacy and the assessment scores. Students held a positive view of the learning process and of the use of peer assessment, and reported that they learned a lot. Finally, the answers to the questionnaire revealed a very positive attitude towards peer assessment.

For the second article, a t-test was first run on the students’ TOEFL scores to test the homogeneity of the two groups; it showed no significant difference between the control and experimental groups with respect to their TOEFL scores. For the data recorded in week six, which included the teacher and peer assessments obtained from the experimental group, Pearson correlation coefficients were calculated. The high correlation between the experimental group students’ peer assessments and the teacher’s ratings shows that the peer assessment was in high agreement with the teacher’s assessment, suggesting that the students were able to make judgments of their peers’ oral presentations comparable to those made by the teacher. A one-sample t-test was also run on the week-six data from the experimental group to examine to what extent peer assessment enabled students to make sound judgments of their peers’ oral performance: the researchers randomly selected one of the participants and recorded how the other students assessed that student’s oral presentation. This test confirmed that the students were able to assess their peers reasonably. Finally, to find out whether the treatment had an effect on the oral performance of the experimental group, an independent t-test was carried out on the final scores of both groups. The results indicated a significant difference between the ratings the students obtained, suggesting that peer assessment had a significantly positive effect on the oral presentations of the students receiving the treatment.
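The independent t-test comparing the two groups’ final scores can be sketched with Welch’s formula using only Python’s standard library; the score lists below are invented for illustration (and shortened from the study’s 26 students per group) and are not the actual data:

```python
import statistics
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

# Hypothetical final presentation scores (illustrative values only).
experimental = [16, 17, 15, 18, 17, 16, 18, 17]
control = [13, 14, 12, 15, 13, 14, 12, 13]

t = welch_t(experimental, control)
print(round(t, 2))  # a large |t| points to a significant group difference
```

In practice the t statistic is compared against a critical value (or converted to a p-value) at a chosen significance level; a significant result, as in the second article, indicates that the group means genuinely differ.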

Based on the findings of these two articles, the first article shows a positive correlation between content and delivery in peer assessment and a positive effect on improving peer assessment and oral presentation, although the evaluation results are somewhat biased by assessor characteristics. On the other hand, in the second article, the researcher’s guidance during the first six weeks helped the students make proper peer assessments of their peers’ oral presentations. Two groups were formed to allow a comparison between the researcher’s assessment of the control group and the peer assessment in the experimental group, which helps to improve the reliability of the data. To sum up, the two articles’ findings differ because the focus of the issues is not the same; however, there are undeniable similarities between them, as both sets of findings show an improvement of peer assessment skills.

In our opinion, the findings of the second article are preferable, as the outcomes are comparable because two groups were used in the study. The peer assessment is also less likely to be subjective, as the students understand the standard of assessment. Furthermore, the explanation of the results in the second article is easier to follow than in the first, as the statistical analysis is well presented, well organized and more comprehensible for readers. Thus, we prefer the second article.

Conclusion

Based on the first article, it can be concluded that the assessment instrument has good internal consistency and validity in line with the underlying components of content and delivery. Because students’ perception of the instruction is important, it is reassuring that their perceptions of the use of peer assessment are very positive. We can conclude from this study that the psychometric characteristics of the assessment instrument and the perception of peer assessment justify the use of this rubric in further research and in teaching and learning practice. Assessors with a high self-efficacy level tend to give more extreme and higher scores.

In the second article, the results demonstrate that under certain circumstances students are able to assess each other’s oral language ability convincingly. It can be concluded from the results that the students reached a level of assessment analogous to that of the teacher. Besides, the students were open to participating in this peer assessment practice. Consequently, the procedure did not lower the standards, and by taking part in this study the students improved their understanding of and attitude towards assessment. The correlations between teacher and peer assessments are substantially high, which supports the hypothesis that peer assessment enhances learners’ ability to make judgments of their peers’ oral presentation skills comparable to those of the teacher.

Recommendation

The first article recommends that future research also study the learning effect of peer assessment on subsequent oral presentations and examine whether skills are enhanced by observing peers. Future studies could build on intervention studies to determine causal relations. As for the second article, it suggests that if peers can be engaged in the task of assessment, teachers’ time could be used more efficiently on matters associated with improving their teaching methods.
