
Assessment Data and Analysis:

This report will discuss the overall success rate of the students in my first period Short
Fiction class at Poudre Community Academy for the 3rd quarter of the 2014-2015 school year.
What will follow will be a detailed analysis of the growth recorded between January 6 and
March 12 of this year. As PCA does not require unit, but rather only quarterly assessment data, it
should be mentioned that the data used for this analysis represents both the first and second unit
covered during the quarter as opposed to only being representative of the unit detailed in my
submitted unit plan. Still, it is my position that due to the nature of the exams from which this
data was taken, it is a legitimate and reliable means by which to understand student learning
trends during the first unit. I must also mention that at PCA the pre-test scores are used purely
for data analysis purposes and have no impact on the students' final grades. This is a well-known
fact among the student population, and as a result it is not unusual for students to do little more
than sign their names to the tests and turn them back in. However, while students may not put
forth their greatest effort when taking these exams, I will personally attest that all students who
were present for the exam attempted to answer the questions they could, and not a single test
was returned completely blank.
It will be noted that the two assessments used for this analysis are not identical. They are,
however, parallel: the former is geared toward assessing only pre-existing student knowledge
and understanding, while the latter focuses on students' ability to transfer and apply the
knowledge and understandings they gained during the unit and quarter through the close reading
and careful analysis of a work of short fiction that was new to the students. The pre-assessment exam
was designed to supply me a gauge of student familiarity with short stories with a focus on the
major elements of form. It consisted of 6 short answer questions worth 5 points each to equal a
total possible score of 30. Questions on the exam tested students on the history of the short story,
its form in contrast to other prose forms (such as the novel), knowledge of major writers of short
stories and their contributions to the form, as well as the purpose for and function of stories in
our lives. Given the nature of these questions, they did not have decidedly right and
wrong answers on which to base scoring; instead, scoring was based on three criteria:
completeness (did the answer fully address all parts of the question?), thoroughness (did the
student supply sufficient information and/or evidence to answer the question and justify the
answer?), and understanding (does the answer sufficiently demonstrate understanding, or does
it simply reiterate ideas?). Points were also lost for failure to follow the instructions and respond
in complete sentences; bulleted lists were not acceptable.
The post-assessment final exam was of an entirely different design; although it functioned
to assess the students on the same information, it required that they demonstrate
understanding through the application of skills and knowledge acquired over the course of the
units of study. The final consisted of three parts: short answer, a TWIST analysis, and a final
short essay response. There were a total of 8 short-answer questions worth either 3 or 5 points
each, depending on the complexity of the question; the TWIST analysis consisted of ten
individual parts (one for each element and its corresponding textual evidence), each worth 2 points;
and the short essay consisted of three connected questions, each worth 3 points.
Totaled, the exam offered 65 possible points. Again, points were awarded based on the same
criteria as the pre-assessment (completeness, thoroughness, and understanding), with deductions
possible for not responding in complete sentences where instructed.
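As a quick sanity check on the point breakdown described above, the points available on the short-answer portion can be derived from the stated totals; a minimal Python sketch using only figures given in this report:

```python
# Point breakdown for the post-assessment, per the description above.
twist = 10 * 2   # ten TWIST parts worth 2 points each
essay = 3 * 3    # three essay questions worth 3 points each
total = 65       # stated total for the exam

# Whatever remains must come from the 8 short-answer questions.
short_answer = total - twist - essay
print(short_answer)  # -> 36
```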
Of the 13 students who began and finished the class, 11 completed both the pre- and post-
assessments, with 2 students missing one or the other. One student missed the pre-assessment
due to not being enrolled in the class on the first day, when the test was issued. The student who
missed the post-assessment final was absent due to the death of a family member and was
excused from taking the exam. For the purposes of this analysis, these two students will not be
included in this report. What remains are measurable scores on both the pre- and
post-assessments for 11 students.

First we will examine the scores on the pre-test assessment. The results for the class are
as follows:

[Chart: pre-assessment scores for the whole class (Students 1-11), 0-12 point scale]

Out of thirty possible points, the average score for the class as a whole was 4.27, with the highest-
scoring student receiving 11 points and the lowest receiving a zero. Converted to a traditional
100-point scale, this means that the class average was 14.2%, with the highest-scoring
student achieving 36.7%.
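The raw-to-percentage conversion used throughout this report is straightforward; a minimal Python sketch, using only the averages reported above:

```python
# Convert a raw assessment score to a traditional 100-point scale.
def to_percent(raw_score: float, max_points: float = 30) -> float:
    """Return the score as a percentage of the points possible."""
    return round(raw_score / max_points * 100, 2)

# Figures reported above: class average of 4.27 and a high score of 11,
# both out of the pre-assessment's 30 possible points.
print(to_percent(4.27))  # class average -> 14.23
print(to_percent(11))    # highest score -> 36.67
```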
If the class is broken down by gender, we see that the class consists of 4 female and 7
male students with scores divided thusly:

[Chart: pre-assessment scores for the four female students (Students 3, 8, 9, and 10)]

[Chart: pre-assessment scores for the seven male students (Students 1, 2, 4, 5, 6, 7, and 11)]

Taken together, the female students in the class averaged 5.75 points, or 19.2%, while the
male students averaged only 3.43 points, or 11.4%. These percentages suggest that, over the
course of the quarter, the female students can be expected to have the upper hand and fare better
on the final assessment. It should also be noted that the female students on average attempted to
answer more questions on the pre-assessment than did their male classmates, which demonstrates
to me a stronger commitment to personal academic achievement. Still, it could be contended that
this simply means that the female population of the class was more knowledgeable about the
subject matter at the start of the quarter. However, because the students know that these pre-tests
have no effect on their final grades, such statements are merely conjecture and cannot be proven
with empirical evidence.

When the data for the post-assessment exam is considered, it is readily apparent that students
as a whole took the test far more seriously and demonstrated a notable increase in scores,
suggesting a substantial gain in understanding. The scores are as follows:

[Chart: post-assessment scores for the whole class (Students 1-11), 0-70 point scale]

On average, the students scored 46.45 points out of a possible 65. On a 100-point scale, this
equates to an average of 71.46%, an improvement of roughly 57 percentage points over the
pre-assessment average. Based on this, it can be concluded that the class as a whole increased
its demonstrated understanding of the subject matter by nearly 60 percentage points on average.
Individually, the results are even more substantial, with the exception of 2 students, one male
(Student 5) and one female (Student 8). The comparison is represented in the chart below, with
scores for both assessments converted to a 100-point scale.
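The improvement figure can be reproduced from the two reported class averages (small rounding differences aside); a brief Python sketch. Note that the gain is expressed in percentage points, since the two tests have different maximum scores:

```python
# Class averages converted to a common 100-point scale, using the
# figures reported above (pre-test out of 30, post-test out of 65).
pre_pct = 4.27 / 30 * 100
post_pct = 46.45 / 65 * 100

print(round(post_pct, 2))            # post-test average -> 71.46
print(round(post_pct - pre_pct, 1))  # gain in percentage points -> 57.2
```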

[Chart: pre- vs. post-assessment scores for the whole class (Students 1-11), converted to a 100-point scale]

To follow up on my prediction that the female students would fare better overall and demonstrate
the greatest improvement, here is a breakdown of their post-assessment scores, followed by those
of the male students:

[Chart: post-assessment scores for the female students (Students 3, 8, 9, and 10), 0-70 point scale]

[Chart: post-assessment scores for the male students (Students 1, 2, 4, 5, 6, 7, and 11), 0-70 point scale]

From these data sets, the female students' post-test scores averaged 48.5 points and the male
students' averaged 45.3. Despite one significantly low score from Student 8 (who rushed through
the post-assessment and left much of it blank, indicating to me that she put little effort into it),
this confirms that my prediction was indeed right: the female students did do better than the
male students. While this data is consistent with the notion that female students are generally
stronger academic performers than male students, it must be noted that the two averages are not
significantly far apart.
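That closing observation can be made concrete by putting the two reported means on a common 100-point scale; a short Python sketch:

```python
# Group averages on the 65-point post-assessment, converted to a
# 100-point scale, using the means reported above (48.5 and 45.3).
POST_MAX = 65

female_pct = 48.5 / POST_MAX * 100
male_pct = 45.3 / POST_MAX * 100

print(round(female_pct, 1))             # -> 74.6
print(round(male_pct, 1))               # -> 69.7
print(round(female_pct - male_pct, 1))  # gap in percentage points -> 4.9
```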
From this data I am able to claim with confidence that the instructional strategies used
throughout the unit, as well as the quarter, were successful in developing student understanding
and increasing their knowledge base. In the setting of an alternative high school, with a
population of students who have previously struggled with academic success, this represents, I
feel, a substantial accomplishment for both the students and myself as their teacher. The material
covered over the two units these tests represent was decidedly challenging and consisted of
upper-level, near-college-level concepts. Furthermore, the data supports the fact that these
students are equal in critical thinking ability and academic excellence to students in traditional
school settings.
While I attest that this data is strong and that the pre- and post-assessments it represents
were adequately designed to measure student learning and understanding, I do find one major
issue: this data and these assessments do not track development regularly over time. Because of
this, it is quite possible that some areas of learning and understanding have been overlooked and
lost in a muddle of averages. For a set of data to be truly representative of the growth and
development of a student population, it should include numerous pieces of data collected
regularly, at key intervals, over the course of the whole unit or quarter. From these two isolated
sets of scores, any conclusions I draw stand simply as broad generalizations; while they speak
well of end results, they tell me nothing about the individuals and how they developed
throughout the quarter. The data does not indicate times when students struggled or slipped
behind, nor does it highlight times when they excelled. In order to adjust for this in the future, I
will need to develop systematic, regular assessments that will accurately chart student growth
and performance over time and provide a more meaningful data set by which to gauge both the
students and the effectiveness of my own instruction.
