
RESEARCH REPORT

CHARACTERISTIC OF HIGHER ORDER THINKING SKILLS


ASSESSMENT TO THE ENVIRONMENT LESSON

By:

Kusuma Wardany, S.Pd., M.Pd


0206019101

FACULTY OF SCIENCE AND TECHNOLOGY


UNIVERSITAS NAHDLATUL ULAMA LAMPUNG
2018
TABLE OF CONTENTS

TITLE PAGE
TABLE OF CONTENTS
ABSTRACT
INTRODUCTION
LITERATURE REVIEW
METHODS
    Reliability
    Level of Difficulty
    Distinguishing Points
RESULTS AND DISCUSSIONS
    Results
    Field Trial Results
    Discussions
CONCLUSIONS
LOCATION OVERVIEW
REFERENCES

CHARACTERISTIC OF HIGHER ORDER THINKING SKILLS
ASSESSMENT TO THE ENVIRONMENT LESSON
Kusuma Wardany
Nahdlatul Ulama Lampung University
kusuma.wardany@ymail.com

ABSTRACT
This research aims to determine the characteristics of a Higher Order Thinking Skills
assessment. The instrument was developed following the Borg & Gall development steps.
The draft test was validated by qualified validators and teachers, and then tried out on a
small group of students. The trial results were analysed quantitatively with the Quest
program to determine reliability, difficulty level, discriminating power, and distractor
effectiveness. After revisions following the small trial, a field trial was conducted. Based
on the field trial, the Higher Order Thinking Skills assessment instrument on the
environment topic for senior high school students in Surakarta has high validity and
reliability, with interpretations ranging from fair to very high. The items have difficulty
levels of 0.68% difficult, 89.65% medium, and 9.6% easy (multiple choice), and 60%
difficult and 40% moderate (essay); their discriminating power is 6.89% poor, 86.20%
fair, and 6.89% good (multiple choice), and 40% fair, 51.11% good, and 8.88% very good
(essay). The results show that the Higher Order Thinking Skills assessment instrument on
the environment topic is able to measure students' higher order thinking skills.

Keywords: Higher Order Thinking Skills, assessment, environment topic

INTRODUCTION

Assessment or evaluation is a general term that covers the entire procedure used to
obtain information about student learning outcomes (observation, rating, testing with
paper and pencil) and to make judgments about the learning process (Gronlund & Linn,
1995). In education, assessment is defined as a procedure used to obtain information to
measure students' level of knowledge and skills, the results of which are used for
evaluation purposes (Reynolds, Livingston, & Willson, 2010).
Higher Order Thinking Skills are thinking skills that require not only remembering
but also other, higher skills. Indicators for measuring Higher Order Thinking Skills
include analysing (C4), evaluating (C5), and creating (C6) (Anderson & Krathwohl,
2001). Higher Order Thinking Skills are defined as the thinking that occurs when
someone takes new information and information stored in memory, then interrelates
that information and extends it to achieve a goal or find the answers needed (Lewis &
Smith, 1993).
This is in line with the characteristics of 21st-century skills published by the
Partnership for 21st Century Skills, which identifies that students in the 21st century
must develop the competitive skills needed in this century, focusing on Higher Order
Thinking Skills such as critical thinking, problem solving, communication skills, ICT
(information and communication technology) literacy, information literacy, and media
literacy (Basuki & Haryanto, 2012).
Observations of randomly chosen senior high schools in Surakarta, covering tests and
examinations such as the National Exam, final-term exams, mid-term exams, school
exams, and daily tests, as well as the textbooks that teachers and students use on
ecosystem and environmental material, show that the questions are still in the low
cognitive domain (lower order thinking skills). The low percentage of Higher Order
Thinking Skills questions is an indicator of students' low cognitive level in school.
Assessments that measure Higher Order Thinking Skills can use subjective and
objective test forms. A subjective test is an essay test, a kind of test that requires a
written answer in the test-taker's own words; in essay tests, students are required to
think about and apply what they know to the questions that must be answered. An
objective test is a test consisting of true-false, multiple-choice, completion, and
matching items (Suwandi, 2009).
To state that a learning result is good or bad, successful or failed, the data obtained
must be truly reliable and accurate so that the decision taken is not wrong. If the data
are wrong, the assessment results will be wrong and, consequently, the decision will
be wrong. Therefore, a good test instrument is needed so that accurate data are
obtained (Subali, 2010).
Based on this, the problem addressed in this research is the characteristics of a Higher
Order Thinking Skills assessment on environmental material to be tested on grade XI
senior high school students. The purpose of this research is to determine those
characteristics.

LITERATURE REVIEW

According to Lewis & Smith (1993), Higher Order Thinking Skills are thinking
skills that require not only the ability to remember but also other, higher abilities:
they occur when someone takes new information and information stored in memory,
then interrelates and conveys that information to achieve the goals or answers
needed.
King, Goodson, and Rohani (2010) say that students' high-level thinking skills can
be developed by giving them unusual and unpredictable problems, such as questions
or dilemmas; these skills are successfully applied when students manage to explain,
decide, demonstrate, and produce solutions in the context of their knowledge and
experience.
Heong et al. (2011) state that higher-order thinking is the broader use of the mind
to meet new challenges: it requires someone to apply new information or prior
knowledge and to manipulate that information to reach possible answers in new
situations. Wardana (2010) suggests that high-level thinking skills are thought
processes involving mental activities in an effort to explore complex, reflective, and
creative experiences, carried out consciously to achieve a goal, namely acquiring
knowledge at the levels of analytical thinking, synthesis, and evaluation.

METHODS

The instrument was prepared following the steps of Borg & Gall's (1983) development
research. The data were analysed both qualitatively and quantitatively. The qualitative
analysis was a review to establish the content validity of the test instrument, namely
the alignment between the test questions and the previously prepared indicators. The
quantitative analysis was carried out with the Quest program, covering reliability, level
of difficulty of the test items, distinguishing points, and the efficiency of the distractors.

Reliability
Reliability is often called the degree of consistency (constancy). When a measuring
instrument has high reliability, repeated assessments with that instrument will produce
the same, or almost the same, information (Purwanto, 2010). The reliability of the test
instruments was reviewed from the results of the Quest program analysis.
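The paper takes its reliability figures from the Quest program's output; as an illustration only, internal consistency for dichotomously scored (0/1) items can be sketched with the KR-20 formula. The function name and toy data below are assumptions for the example, not from the study.

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item scores.

    scores: 2-D array, rows = students, columns = items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical data: 5 students x 4 items
answers = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(round(kr20(answers), 2))  # -> 0.91
```

A coefficient near 1.00 corresponds to the "very high" interpretation used later in Table 4. Note that Quest itself estimates reliability from a Rasch model, so KR-20 here is only a conceptual stand-in.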

Level of Difficulty
The difficulty level of a test item (denoted by P) is the proportion of all students
who answer the item correctly. The number indicating how difficult or easy an item is
is called the difficulty index (p), which ranges between 0.00 and 1.00. An item with a
difficulty index of 0.00 is too difficult, whereas an index of 1.00 shows that the item is
too easy. The level of difficulty of the test instrument was also reviewed from the
results of the Quest program analysis.
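The definition above translates directly into code. A minimal sketch follows; the 0.30/0.70 cut-offs for the difficult/medium/easy labels are a conventional choice assumed for the example, not something stated in this paper.

```python
import numpy as np

def difficulty_index(item_scores):
    """Difficulty index p: the proportion of students answering the item
    correctly (0.00 = nobody correct, 1.00 = everybody correct)."""
    return float(np.mean(item_scores))

def classify_difficulty(p):
    """Three-way interpretation; the 0.30/0.70 cut-offs are a conventional
    assumption, not taken from this paper."""
    if p <= 0.30:
        return "difficult"
    if p <= 0.70:
        return "medium"
    return "easy"

scores = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # hypothetical: 7 of 10 students correct
p = difficulty_index(scores)
print(p, classify_difficulty(p))           # -> 0.7 medium
```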

Distinguishing points
The distinguishing point (discriminating power) is the ability of a question to
distinguish between high- and low-ability students in answering the analysed questions
correctly. The distinguishing points of the test instrument were also reviewed from the
results of the Quest program analysis.
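A common classical estimate of the distinguishing point is D = P_upper − P_lower, the difference in proportion correct between the top- and bottom-scoring groups. The sketch below uses the conventional 27% group size, which is an assumption for illustration; the paper itself takes these values from the Quest output.

```python
import numpy as np

def discrimination_index(item_scores, total_scores, frac=0.27):
    """Discrimination index D = P_upper - P_lower.

    item_scores: 0/1 scores on one item; total_scores: each student's total
    test score. frac: fraction of students in the upper and lower groups
    (27% is the conventional choice, assumed here)."""
    item_scores = np.asarray(item_scores, dtype=float)
    order = np.argsort(total_scores)            # student indices, ascending by total
    n = max(1, int(round(frac * len(order))))
    lower = item_scores[order[:n]]              # lowest-scoring students
    upper = item_scores[order[-n:]]             # highest-scoring students
    return upper.mean() - lower.mean()

# Hypothetical data: the item is answered correctly only by high scorers
item   = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
totals = [38, 35, 12, 30, 15, 10, 33, 14, 29, 11]
print(round(discrimination_index(item, totals), 2))  # -> 1.0
```

A D near 1 means the item separates the groups perfectly; a D near 0 (or negative) signals one of the defective-item causes listed later in the Discussion.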

RESULTS AND DISCUSSIONS

Results
Qualitative analysis was conducted to review the test-items seen from the material,
construction and language aspects so that the validity of the test instrument was
obtained.

Table 1. Results of assessment instrument validation from expert validators on
environmental material

No   Validator                       Average Score (%)   Conversion   Criteria
1    Material Expert                 100                 A            Very Good
2    Evaluation Instruments Expert   80.95               B            Good
3    Senior Teacher                  85                  B            Good
4    Practitioner Teacher            92.73               A            Very Good
Limited Trial (Small Group Trial)

Table 2. The Conclusion of the Small Trial Environmental Test Material Analysis
Results

Category   Item Numbers                                         Total
Accepted   4, 7, 10, 11, 14, 19, 22, 23, 26, 28, 30, 31, 32,    22 (55%)
           33, 34, 35, 36, 37, 38, 39, 40
Revised    1, 2, 3, 5, 6, 8, 9, 12, 13, 15, 16, 17, 18, 20,     18 (45%)
           21, 24, 25, 29
Rejected   -                                                    -
Total                                                           40

Field Trial Results


The field trials were conducted in five schools in Surakarta, namely SMA 2, SMA 3,
SMA 4, SMA 6, and SMA 7 Surakarta, in the 2015/2016 academic year, on the
environmental subject matter previously taught in Grade X.

1) Validity

Table 3. Validity of Test Items in Environmental Materials

                  Conclusion                     Interpretation
School    Valid           Invalid        Very High       High         Fair          Low
SMA 2     32              6              35              3            -             -
SMA 3     35              3              34              1            3             -
SMA 4     36              2              35              3            -             -
SMA 6     38              -              35              2            1             -
SMA 7     38              -              37              1            -             -
Total     179 (94.21%)    11 (5.78%)     176 (92.63%)    10 (5.26%)   4 (2.1%)      -
Average   35.8 (18.84%)   2.2 (1.15%)    35.2 (18.52%)   2 (1.05%)    0.8 (0.42%)   -

2) Reliability
Table 4. Reliability of Test Items on the Environment

School   Test Form   Reliability Value   Note
SMA 2    MC          0.77                High
         Essay       0.68                High
SMA 3    MC          0.44                Fair
         Essay       0.74                High
SMA 4    MC          0.81                Very High
         Essay       0.64                High
SMA 6    MC          0.76                High
         Essay       0.82                Very High
SMA 7    MC          0.81                Very High
         Essay       1.00                Very High

3) Level of Difficulty

Table 5. Difficulty level of Environment materials (Multiple Choice)

Table 6. Difficulty level of Environment materials (Essay)

4) Distinguishing Points

Table 7. Distinguishing Points of Environment Material (Multiple Choice)

Table 8. Distinguishing Points of Environment Material (Essay)

5) The Effectiveness of Distractor

Table 9. The Effectiveness of Environmental Material Field Test-Items

Table 10. Summary of Higher Order Thinking Skills test-items assessment on
environment materials

Discussions
Before the instrument was field-tested, it was analysed qualitatively. A theoretical
(descriptive) review of the instrument was carried out to check its readability and its
content validity, covering the material, construction, and language aspects. In this
study, the descriptive review was carried out by evaluation-instrument experts,
material-expert lecturers, linguists, and practitioner teachers. The review found several
questions that did not meet the criteria and therefore had to be revised.

1. Limited Trial (Small Group Trial)

After the test instrument was analysed descriptively and validated by the expert
validators, it was tried out on class XI students at SMA 2 Karanganyar. The
quantitative analysis in trial I was done with MicroCat ITEMAN version 3.00, which
automatically analyses the level of difficulty, distinguishing points, reliability, and
some other statistics. Based on Table 2, in the small-group trial of the environmental
material, 22 items (55%) were accepted and 18 items (45%) were revised, out of 40
test items in total.
Small-group trials are a stage of limited trials. Suparman (2012) notes that limited
trials serve to find out how easily students understand the material in the assessment
instrument under development, which parts are difficult to understand and why, and
which test items are not relevant to the material presented. The data obtained in the
small-group trial (assessments and student responses) were compiled and analysed to
revise the product.

2. Field Trial Results

The results of field trials were conducted on five schools in Surakarta, namely
SMA 2, SMA 3, SMA 4, SMA 6, and SMA 7 Surakarta with Environment as subject
matter.
1) Validity

Validity is the most important requirement of an assessment tool. An assessment
technique can be said to have high validity if it measures what it is actually intended to
measure (Sukiman, 2012). A test item can be said to be valid if it contributes strongly
to the total score (Arikunto, 2007); an item has high validity if its score correlates with
the total score. The validity of the test items was tested using the Quest program.
Table 3 shows the results of the item validity test: 179 test items (94.21%) across
the five schools are valid, averaging 35.8 valid items (18.84%) per school, while 11
items (5.78%) are invalid (average 1.15%). In addition, 92.63% of the items fall into
the very high interpretation category (average 18.52%), about 5.26% into the high
category (average 1.05%), and 2.1% into the fair category (average 0.42%).
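The item-total correlation mentioned above can be illustrated with a corrected point-biserial coefficient. The function name and answer matrix below are hypothetical, and Quest's own validity statistics are Rasch-model fit measures rather than this simple correlation.

```python
import numpy as np

def item_total_correlation(scores, item):
    """Point-biserial correlation between one item (0/1) and the total score
    on the remaining items (corrected item-total correlation, so the item
    does not correlate with itself)."""
    scores = np.asarray(scores, dtype=float)
    rest = scores.sum(axis=1) - scores[:, item]   # total score without this item
    return np.corrcoef(scores[:, item], rest)[0, 1]

# Hypothetical data: 5 students x 4 items
answers = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
for i in range(answers.shape[1]):
    print(i, round(item_total_correlation(answers, i), 2))
```

Items whose correlation with the rest of the test is high would be reported as valid with a high or very high interpretation; low or negative correlations flag candidates for revision.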

2) Reliability

Reliability is the constancy of an assessment tool: a test is said to be reliable if its
results are trustworthy, consistent, and stable. Reliability testing was done using the
Quest program. Reliability is the determination or constancy of an evaluation tool
(Sudjana, 2001), while Singarimbun and Effendi (2008) state that reliability is an
index showing how far a measuring device can be trusted and relied upon.
Table 4 shows the reliability of the test items on the environmental material in each
school: at SMA 2 the multiple-choice form scored 0.77 (high) and the essay form 0.68
(high); at SMA 3 the multiple-choice form scored 0.44 (fair) and the essay form 0.74
(high); at SMA 4 the multiple-choice form scored 0.81 (very high) and the essay form
0.64 (high); at SMA 6 the multiple-choice form scored 0.76 (high) and the essay form
0.82 (very high); and at SMA 7 the multiple-choice form scored 0.81 (very high) and
the essay form 1.00 (very high).

3) Level of Difficulty

The level of difficulty is the chance of answering an item correctly at a certain
ability level, usually expressed as an index ranging from 0.00 to 1.00 (Aiken, 1994).
The greater the difficulty index obtained from the calculation, the easier the item: an
item with TK = 0.00 means that no student answered it correctly, while TK = 1.00
means that every student answered it correctly. The difficulty index is calculated for
each item number; in principle, the average score obtained by students on an item is
that item's level of difficulty (Nitko, 1996).
Table 5 shows the difficulty level of the environmental material in multiple-choice
form in each school: 0.68% of the items are difficult (average 0.13%), 89.65% are
medium (average 17.93%), and 9.6% are easy (average 1.93%). Table 6 shows the
difficulty level of the environmental material in essay form: 60% of the items are
difficult (average 12%) and 40% are moderate (average 8%); no school shows items
at the easy level.

4) Distinguishing Points

The distinguishing point is the ability of an assessment instrument to distinguish
students in the clever (upper) group from students in the lesser (lower) group, that is,
to discriminate between high-ability and low-ability test-takers. A question with a
high D value is able to distinguish students who master the subject matter from
students who do not (Sukiman, 2012).
Table 7 shows the distinguishing points of the environmental material in multiple-
choice form: 6.89% of the items are rated poor (average 1.37%), 86.20% fair (average
17.24%), and 6.89% good (average 1.37%); no item is rated very good.
Table 8 shows the distinguishing points of the environmental material in essay
form: 40% of the items are rated fair (average 8%), 51.11% good (average 10.22%),
and 8.88% very good (average 1.77%); no item is rated poor in any school.
If the distinguishing point of a test item cannot separate the two ability groups,
the possible reasons are as follows:
1) The answer key of the test item is incorrect.
2) The item has two or more correct answers.
3) The measured competence is unclear.
4) The distractors do not work.
5) The material asked is too difficult, so many students guess.
6) Most students who understand the material think that there is wrong information
in the test item. (Ministry of National Education, 2008)

5) The effectiveness of the Distractor

In multiple-choice questions there are alternative answers (options) called
distractors. In a good test item, the distractors are chosen evenly by the students who
answer wrongly; in a poor item, they are chosen unevenly. A distractor is considered
good if the number of students who choose it is the same as, or close to, the ideal
number. According to Surapranata (2005), a distractor functions well if it is selected
by at least 5% of the test participants. Table 9 shows the effectiveness of the
distractors in the environmental material in each school: 61.36% function well
(average 12.26%) and 38.61% function poorly (average 8.4%). Thus, the items whose
distractors function effectively can be used.
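Surapranata's 5% rule lends itself to a direct check. A minimal sketch, assuming five options A-E and hypothetical response data:

```python
from collections import Counter

def distractor_effectiveness(choices, key, threshold=0.05):
    """Flag each wrong option (distractor) as functioning if it was chosen
    by at least `threshold` (5%, per Surapranata, 2005) of the test takers.

    choices: the option each student picked, e.g. ["A", "C", ...]
    key: the correct option for this item.
    """
    counts = Counter(choices)
    n = len(choices)
    return {
        option: (counts.get(option, 0) / n) >= threshold
        for option in ("A", "B", "C", "D", "E")   # assumed option set
        if option != key
    }

# Hypothetical item with key "A": 20 students, E never chosen
picked = ["A"] * 12 + ["B"] * 4 + ["C"] * 2 + ["D"] * 2
print(distractor_effectiveness(picked, key="A"))
# -> {'B': True, 'C': True, 'D': True, 'E': False}
```

Option E here attracts no one, so it would count toward the "functioning poorly" percentage reported in Table 9 and be a candidate for replacement.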
According to Aprianto (2008), several factors influence whether or not a distractor
functions: the item is too easy, the stem gives clues to the answer key, or the students
already know the material being asked too well.
In Table 10, several questions, namely items 2, 4, 5, 11, 13, 14, 16, and 20, are
invalid, meaning that they do not measure students' abilities. The valid items have
interpretations ranging from very high at most to low at least, a moderate level of
difficulty, and distinguishing points of at best fair.
The field test shows that the developed assessment instruments fall into the good
category. The assessment instruments on environmental material have validity and
reliability with interpretations averaging very high, and at least fair. The environmental
material has a difficulty level with proportions of 0.68% difficult, 89.65% moderate,
and 9.6% easy (multiple choice), and 60% difficult and 40% moderate (essay); its
discriminating power has proportions of 6.89% poor, 86.20% fair, and 6.89% good
(multiple choice), and 40% fair, 51.11% good, and 8.88% very good (essay).
The Higher Order Thinking Skills aspects in the developed instrument consist of 3
indicators following Anderson & Krathwohl (2001), namely that students are able to
analyse, to evaluate, and to create, across the 4 cognitive-dimension aspects of factual,
conceptual, procedural, and metacognitive knowledge. The test items compiled from
these indicators mostly present statements containing cases or problems from the
environment and everyday life, to stimulate students to think critically in solving the
problem.
The characteristics of this test instrument correlate with those of the HOTS
instrument developed by Emi Rofiah (2013), but that instrument uses different HOTS
indicators. Its aspect of critical-thinking ability consists of 6 indicators: students are
able to ask questions, revise wrong concepts, plan strategies, evaluate decisions,
criticise statements, and evaluate decisions. The test items compiled from these
indicators mostly present statements containing problems, to stimulate students to
think critically in solving them. Its aspect of creative-thinking ability consists of 12
indicators: students are able to formulate equations, build relationships between
concepts, propose new ideas, compile concepts in the form of schemes, describe
ideas, dare to experiment, organise concepts, produce something new, design
experiments, modify concepts with new things, combine concepts coherently, and
change equations. The items testing creative-thinking skills largely ask students to
solve questions presented with pictures and problems that can draw out students'
creativity. Its aspect of problem-solving ability consists of 11 indicators: students are
able to identify problems, state a causal relationship, apply concepts appropriate to
the problem, show curiosity, create charts or pictures to solve a problem, explain
several possible solutions, think openly, make decisions, work carefully, dare to
speculate, and reflect on the effectiveness of the problem-solving process.
Compared with the test instruments developed by PISA, this instrument is very
relevant and similar: the multiple-choice questions range from choosing one simple
alternative answer, such as answering a yes/no question, to rather complex
alternatives, such as responding to several presented choices. In the questions
requiring a written answer, students are asked to give a short answer in the form of
words or phrases, a fairly long answer in the form of a description limited to a number
of sentences, or an open-ended description. PISA measures real-life skills related to
reading, mathematics, and science, focusing on everyday life and on fields where
science is used, such as health, earth and environment, and technology, so that
students are trained to use logic and analytical abilities.
Based on the analysis above, it can be concluded that the Higher Order Thinking
Skills assessment instrument on environmental material is able to measure students'
higher-level thinking skills.

CONCLUSIONS

The conclusions that can be drawn from the research on developing an instrument
for assessing Higher Order Thinking Skills on ecosystem and environment material are
as follows:
1. The field test shows that the HOTS assessment has validity and reliability with very
high average interpretations, and at least fair, on environment material.
2. The questions on environment material have a difficulty level with proportions of
0.68% difficult, 89.65% moderate, and 9.6% easy (multiple choice), while 60% are
difficult and 40% moderate (essay); their distinguishing points have proportions of
6.89% poor, 86.20% fair, and 6.89% good (multiple choice), and 40% fair, 51.11%
good, and 8.88% very good (essay).

The suggestions for further researchers developing instruments for assessing
Higher Order Thinking Skills are:
1. Subsequent research is expected to test the assessments prepared in this research at
multiple locations.
2. The assessment needs to be developed in other forms, such as two-tier, three-tier,
four-tier, and other kinds of multiple choice.
Location Overview

1. Surakarta Public High School 2


Surakarta Public High School 2 is one of the state high schools in Central Java
Province, Indonesia. As with high schools in general in Indonesia, education at
SMAN 2 Surakarta is completed within three years, from Class X to Class XII.

2. Surakarta Public High School 4


Surakarta Public High School 4, known as Smaracatur, is a high school located
in Surakarta City, Central Java, on Jalan Adi Sucipto No. 1, Manahan, Surakarta.

3. Surakarta Public High School 5


Surakarta Public High School 5 is one of the state high schools in Central Java
Province, Indonesia. As with high schools in general in Indonesia, education at
Senior High School 5 Surakarta is completed within three years, from Class X to
Class XII.

4. Surakarta Public High School 6


Surakarta Public High School 6 is one of the public high schools in Surakarta,
located at Jl. Mr. Sartono No. 30 Surakarta, Central Java. As with high schools in
general in Indonesia, education at Surakarta Public High School 6 is completed
within three years, from Class X to Class XII.

5. Surakarta Public High School 7


Surakarta Public High School 7 is one of the public high schools in Surakarta,
located at Jl. Mr. Muh. Yamin 79 Kalasan, Tipes, Serengan District, Surakarta,
Central Java. As with high schools in general in Indonesia, education at Surakarta
Public High School 7 is completed within three years, from Class X to Class XII.
This school, also known as Smoephy, is one of the favourite schools in Surakarta
and its surroundings.

REFERENCES

Aiken, Lewis R. (1994). Psychological Testing and Assessment (8th ed.).

Anderson, L. W., & Krathwohl, D. R. (2001). A Taxonomy for Learning, Teaching, and
Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York:
Addison Wesley Longman.

Aprianto. (2008). Implementasi Sistem Manajemen Mutu ISO 9001:2008 Berdasarkan
pada 8 (Delapan) Prinsip Manajemen Mutu pada SMK Negeri 13 Bandung.
Bandung: Universitas Teknologi Bandung.

Arikunto, Suharsimi. (2007). Dasar-Dasar Evaluasi Pendidikan. Jakarta : Bumi Aksara

Basuki, Ismet, & Hariyanto. (2012). Assesmen Penelitian. Bandung: PT Remaja


Rosdakarya Offset
Borg, W. R., & Gall, M. D. (1983). Educational Research: An Introduction (4th ed.).
New York and London: Longman Inc.
Depdiknas. (2008). Panduan Analisis Butir Soal. Jakarta: Dirjen Manajemen

Gronlund, N. E., & Linn, R. L. (1995). Measurement and Assessment in Teaching.
Upper Saddle River, NJ: Prentice-Hall, Pearson Education.

Lewis, A., & Smith, D. (1993). Defining Higher Order Thinking. Theory Into
Practice, 32(3), 131-137.
Nitko, Anthony J. (1996). Educational Assessment of Students, Second
Edition.Ohio: Merrill an imprint of Prentice Hall Englewood Cliffs.

Partnership for 21st Century Skills. (2009). 21st Century Skills Map.
http://science.nsta.org/ps/Final21stCenturyMapScience.pdf. Accessed 1 March
2015.

Purwanto. (2010). Evaluasi Hasil Belajar. Yogyakarta: Pustaka Pelajar

Reynolds, C. R., Livingston, R. B., & Willson, V. (2010). Measurement and Assessment
in Education. Upper Saddle River, NJ: Pearson Education, Inc.

Rofiah, E, Nonoh S. A, dan Elvin Y.E. 2013. Penyusunan Instrumen Tes


Kemampuan Berfikir Tingkat Tinggi Fisika Pada Siswa SMP. Jurnal Pendidikan
Fisika. ISSN: 2338-0691. Surakarta: FKIP Fisika UNS.

Subali, Bambang. (2010). Penilaian, Evaluasi dan Remediasi Pembelajaran Biologi.


UNY: Yogyakarta

Suwandi, Sarwiji. (2009). Model Asesmen Dalam Pembelajaran. Mata Padi Presindo:
Surakarta

Sudjana, Nana. (2010). Penilaian Hasil Proses Belajar Mengajar. Bandung:


Remaja Rosdakarya

Sukiman. 2012. Pengembangan Media Pembelajaran. Yogyakarta: Pendajogja

Singarimbun, Masri, & Sofian Effendi. (2008). Metode Penelitian Survei. Jakarta:
LP3ES.

Suparman,Atwi. (2012). Desain Instruksional Modern. Bandung: Erlangga

Surapranata, S., (2005). Analisis, Validitas, Reliabilitas dan Interpretasi HasilTes.


Implementasi kurikulum 2004. Bandung: Remaja Rosdakarya Offset