
Assessing Interactive Oral Skills in EFL Contexts

Jason Beale
4.2 Bias for best
Testing language skills requires getting a representative sample of optimum performance. To 'bias for best' means to elicit a candidate's best performance on a test. A poorly designed or delivered test will not provide consistent results. This may be because confusing instructions favour some students over others, or perhaps because role play situations require specific knowledge or vocabulary that only some of the candidates possess. Also, generally distracting or stressful conditions of assessment will clearly disadvantage some students over others in a way that is unrelated to language ability.
4.3 Marking
Applying descriptive assessment criteria to a candidate's oral performance requires making subjective (or impressionistic) judgements. This is in contrast to objective marking, in which a quantitative marking scheme is mechanically applied to structured tasks, such as multiple choice and sentence completion exercises.

A descriptive scale of oral performance, with clearly defined levels, can be combined with quantitative grades. Subjective judgements matching performance to such descriptors will then generate a quantitative grade score useful for ranking candidates. Analytic rating scales, which describe specific language skills (see 2.5 above), can be graded differently to emphasize the relative importance of different skills. This is called 'weighting' the assessment criteria, and needs to be based on a clear understanding of the stages of language development (construct validity) and the purpose of the assessment instrument (systemic validity). A graded analytic scale can then be combined with a global scale, for example as shown by McClean (1995) in her description of a negotiated grading scheme at a Japanese university.

Grading is very much dependent on the purpose of the test and the way this is reflected in the criteria. An achievement test that is criterion referenced will judge candidates individually on their achievement of learning outcomes. Score distribution depends solely on learning success, and it is theoretically possible for all candidates to receive 100%. On the other hand, a test for selection purposes will need to separate candidates, making fine distinctions between their performances. This kind of comparative assessment is called norm referenced, and the scores are ideally distributed on a bell-shaped curve, so that most candidates are placed at the centre of the distribution.
Conclusion
An effective test of interactive oral skills is not a haphazard selection of tasks chosen at random. Instead, each assessment situation presents a set of practical demands that need to be specifically addressed. The principles of validity, reliability, practicality and bias for best provide basic guidelines for evaluating the effectiveness of a test instrument.

A theoretical model of oral skills is also necessary to structure what is fundamentally fleeting and changeable. At the same time it needs to be remembered that human skills are highly dependent on a variety of internal and external factors that are independent of language ability per se. The art of testing involves minimising the influence of such extraneous factors and creating conditions under which all candidates can display their genuine abilities.
Bibliography
Canale, M. and M. Swain. 1980. Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics 1 (1): 1-47.

Clankie, S. 1995. The SPEAK test of oral proficiency: A case study of incoming freshmen. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Kent, H. 1998. The Australian Oxford Mini Dictionary. 2nd ed. Melbourne: Oxford University Press.

McClean, J. 1995. Negotiating a spoken-English scheme with Japanese university students. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Nagata, H. 1995. Testing oral ability: ILR and ACTFL oral proficiency interviews. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Nakamura, Y. 1995. Making speaking tests valid: Practical considerations in a classroom setting. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Turner, J. 1998. Assessing speaking. Annual Review of Applied Linguistics 18: 192-207.

Underhill, N. 1987. Testing Spoken Language: A Handbook of Oral Testing Techniques. Cambridge: Cambridge University Press.

Weir, C. J. 1988. Communicative Language Testing with Special Reference to English as a Foreign Language. Exeter: University of Exeter.

Weir, C. J. 1993. Understanding and Developing Language Tests. New York: Prentice Hall.