
VALIDITY

Validity is the extent to which a test measures what it is intended to measure and is useful for the purpose for which it was designed.
Without validity, there can be no confidence in the inferences and conclusions made from the test results.

Kinds of Validity

Content Validity
Is concerned with the extent to which the test is representative of a defined body of content consisting of topics and processes.
Is established through logical analysis of the correspondence between the test items and the content of concern.

Depends on the sampling of items: if the test items adequately represent the domain of possible items, the test has adequate content validity.
It is established not by statistical analysis but by inspection of the items.
It is sometimes done by having a panel of experts review the items on the test and rate them in terms of how closely they match the objective or domain specification.

Face validity is the appearance of validity to test users, examiners, and examinees; it is not a technical form of validity, but it is important for the social acceptability of the test.

Criterion Validity
Involves the relationship or correlation between the test scores and scores on some measure representing the criterion.
The criterion may be another test.
A correlation coefficient can be computed between the scores on the test being validated and the scores on the criterion; the correlation coefficient used is called the validity coefficient.
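As a sketch of the computation just described, the validity coefficient is simply the Pearson correlation between the two sets of scores. All scores below are invented for illustration:

```python
import numpy as np

# Hypothetical scores on the test being validated and on the
# criterion measure (values invented for this sketch).
test_scores = np.array([12, 15, 11, 18, 14, 20, 9, 16])
criterion_scores = np.array([48, 55, 45, 62, 50, 68, 40, 58])

# The validity coefficient is the Pearson correlation between them.
validity_coefficient = np.corrcoef(test_scores, criterion_scores)[0, 1]
print(f"validity coefficient r = {validity_coefficient:.2f}")
```

A coefficient near 1.0 would indicate a strong relationship between test and criterion; a coefficient near 0 would indicate the test tells us little about the criterion.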

Kinds of Criterion Validity
1. Concurrent validity applies if the scores on the criterion are obtained at the same time as the test scores.
Applies if it is desirable to substitute a new test for an existing test.
Process:
(a) administer the two measures with a short intervening time period;
(b) correlate the scores by computing the correlation coefficient.

2. Predictive validity is involved if we are concerned about a test score's relationship with some criterion measured in the future.
Is particularly relevant for entrance examinations and employment tests.
Common function: to determine who is likely to succeed in a future endeavour.
When tests are used for the purpose of prediction, we use the regression equation to compute the prediction.
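A minimal sketch of using a regression equation for prediction, with invented entrance-exam scores and a later criterion (here, first-year GPA); the numbers and the GPA criterion are assumptions for illustration only:

```python
import numpy as np

# Invented data: entrance-exam scores and the later criterion
# (first-year GPA) for eight past examinees.
exam = np.array([60, 72, 55, 80, 68, 75, 50, 85], dtype=float)
gpa = np.array([2.4, 2.9, 2.2, 3.3, 2.7, 3.0, 2.0, 3.5])

# Fit the regression line: criterion = slope * predictor + intercept.
slope, intercept = np.polyfit(exam, gpa, 1)

# Predict the criterion for a new applicant scoring 70 on the exam.
predicted_gpa = slope * 70 + intercept
print(f"predicted GPA for exam score 70: {predicted_gpa:.2f}")
```

The usefulness of such predictions rests on the strength of the test-criterion relationship, which is why the predictive validity evidence is gathered first.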

Construct Validity
A construct is an informed, scientific idea developed or hypothesized to describe or explain behavior.
e.g., intelligence, anxiety
Constructs are unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance.
A test designed to measure a construct must estimate the existence of an inferred, underlying characteristic based on a limited sample of behavior.
Is concerned with the psychological constructs that are reflected in the scores of a measure or test.

Extent to which test performance can be interpreted in terms of one or more psychological constructs (Gronlund).
Implies that construct validation relies on psychological theory that indicates the constructs underlying a set of tests or measures.
Establishing construct validity involves both logical analysis and empirical data.

Evidence of Construct Validity
1. The test is homogeneous, measuring a single construct.
2. Test scores increase or decrease as a function of age, the passage of time, or an experimental manipulation, as theoretically predicted.
3. Test scores obtained after some event or the mere passage of time (that is, posttest scores) differ from pretest scores as theoretically predicted.
4. Test scores obtained by people from distinct groups vary as predicted by the theory.
5. Test scores correlate with scores on other tests in accordance with what would be predicted from a theory that covers the manifestation of the construct in question.

Approaches to Construct Validity
1. Test homogeneity: if a test measures a single construct, then its component items (or subtests) are likely to be homogeneous (internally consistent).
- Homogeneity is built into the test during the development process.
- The aim of the test developer is to select items that form a homogeneous scale.

- The most common method is to correlate each potential item with the total score and select the items that show high correlations with the total score.
- Homogeneity is an important first step in certifying the construct validity of a new test.
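The item-total correlation method just described can be sketched as follows; the 0/1 response matrix is invented for illustration (rows are examinees, columns are items):

```python
import numpy as np

# Invented right/wrong (1/0) item responses:
# rows = examinees, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
])

# Each examinee's total score across all items.
total = responses.sum(axis=1)

# Correlate every item with the total score; items with high
# item-total correlations are retained to form a homogeneous scale.
item_total = [np.corrcoef(responses[:, i], total)[0, 1]
              for i in range(responses.shape[1])]
for i, r in enumerate(item_total, start=1):
    print(f"item {i}: item-total r = {r:.2f}")
```

Items with low item-total correlations would be revised or discarded, since they do not appear to measure the same construct as the rest of the scale.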

2. Appropriate developmental changes: many constructs can be assumed to show regular age-graded changes from early childhood into mature adulthood and perhaps beyond. The aim is to determine whether observed score changes are consistent with the theory of the construct.

3. Theory-consistent group differences: one way to bolster the validity of a new instrument is to show that, on average, persons with different backgrounds and characteristics obtain theory-consistent scores on the test.
- Persons thought to be high on the construct measured by the test should obtain high scores, whereas persons with low amounts of the construct should obtain low scores.

4. Theory-consistent intervention effects: another approach to construct validation is to show that test scores change in the appropriate direction and amount in reaction to planned or unplanned interventions.

5. Convergent and discriminant validation
Convergent validation is demonstrated when a test correlates highly with other variables or tests with which it shares an overlap of constructs.
Discriminant validation is demonstrated when a test does not correlate with variables from which it should differ.

Campbell and Fiske (1959) proposed a systematic experimental design for simultaneously confirming the convergent and discriminant validity of a psychological test.
- Their design is called the multitrait-multimethod matrix, and it calls for the assessment of two or more traits by two or more methods.
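A toy simulation of the multitrait-multimethod idea: two invented, independent traits are each "measured" by two hypothetical methods (self-report and observer rating). The traits, methods, and noise levels are all assumptions for this sketch, not Campbell and Fiske's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated examinees

# Two invented, independent underlying traits.
anxiety = rng.normal(size=n)
extraversion = rng.normal(size=n)

# Each trait measured by two methods, each adding its own noise.
anx_self = anxiety + 0.4 * rng.normal(size=n)       # self-report
anx_obs = anxiety + 0.4 * rng.normal(size=n)        # observer rating
ext_self = extraversion + 0.4 * rng.normal(size=n)  # self-report

# Same trait, different methods -> should correlate highly.
convergent = np.corrcoef(anx_self, anx_obs)[0, 1]
# Different traits -> should correlate weakly.
discriminant = np.corrcoef(anx_self, ext_self)[0, 1]

print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

In a full multitrait-multimethod matrix, every such pairing is tabulated, and validity is supported when the convergent correlations clearly exceed the discriminant ones.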

6. Factor analysis is a specialized technique that is particularly useful for investigating construct validity.
- Its purpose is to identify the minimum number of determiners (factors) required to account for the intercorrelations among a battery of tests.
e.g., a battery of 24 tests may represent only 4 underlying variables, called factors.

A factor is nothing more than a weighted linear sum of the variables; that is, each factor is a precise statistical combination of the tests used in the analysis.
- A factor is produced by adding in carefully determined portions of some tests and perhaps subtracting out fractions of other tests.
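As a hedged sketch of this idea, we can simulate a miniature battery of six invented tests driven by two underlying factors and recover the factor structure. An eigen-decomposition of the correlation matrix is used here as a simple stand-in for a full factor analysis; it illustrates the same principle that each factor is a weighted linear sum of the test variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # simulated examinees

# Two invented underlying factors (e.g. verbal and spatial ability).
verbal = rng.normal(size=n)
spatial = rng.normal(size=n)

# A miniature battery: six tests, three loading on each factor,
# each with its own measurement noise.
battery = np.column_stack([
    verbal + 0.5 * rng.normal(size=n),
    verbal + 0.5 * rng.normal(size=n),
    verbal + 0.5 * rng.normal(size=n),
    spatial + 0.5 * rng.normal(size=n),
    spatial + 0.5 * rng.normal(size=n),
    spatial + 0.5 * rng.normal(size=n),
])

# Eigen-decompose the correlation matrix: large eigenvalues mark the
# factors needed to account for the intercorrelations among the tests.
corr = np.corrcoef(battery, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
print("eigenvalues:", np.round(eigenvalues, 2))

# Each factor score is a weighted linear sum of the test variables.
first_factor_scores = battery @ eigenvectors[:, order[0]]
```

Only two eigenvalues stand out, matching the two factors built into the simulation; the remaining four account for little beyond noise.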
