
PSYCHOMETRICS ASSIGNMENT-2

CONCURRENT VALIDITY

Submitted To: Dr. Anurakti Mathur, AIPS, Amity University

Submitted By: Chunauti Duggal, MA Counselling Psychology, 4th Sem, A1503313026

CONCURRENT VALIDITY
Psychological assessment is an important part of both experimental research and clinical
treatment. One of the greatest concerns when creating a psychological test is whether or not it
actually measures what we think it is measuring. For example, a test might be designed to
measure a stable personality trait, but instead measure transitory emotions generated by
situational or environmental conditions. A valid test ensures that the results are an accurate
reflection of the dimension undergoing assessment.
Validity is the extent to which a test measures what it claims to measure. It is vital for a test to
be valid in order for the results to be accurately applied and interpreted.
Validity isn't determined by a single statistic, but by a body of research that demonstrates the
relationship between the test and the behavior it is intended to measure. There are three types
of validity:
1. Content Validity
When a test has content validity, the items on the test represent the entire range of possible
items the test should cover. Individual test questions may be drawn from a large pool of items
that cover a broad range of topics.
In some instances, where a test measures a trait that is difficult to define, an expert judge may
rate each item's relevance. Because each judge bases the rating on opinion, two
independent judges rate the test separately. Items that are rated as strongly relevant by both
judges will be included in the final test.
2. Criterion-related Validity
A test is said to have criterion-related validity when the test has demonstrated its
effectiveness in predicting criterion or indicators of a construct. There are two different types
of criterion validity:

Concurrent Validity occurs when the criterion measures are obtained at the same
time as the test scores. This indicates the extent to which the test scores accurately estimate
an individual's current state with regard to the criterion. For example, a test that
measures levels of depression would be said to have concurrent validity if it
measured the current levels of depression experienced by the test taker.

Predictive Validity occurs when the criterion measures are obtained at a time after
the test. Examples of tests with predictive validity are career and aptitude tests, which are
helpful in determining who is likely to succeed or fail in certain subjects or occupations.
3. Construct Validity
A test has construct validity if it demonstrates an association between the test scores and the
prediction of a theoretical trait. Intelligence tests are one example of measurement
instruments that should have construct validity.
Concurrent validity, then, is a measure of how well a particular test correlates with a
previously validated measure. It is commonly used in social science, psychology and
education.
The tests are for the same, or very closely related, constructs and allow a researcher to
validate new methods against a tried and tested stalwart.
IQ tests, emotional intelligence measures, and most school grading systems are examples of
established tests that are regarded as having high validity. One common way of looking at
concurrent validity is as measuring a new test or procedure against a gold-standard
benchmark.

Importance of Timing
As the name suggests, concurrent validity relies upon tests that took place at the same time.
Ideally, this means testing the subjects at exactly the same moment, but some approximation
is acceptable.
For example, testing a group of students for intelligence, with an IQ test, and then performing
the new intelligence test a couple of days later would be perfectly acceptable.
If the test takes place a considerable amount of time after the initial test, then it is regarded as
predictive validity. Both concurrent and predictive validity are subdivisions of criterion
validity and the timescale is the only real difference.

Example
Researchers give a group of students a new test, designed to measure mathematical aptitude.
They then compare this with the test scores already held by the school, a recognized and
reliable judge of mathematical ability. Cross-referencing the scores for each student allows
the researchers to check if there is a correlation, evaluate the accuracy of their test, and decide
whether it measures what it is supposed to. The key element is that the two methods were
compared at about the same time.

If the researchers had measured the mathematical aptitude, implemented a new educational
program, and then retested the students after six months, this would be predictive validity.
Imagine that you are a psychologist developing a new psychological test designed to measure
depression called the 'Rice Depression Scale'. Once your test is fully developed, you decide
that you want to make sure that it is valid; in other words, you want to make sure that the test
accurately measures what it is supposed to measure. One way to do this is to look for other
tests that have already been found to be valid measures of your construct, administer both
tests, and compare the results of the tests to each other.
Since the construct, or psychological concept, that you want to measure is depression, you
search for psychological tests that measure depression. In your search, you come across the
Beck Depression Inventory, which researchers have determined through several studies is a
valid measure of depression. You recruit a sample of individuals to take both the Rice
Depression Scale and the Beck Depression Inventory at the same time. You analyze the
results and find that the scores on the Rice Depression Scale have a high positive
correlation with the scores on the Beck Depression Inventory. That is, the higher the individual
scores on the Rice Depression Scale, the higher their score on the Beck Depression Inventory.
Likewise, the lower the score on the Rice Depression Scale, the lower the score on the Beck
Depression Inventory. You conclude that the scores on the Rice Depression Scale correspond
to the scores on the Beck Depression Inventory. You have just established concurrent validity.
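The check described above amounts to computing a correlation coefficient between the two sets of scores. A minimal sketch of that computation, using NumPy's Pearson correlation; all of the scores below are invented for illustration, and the "Rice Depression Scale" is the hypothetical test from the example:

```python
import numpy as np

# Hypothetical scores for 8 participants who took both tests at the
# same sitting. Rice = the new test; Beck = the established benchmark.
# All numbers are invented for illustration only.
rice_scores = np.array([12, 25, 7, 30, 18, 22, 9, 27])
beck_scores = np.array([10, 28, 8, 33, 20, 24, 11, 29])

# The concurrent-validity coefficient is the Pearson correlation
# between the new test's scores and the benchmark's scores.
r = np.corrcoef(rice_scores, beck_scores)[0, 1]
print(f"concurrent validity coefficient r = {r:.2f}")
```

A coefficient close to +1 indicates that participants are ranked almost identically by both instruments, which is what "establishing concurrent validity" means in practice.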

Advantages
It provides an expedient means by which a validation coefficient can be obtained. There is
no delay in obtaining data.
It is a highly appropriate means of validating instruments used for assessment of current
attributes (e.g., instruments used in management development to diagnose current strengths
and limitations).

Disadvantages
If the test is intended for use in selection, concurrent validity is theoretically less appropriate
than predictive validity, because a concurrent validation result cannot show that the test is
related to future performance. Concurrent validity is therefore less suited to instruments used
to assess potential rather than current attributes.
Concurrent validity is based on a sample of job incumbents. In completing the tests
incumbents may differ from job applicants in terms of their motivations. In particular,
incumbents completing self-report questionnaires will tend to be less motivated to give
socially desirable responses than job applicants. Validity based on job incumbents cannot,
therefore, be directly generalised to the selection situation.

Limitations

Concurrent validity is regarded as a fairly weak type of validity and is rarely accepted on its
own. The problem is that the benchmark test may have some inaccuracies and, if the new test
shows a correlation, it merely shows that the new test contains the same problems.
For example, IQ tests are often criticized, because they are often used beyond the scope of the
original intention and are not the strongest indicator of all-round intelligence. Any new
intelligence test that showed strong concurrent validity with IQ tests would, presumably,
contain the same inherent weaknesses.
Despite this weakness, concurrent validity is a stalwart of education and employment testing,
where it can be a good guide for new testing procedures. Ideally, researchers initially test
concurrent validity and then follow up with a predictive validity based experiment, to give a
strong foundation to their findings.
