
Austral. & New Zealand J. Statist. 41(1), 1999, 67–77


THE ASYMPTOTIC POWER OF JONCKHEERE-TYPE
TESTS FOR ORDERED ALTERNATIVES

HERBERT BÜNING¹ AND WOLFGANG KÖSSLER²

Freie Universität and Humboldt-Universität
Summary
For the c-sample location problem with ordered alternatives, the test proposed by Barlow et al. (1972 p. 184) is an appropriate one under the model of normality. For non-normal data, however, there are rank tests which have higher power than the test of Barlow et al., e.g. the Jonckheere test or so-called Jonckheere-type tests recently introduced and studied by Büning & Kössler (1996). In this paper the asymptotic power of the Jonckheere-type tests is computed by using results of Hájek (1968) which may be considered as extensions of the theorem of Chernoff & Savage (1958). Power studies via Monte Carlo simulation show that the asymptotic power values provide a good approximation to the finite ones even for moderate sample sizes.

Key words: nonparametric tests; non-normality; asymptotic relative efficiency; finite sample power; Jonckheere-type tests.
1. Introduction
Testing the equality of several means against ordered alternatives is an important statistical problem. Parametric tests like the test of Barlow et al. (1972 p. 184) are generally applied under the assumption of normality. For non-normal data the nonparametric test of Jonckheere (1954) is one of the most familiar tests for ordered alternatives. The corresponding test statistic is based on pairwise Mann–Whitney statistics. It has been shown that the Jonckheere test has high power in comparison to the test of Barlow et al. for symmetric and medium- to long-tailed distributions (see Büning & Kössler, 1996). For other shapes of distributions, in Section 2 we consider modifications of the Jonckheere test, using two-sample statistics other than that of Mann–Whitney, e.g. statistics of Gastwirth (1965) and Hogg, Fisher & Randles (1975). The corresponding tests are very efficient for short-tailed and asymmetric distributions respectively (see Büning & Kössler, 1996). In the following, we call these tests Jonckheere-type tests. In Section 3 we use results of Chernoff & Savage (1958) and Hájek (1968) to derive the asymptotic normality of such (properly standardized) Jonckheere-type statistics under alternatives. Section 4 presents the formula for the asymptotic power function of Jonckheere-type tests. Table 2 gives asymptotic relative efficiencies (AREs) of some Jonckheere-type tests, assuming symmetric distributions with short tails up to very long tails as well as asymmetric distributions. In Section 5 we compare the asymptotic power values of
Received August 1997; revised July 1998; accepted July 1998.
*Author to whom correspondence should be addressed.
¹ Institut für Statistik und Ökonometrie, FU Berlin, Boltzmannstr. 20, D-14195 Berlin, Germany. email: buening@wiwiss.fu-berlin.de
² Institut für Informatik, HU Berlin, Unter den Linden 6, D-10999 Berlin, Germany. email: koessler@informatik.hu-berlin.de
Acknowledgments. The authors thank the referee for his useful comments on this paper.
© Australian Statistical Publishing Association Inc. 1999. Published by Blackwell Publishers Ltd, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden MA 02148, USA
selected Jonckheere-type tests with the power values calculated by Monte Carlo simulation. It turns out that the asymptotic power generally provides a good approximation to the finite power even in the case of moderate sample sizes.
2. Jonckheere-type Tests for Ordered Alternatives
We consider the location model in which X_{i1}, ..., X_{in_i} (i = 1, ..., c) are independent random variables with absolutely continuous distribution function F(x − θ_i) (θ_i ∈ ℝ) and density function f. In the following we assume that F is twice (continuously) differentiable on (−∞, ∞) except for at most a countable number of x-values. Here f′ denotes the derivative of f, where it exists, and it is defined to be zero otherwise; f′ is assumed to be bounded. We wish to test
\[
H_0\colon \theta_1 = \cdots = \theta_c \quad\text{versus}\quad H_1\colon \theta_1 \le \cdots \le \theta_c \ \text{with at least one strict inequality.}
\]
For this test problem, we consider so-called Jonckheere-type tests introduced by Büning & Kössler (1996). Here we use slight modifications of these tests, the statistics of which are defined by
\[
JT = \sum_{i=2}^{c} N_i\, T_{(1,\ldots,i-1)i}, \qquad\text{with}\qquad T_{(1,\ldots,i-1)i} = \sum_{k=1}^{N_i} a(N_i, k)\, V_{ik},
\]
where a(N_i, k) ∈ ℝ, N_i = \sum_{t=1}^{i} n_t (1 ≤ i ≤ c), and
\[
V_{ik} = \begin{cases} 1 & \text{if } X_{i(k)} \text{ belongs to the } i\text{th sample,} \\ 0 & \text{otherwise,} \end{cases}
\]
where X_{i(k)} is the kth order statistic of the combined i samples (X_{11}, ..., X_{1n_1}), ..., (X_{i1}, ..., X_{in_i}), and T_{(1,\ldots,i-1)i} is a two-sample linear rank statistic computed on the ith sample versus the combined data in the first i − 1 samples. The associated α-level test rejects H_0 in favour of H_1 if JT is at least as large as the upper 100α% point of the null distribution of JT.
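As a concrete illustration, JT can be computed directly from this definition. The sketch below is ours, not the authors' code; `jt_statistic` and `mw_score` are hypothetical names, and ties are assumed absent (the model has continuous F).

```python
import numpy as np

def jt_statistic(samples, score):
    """Compute JT = sum_{i=2}^c N_i * T_{(1,...,i-1)i}.

    `samples` is a list of 1-D arrays, one per group, in the hypothesised
    order; `score` maps (N_i, k) to the two-sample score a(N_i, k).
    """
    jt = 0.0
    for i in range(1, len(samples)):                 # i = 2, ..., c (0-based)
        pooled = np.concatenate(samples[: i + 1])    # first i samples combined
        n_i = len(samples[i])
        N_i = len(pooled)
        # ranks (within the pooled first i samples) of the ith sample's values,
        # which occupy the last n_i positions of `pooled`
        order = np.argsort(pooled, kind="stable")
        ranks = np.nonzero(np.isin(order, np.arange(N_i - n_i, N_i)))[0] + 1
        T_i = sum(score(N_i, k) for k in ranks)
        jt += N_i * T_i
    return jt

# Mann-Whitney-Wilcoxon scores a(N, k) = 2k/(N+1) - 1 (Example 2 below)
mw_score = lambda N, k: 2.0 * k / (N + 1) - 1.0
```

Samples ordered in the hypothesised direction push JT up; reversing them flips its sign, since the MW scores are antisymmetric.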
For the Jonckheere-type statistic we get, under H_0,
\[
E(JT) = \sum_{i=2}^{c} \sum_{k=1}^{N_i} n_i\, a(N_i, k),
\]
and because of the independence of the statistics T_{(1,\ldots,i-1)i} (see Terpstra, 1952) it is easy to verify that
\[
\mathrm{var}(JT) = \sum_{i=2}^{c} \sum_{k=1}^{N_i} \frac{N_i N_{i-1} n_i}{N_i - 1} \bigl(a(N_i, k) - \bar a_{N_i}\bigr)^2,
\qquad\text{where}\qquad
\bar a_{N_i} = \frac{1}{N_i} \sum_{k=1}^{N_i} a(N_i, k).
\]
We assume
\[
\min(n_1, \ldots, n_c) \to \infty \quad\text{with}\quad \frac{n_i}{N} \to \lambda_i, \qquad N = \sum_{i=1}^{c} n_i, \quad 0 < \lambda_i < 1, \; i = 1, \ldots, c. \tag{2.1}
\]
Then, under H_0, the limiting distribution of (JT − E(JT))/(var(JT))^{1/2} is standard normal (cf. the Proposition below). Thus the critical value JT_{1−α} can be approximated by
\[
JT_{1-\alpha} = E(JT) + z_{1-\alpha}\,\bigl(\mathrm{var}(JT)\bigr)^{1/2},
\]
where z_{1−α} is the upper 100α% point of the standard normal distribution. Thus we reject H_0 if JT ≥ JT_{1−α}. Now we assume that the scores a(N_i, k) are generated by a so-called scores-generating function φ, i.e.
\[
a(N_i, k) = \varphi\Bigl(\frac{k}{N_i + 1}\Bigr)
\]
(see e.g. Randles & Wolfe, 1979 p. 272). In general, the optimal score function, corresponding to the test which is asymptotically most powerful for detecting a shift in the distribution function G, is given by
\[
\varphi(u, g) = -\frac{g'\bigl(G^{-1}(u)\bigr)}{g\bigl(G^{-1}(u)\bigr)}, \qquad 0 < u < 1,
\]
where g is the density function of G (see e.g. Randles & Wolfe, 1979 p. 299).
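For instance, taking g to be the standard logistic density recovers the Wilcoxon score 2u − 1. A numerical sketch (our illustration; `optimal_score_logistic` is a hypothetical name, and a finite-difference quotient stands in for g′):

```python
import math

def optimal_score_logistic(u):
    """phi(u, g) = -g'(G^{-1}(u)) / g(G^{-1}(u)) for the standard logistic
    density g(x) = e^{-x} / (1 + e^{-x})^2; the closed form is 2u - 1."""
    g = lambda x: math.exp(-x) / (1.0 + math.exp(-x)) ** 2
    x = math.log(u / (1.0 - u))              # G^{-1}(u), the logistic quantile
    h = 1e-6                                 # central-difference step for g'
    gp = (g(x + h) - g(x - h)) / (2.0 * h)
    return -gp / g(x)
```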
Now we assume
\[
\mu_\varphi := \int_0^1 \varphi(u, g)\, du = 0 \tag{2.2}
\]
and that the Fisher information I(g) is finite, i.e.
\[
I(g) = \int_0^1 \varphi^2(u, g)\, du < \infty. \tag{2.3}
\]
We list some examples of Jonckheere-type tests with specially chosen two-sample scores and their scores-generating functions φ(u) (0 < u < 1), and give, in parentheses, the type of distribution for which the test has high power.
Example 1. Gastwirth test G (short tails):
\[
a_G(N_i, k) = \frac{4}{N_i + 1} \begin{cases} k - \tfrac14 (N_i + 1) & \text{if } k \le \tfrac14 (N_i + 1), \\ 0 & \text{if } \tfrac14 (N_i + 1) < k < \tfrac34 (N_i + 1), \\ k - \tfrac34 (N_i + 1) & \text{if } k \ge \tfrac34 (N_i + 1), \end{cases}
\qquad
\varphi_G(u) = \begin{cases} 4u - 1 & \text{if } 0 < u \le \tfrac14, \\ 0 & \text{if } \tfrac14 < u < \tfrac34, \\ 4u - 3 & \text{if } \tfrac34 \le u < 1. \end{cases}
\]
Example 2. Mann–Whitney–Wilcoxon test MW (medium tails):
\[
a_{MW}(N_i, k) = \frac{2k}{N_i + 1} - 1, \qquad \varphi_{MW}(u) = 2u - 1.
\]
Example 3. Test LT (long tails):
\[
a_{LT}(N_i, k) = \begin{cases} -1 & \text{if } k < \lfloor \tfrac14 N_i \rfloor + 1, \\ \dfrac{4k}{N_i + 1} - 2 & \text{if } \lfloor \tfrac14 N_i \rfloor + 1 \le k \le \lfloor \tfrac34 (N_i + 1) \rfloor, \\ 1 & \text{if } k > \lfloor \tfrac34 (N_i + 1) \rfloor, \end{cases}
\qquad
\varphi_{LT}(u) = \begin{cases} -1 & \text{if } 0 < u < \tfrac14, \\ 2(2u - 1) & \text{if } \tfrac14 \le u \le \tfrac34, \\ 1 & \text{if } \tfrac34 < u < 1. \end{cases}
\]
Example 4. Hogg–Fisher–Randles test HFR (right-skewed):
\[
a_{HFR}(N_i, k) = \begin{cases} \dfrac{k}{N_i + 1} - \dfrac38 & \text{if } k \le \tfrac12 (N_i + 1), \\ \dfrac18 & \text{otherwise,} \end{cases}
\qquad
\varphi_{HFR}(u) = \begin{cases} u - \dfrac38 & \text{if } 0 < u \le \tfrac12, \\ \dfrac18 & \text{if } \tfrac12 < u < 1. \end{cases}
\]
The resulting Jonckheere-type tests based on the Gastwirth scores, the Mann–Whitney–Wilcoxon scores, the LT-scores and the Hogg–Fisher–Randles scores are abbreviated to JG, JMW, JLT and JHFR, respectively. The Mann–Whitney–Wilcoxon test MW, the Gastwirth test G, the long-tail test LT and the Hogg–Fisher–Randles test HFR with their scores-generating functions φ_MW, φ_G, φ_LT and φ_HFR, respectively, are the asymptotically most powerful rank tests for detecting a shift in a logistic (L), uniform-logistic (U-L), logistic-double exponential (L-D) and logistic-exponential (L-E) distribution, respectively. The densities of the last three distributions are given by
\[
f_{\text{U-L}}(x) = \begin{cases} \tfrac12\, e^{x+2} \bigl(1 + e^{x+2}\bigr)^{-2} & \text{if } x \le -2, \\ \tfrac18 & \text{if } -2 < x < 2, \\ \tfrac12\, e^{x-2} \bigl(1 + e^{x-2}\bigr)^{-2} & \text{if } x \ge 2; \end{cases}
\]
\[
f_{\text{L-D}}(x) = \begin{cases} \tfrac32\, e^{-\sqrt3\, x} \bigl(1 + e^{-\sqrt3\, x}\bigr)^{-2} & \text{if } |x| \le c_{\text{L-D}}, \\ \tfrac14\, e^{\,c_{\text{L-D}} - |x|} & \text{if } |x| > c_{\text{L-D}}, \end{cases}
\]
where c_{L-D} = (1/√3) ln[(√3 + 1)/(√3 − 1)] (see Hall & Joiner, 1983);
\[
f_{\text{L-E}}(x) = \begin{cases} \tfrac{9}{32}\, e^{3x/8} \bigl(1 + e^{3x/8}\bigr)^{-2} & \text{if } x \le c_{\text{L-E}}, \\ \tfrac{\sqrt[3]{2}}{16}\, e^{-x/8} & \text{if } x > c_{\text{L-E}}, \end{cases}
\]
where c_{L-E} = (8/3) ln 2.
Thus, we have φ(u, f_{U-L}) = φ_G(u), φ(u, f_L) = φ_MW(u), φ(u, f_{L-D}) = φ_LT(u), φ(u, f_{L-E}) = φ_HFR(u), and the assumptions above are obviously satisfied, where I(f_{U-L}) = 1/6, I(f_L) = 1/3, I(f_{L-D}) = 2/3 and I(f_{L-E}) = 5/192.
Proposition. Under assumptions (2.1), (2.2) and (2.3), the limiting distribution of JT/σ_JT is, under H_0, standard normal, where
\[
\sigma_{JT}^2 = \frac13 \Bigl(N^3 - \sum_{i=1}^{c} n_i^3\Bigr) I(g)
\]
and g is that density function for which the two-sample test in the definition of the test JT is asymptotically most powerful.
Proof. The two-sample statistics T_{(1,\ldots,i-1)i} are, under H_0, asymptotically normally distributed with expectation zero and variance n_i N_{i−1} I(g)/N_i (see Hájek & Šidák, 1967 Theorem V.1.5b). Therefore JT, being a linear combination of independent normal variates with expectation zero, is itself normal with expectation zero. The formula for σ²_JT is derived in Theorem 1.
3. Asymptotic Normality of Jonckheere-type Statistics Under Location Alternatives

In this section we study the limiting distribution of Jonckheere-type statistics JT under H_1. First, as linear combinations of two-sample rank statistics T_{(1,\ldots,i-1)i}, the JT-statistics are asymptotically normally distributed, JT ≈ N(μ_JT, σ²_JT). The independence of the statistics T_{(1,\ldots,i-1)i} (i = 2, ..., c), even under H_1, follows from Terpstra (1952).

Theorem 1 gives the parameters μ_JT and σ²_JT. The proof is based on the theorem of Chernoff & Savage (1958) and its generalization obtained by Hájek (1968) (see also Govindarajulu, LeCam & Raghavachari, 1967).
Theorem 1. Under the assumptions

(i) min(n_1, ..., n_c) → ∞ with n_i/N → λ_i, N = \sum_{i=1}^{c} n_i (0 < λ_i < 1, i = 1, ..., c); (see (2.1))

(ii) θ_i² N_{i−1} n_i / N_i → b_i² (0 ≤ b_i < ∞, i = 2, ..., c − 1; b_c > 0), where we assume θ_1 = 0 without loss of generality;

(iii) the score-generating function φ(u) of the two-sample statistic in the definition of JT is non-decreasing, square integrable, and absolutely continuous on [0, 1];

then (JT − μ_JT)/σ_JT has a limiting standard normal distribution with
\[
\mu_{JT} = \sum_{i=2}^{c} \theta_i n_i \bigl(N_{i-1} + N_i - N\bigr)\, d(f, g),
\]
where
\[
d(f, g) = \int_0^1 \varphi'(u, g)\, f\bigl(F^{-1}(u)\bigr)\, du \tag{3.1}
\]
and φ′ represents the derivative of φ almost everywhere, and
\[
\sigma_{JT}^2 = \frac13 \Bigl(N^3 - \sum_{i=1}^{c} n_i^3\Bigr) I(g).
\]
Proof. Part 1. First, consider the two-sample case. Let X_{11}, ..., X_{1n_1} and X_{21}, ..., X_{2n_2} denote independent random variables with the distribution functions F(x) and F(x − θ_2), respectively, where F has a density function f. Let
\[
T = \sum_{k=1}^{N_2} a(N_2, k)\, V_k, \qquad N_2 = n_1 + n_2,
\]
denote a linear rank statistic, where
\[
V_k = \begin{cases} 1 & \text{if } X_{(k)} \text{ belongs to the second sample,} \\ 0 & \text{otherwise,} \end{cases}
\]
and X_{(k)} is the kth order statistic of the combined two samples, k = 1, ..., N_2. Under certain strong assumptions on the scores-generating function φ(u), Chernoff & Savage (1958) have shown that the limiting distribution of (T − μ_T)/σ_T is standard normal with
\[
\mu_T = \theta_2\, \frac{n_1 n_2}{N_2}\, d(f, g), \tag{3.2}
\]
\[
\sigma_T^2 = \frac{n_1 n_2}{N_2} \int_0^1 \varphi^2(u, g)\, du = \frac{n_1 n_2}{N_2}\, I(g) \tag{3.3}
\]
(see also Hájek & Šidák, 1967 p. 237). The formulae (3.2) and (3.3) are also true under the milder condition (iii) of Theorem 1; φ′ is then the derivative of φ almost everywhere. This follows from a generalization of the theorem of Chernoff & Savage (1958) obtained by Hájek (1968 Theorem 2.3). Hájek's idea is based on the fact that the set of polynomials is a dense subset of the L₁-space of integrable functions.
Part 2. The two-sample statistic T_{(1,\ldots,i-1)i} is concerned with alternatives described by (\bar F_{i-1}(x),\, F(x - \theta_i)), where
\[
\bar F_{i-1}(x) = \frac{n_1}{N_{i-1}} F(x) + \frac{n_2}{N_{i-1}} F(x - \theta_2) + \cdots + \frac{n_{i-1}}{N_{i-1}} F(x - \theta_{i-1})
\]
is a mixture distribution function. By using Taylor expansions we obtain from the generalization of the Chernoff–Savage theorem (see Hájek & Šidák, 1967 p. 234)
\begin{align*}
E\bigl(T_{(1,\ldots,i-1)i}\bigr)
&\approx n_i \int_{-\infty}^{+\infty} \varphi\Bigl(\frac{N_{i-1}}{N_i}\, \bar F_{i-1}(x) + \frac{n_i}{N_i}\, F(x - \theta_i)\Bigr)\, dF(x - \theta_i) \\
&= n_i \int_{-\infty}^{+\infty} \varphi\Bigl(\sum_{t=1}^{i} \frac{n_t}{N_i}\, F(x - \theta_t)\Bigr)\, dF(x - \theta_i) \\
&= n_i \int_{-\infty}^{+\infty} \varphi\Bigl(\sum_{t=1}^{i} \frac{n_t}{N_i}\, F(x + \theta_i - \theta_t)\Bigr)\, dF(x) \\
&= n_i \int_{-\infty}^{+\infty} \varphi\Bigl(F(x) + \Bigl[\sum_{t=1}^{i-1} \frac{n_t}{N_i} (\theta_i - \theta_t)\Bigr] f(x) + O\bigl((\theta_i - \theta_1)^2\bigr)\Bigr)\, dF(x) \\
&= n_i \Bigl[\int_0^1 \varphi(u)\, du + \sum_{t=1}^{i-1} \frac{n_t}{N_i} (\theta_i - \theta_t) \int_{-\infty}^{+\infty} \varphi'(F(x))\, f(x)\, dF(x) + O\bigl((\theta_i - \theta_1)^2\bigr)\Bigr] \\
&\approx \Bigl(\theta_i\, \frac{N_{i-1} n_i}{N_i} - \sum_{t=1}^{i-1} \theta_t\, \frac{n_t n_i}{N_i}\Bigr)\, d(f, g) = \mu_{T_{(1,\ldots,i-1)i}},
\end{align*}
with d(f, g) defined in (3.1). From Hájek (1968 Theorem 2.3),
\[
\mathrm{var}\bigl(T_{(1,\ldots,i-1)i}\bigr) \approx \frac{N_{i-1} n_i}{N_i}\, I(g) = \sigma^2_{T_{(1,\ldots,i-1)i}}.
\]
Part 3. For the Jonckheere-type statistic JT = \sum_{i=2}^{c} N_i T_{(1,\ldots,i-1)i} we obtain immediately
\begin{align*}
E(JT) = \sum_{i=2}^{c} N_i\, E\bigl(T_{(1,\ldots,i-1)i}\bigr)
&\approx \sum_{i=2}^{c} \Bigl(\theta_i N_{i-1} n_i - \sum_{t=1}^{i-1} \theta_t n_t n_i\Bigr)\, d(f, g) \\
&= \sum_{i=2}^{c} \theta_i n_i \bigl(N_{i-1} + N_i - N\bigr)\, d(f, g) = \mu_{JT},
\end{align*}
and because of the independence of the two-sample statistics T_{(1,\ldots,i-1)i},
\[
\mathrm{var}(JT) = \sum_{i=2}^{c} N_i^2\, \mathrm{var}\bigl(T_{(1,\ldots,i-1)i}\bigr) \approx \sum_{i=2}^{c} N_i N_{i-1} n_i\, I(g) = \frac13 \Bigl(N^3 - \sum_{i=1}^{c} n_i^3\Bigr) I(g) = \sigma_{JT}^2.
\]
4. Asymptotic Power Function of Jonckheere-type Tests

The asymptotic power function β^{JT}_n of the JT-test is given by
\[
\beta^{JT}_{n}(\theta) = P_\theta\biggl(\frac{JT - E(JT)}{\sqrt{\mathrm{var}(JT)}} > z_{1-\alpha}\biggr) \approx 1 - \Phi\Bigl(z_{1-\alpha} - \frac{\mu_{JT}}{\sigma_{JT}}\Bigr),
\]
with n = (n_1, ..., n_c), θ = (0, θ_2, ..., θ_c) and μ_{JT}/σ_{JT} = A(n, θ)\, c(f, g), where
\[
A(n, \theta) = \frac{\displaystyle\sum_{i=2}^{c} \theta_i n_i \bigl(N_{i-1} + N_i - N\bigr)}{\sqrt{\tfrac13 \bigl(N^3 - \sum_{j=1}^{c} n_j^3\bigr)}}
\quad\text{and}\quad
c(f, g) = \frac{d(f, g)}{\sqrt{I(g)}}.
\]
Under assumptions (i) and (ii) of Theorem 1, the function A(n, θ) converges to A > 0, with
\[
A = \sqrt3\, \sum_{i=2}^{c} b_i \Bigl(\frac{\lambda_i \bar\lambda_i}{\bar\lambda_{i-1}}\Bigr)^{1/2} \bigl(\bar\lambda_{i-1} + \bar\lambda_i - 1\bigr) \Bigl(1 - \sum_{j=1}^{c} \lambda_j^3\Bigr)^{-1/2},
\]
where λ̄_i = λ_1 + ⋯ + λ_i. Theorem 2 follows.
Theorem 2. The asymptotic power function β_{JT} of the JT-test is given by
\[
\beta_{JT}(A) = 1 - \Phi\bigl(z_{1-\alpha} - A\, c(f, g)\bigr).
\]
Remarks. The function β_{JT} agrees with the asymptotic power function of the test based on the statistic V considered by Puri (1965). Thus the JT-test and the V-test are asymptotically equivalent, a result which is not immediately obvious. Puri's V is based on ½c(c − 1) two-sample comparisons (each sample with each other), whereas for the statistic JT we have just c − 1 comparisons via the statistics T_{(1,\ldots,i-1)i} (i = 2, ..., c). If, for example, c = 5, Puri's V includes ten comparisons; JT includes only four. Furthermore, the two-sample statistics defining V are dependent under H_0 and H_1, in contrast to the statistics T_{(1,\ldots,i-1)i} defining JT. Thus, the formula for the variance of JT is much simpler than that of V. Notice that the JT-test and the V-test are generally not equivalent, i.e. there are rank configurations which yield different decisions for the JT-test and the V-test.
TABLE 1
Values of c(f, g) for some JT-tests
Test Uni Norm Dexp Exp
JG 4.8990 0.9397 0.6124 2.4495
JMW 3.4641 0.9772 0.8660 1.7321
JLT 2.4495 0.9121 0.9186 1.2247
JHFR 3.0984 0.8740 0.7746 2.3238
TABLE 2
AREs of some JT-tests with respect to the most powerful rank tests
Test Norm Logistic Dexp Cauchy CN
JG 0.883 0.781 0.375 0.161 0.546
JMW 0.955 1.000 0.750 0.608 0.592
JLT 0.832 0.945 0.844 0.814 0.517
JHFR 0.764 0.800 0.600 0.486 0.712
In order to present some graphs of the asymptotic power function depending on A we have to calculate values of the function c(f, g) for the tests and distributions to be considered. Table 1 gives such values for the JG-, JMW-, JLT- and JHFR-tests from Section 2 and for four distributions: the uniform (Uni), normal (Norm), double exponential (Dexp) and the exponential (Exp). Figures 1–4 are derived from these values.
Table 2 presents Pitman AREs for the four tests compared with the asymptotically most powerful rank tests under normal and double exponential distributions, as well as for the logistic, the Cauchy and the contaminated normal distribution CN = 0.5N(1, 4) + 0.5N(−1, 1), which is skewed to the right. For the concept of Pitman ARE see e.g. Noether (1955) and Büning & Trenkler (1994).
Note that the JMW-test behaves well for symmetric distributions with medium to long
tails, the JLT-test behaves well for symmetric distributions with very long tails, and the JHFR-
test behaves well for asymmetric distributions. For similar results in the two-sample case see
also Randles & Wolfe (1979 p. 313).
5. Simulation Study

A Monte Carlo study (10,000 replications) was run to find out whether the asymptotic power function can be approximated well by finite power values calculated for large, medium or small sample sizes. We assumed uniform, normal, logistic, double exponential, Cauchy, contaminated normal and exponential distributions. These seven distributions cover a broad class of symmetric and asymmetric distributions with tails ranging from short to very long. To compare asymptotic and finite power values of the JG-, JMW-, JLT- and JHFR-tests, here we selected only four distributions: the uniform, normal, double exponential and the exponential. The critical values of the Jonckheere-type statistics were approximated by the standard normal distribution (see Section 2). We consider the case of c = 3 samples with equal sizes n_1 = n_2 = n_3 = 10, 40 and 160. The amount of shift is determined by the parameters θ_i = k_i σ_F (i = 1, 2, 3), where σ_F is the standard deviation of the underlying distribution function F. We select two values of the variable A in the asymptotic power function, namely A = 0.6√5 and A = √5, to get power values of different orders of magnitude. For the given
Fig. 1. Asymptotic power of four Jonckheere-type tests under uniform distribution
Fig. 2. Asymptotic power of four Jonckheere-type tests under normal distribution
Fig. 3. Asymptotic power of four Jonckheere-type tests under double exponential distribution
Fig. 4. Asymptotic power of four Jonckheere-type tests under exponential distribution
[Each figure plots power (0 to 1) against A for the JG-, JMW-, JLT- and JHFR-tests.]
TABLE 3
Sample sizes and shift parameters

n_i    h_j                 A
10     (0, 0.3, 0.6)       0.6√5
       (0, 0.5, 1.0)       √5
40     (0, 0.15, 0.3)      0.6√5
       (0, 0.25, 0.5)      √5
160    (0, 0.075, 0.15)    0.6√5
       (0, 0.125, 0.25)    √5
equal sample sizes n_1 = n_2 = n_3 = 10, 40 and 160, the vectors h_j = (k_{1j}, k_{2j}, k_{3j}) with θ_{ij} = k_{ij} σ_F (i = 1, 2, 3; j = 1, 2) can then be calculated from the formula for A(n, θ) in Section 4. Table 3 lists the components of the vectors h_1 and h_2.
Table 4 presents asymptotic and finite power values under the models given above. For each test and sample size, the two rows of power values correspond to the two different parameter vectors in Table 3.
Table 4 shows that for all tests considered the estimated power approaches the asymptotic power with increasing sample size. Even for moderate sample sizes (n = 40) the approximation works well, except for the JG-test under the uniform distribution and generally for the exponential distribution, where the convergence is slower than for the other distributions. The JLT-test is an exception here.
The Jonckheere-type test JMW is the best for the normal distribution, but for the other
TABLE 4
Power values (%) of some JT-tests
Test Asymp 160 40 10 Asymp 160 40 10
Uniform Normal
JG 59.95 56.83 54.78 48.62 35.04 34.29 33.71 32.62
93.53 91.32 87.21 81.29 67.59 66.70 65.51 60.97
JMW 38.08 37.16 36.40 33.71 36.92 36.46 36.09 35.92
72.27 70.26 68.23 64.46 70.54 69.60 69.03 67.15
JLT 24.31 23.67 24.55 26.08 33.68 33.81 33.28 32.86
47.45 47.55 47.86 50.21 65.34 64.80 64.34 62.99
JHFR 32.82 31.67 31.30 28.49 31.83 31.76 30.25 29.40
63.87 61.41 59.03 55.69 62.15 61.17 59.12 56.74
Double Exponential Exponential
JG 31.45 31.07 31.01 30.11 94.96 90.06 82.94 56.54
61.47 60.14 60.00 56.71 99.99 99.81 98.68 77.18
JMW 49.92 48.45 47.69 44.95 75.14 70.78 66.75 58.15
86.29 85.25 82.92 79.06 98.71 97.00 94.49 88.20
JLT 53.90 53.40 51.93 47.35 49.92 50.78 51.05 53.61
89.61 88.24 86.49 80.40 86.29 86.29 87.32 86.78
JHFR 43.04 41.21 39.95 36.26 92.96 89.63 85.36 77.32
78.94 76.29 74.00 66.15 99.98 99.79 99.46 97.18
distributions, specially chosen Jonckheere-type tests are better than the JMW-test, i.e. the JG-test for the uniform, the JLT-test for the double exponential, and the JHFR-test and the JG-test for the exponential distribution.
The good approximation of finite power by the asymptotic power function, even in the case of moderate sample sizes, holds also for the other distributions included in our study. These power tables can be obtained from the authors on request. For further power comparisons of distribution-free tests against ordered alternatives see Rao & Gore (1984), Kochar & Kochar (1989), Kumar, Gill & Mehta (1994) and Büning (1997).
6. Conclusions

In this paper, the goals were, firstly, to show that finite power values can be approximated well by the asymptotic power function even in the case of moderate sample sizes, and secondly, to show that there are modifications of the Jonckheere-type test JMW which have higher (asymptotic and finite) power than the JMW-test under departures from normality. But usually the practising statistician has no exact information about the underlying distribution of the data. Thus an adaptive (distribution-free) test should be applied which takes the given dataset into account. Büning (1995b) gives power results of adaptive tests based on the concept of Hogg et al. (1975) and compares them with the test of Barlow et al. (1972) and with the single Jonckheere-type tests in the adaptive scheme. Büning (1995a) discusses adaptive tests in the c-sample location problem with two-sided alternatives.
References
BARLOW, R.E., BARTHOLOMEW, D.J., BREMNER, J.M. & BRUNK, H.D. (1972). Statistical Inference Under Order Restrictions. New York: Wiley.
BÜNING, H. (1995a). Adaptive tests for the c-sample location problem – the case of two-sided alternatives. Comm. Statist. A Theory Methods 25, 1569–1582.
BÜNING, H. (1995b). Adaptive Jonckheere-type tests for ordered alternatives. Diskussionspapier Nr. 7 des Instituts für Statistik und Ökonometrie der Freien Universität Berlin.
BÜNING, H. (1997). Robust analysis of variance. J. Appl. Statist. 24, 319–332.
BÜNING, H. & KÖSSLER, W. (1996). Robustness and efficiency of some tests for ordered alternatives in the c-sample location problem. J. Statist. Comput. Simulation 55, 337–352.
BÜNING, H. & TRENKLER, G. (1994). Nichtparametrische Statistische Methoden. 2. völlig neu bearbeitete Auflage. Berlin: De Gruyter.
CHERNOFF, H. & SAVAGE, I.R. (1958). Asymptotic normality and efficiency of certain nonparametric test statistics. Ann. Math. Statist. 29, 972–994.
GASTWIRTH, J.L. (1965). Percentile modifications of two-sample rank tests. J. Amer. Statist. Assoc. 60, 1127–1140.
GOVINDARAJULU, J., LECAM, L. & RAGHAVACHARI, M. (1967). Generalizations of theorems of Chernoff and Savage on the asymptotic normality of test statistics. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. Berkeley: University of California Press.
HÁJEK, J. (1968). Asymptotic normality of simple linear rank statistics under alternatives. Ann. Math. Statist. 39, 325–346.
HÁJEK, J. & ŠIDÁK, Z. (1967). Theory of Rank Tests. New York: Academic Press.
HALL, D.L. & JOINER, B.L. (1983). Asymptotic relative efficiencies of R-estimators of location. Comm. Statist. A Theory Methods 12, 739–763.
HOGG, R.V., FISHER, D.M. & RANDLES, R.H. (1975). A two-sample adaptive distribution-free test. J. Amer. Statist. Assoc. 70, 656–661.
JONCKHEERE, A.R. (1954). A distribution-free k-sample test against ordered alternatives. Biometrika 41, 133–145.
KOCHAR, A. & KOCHAR, S.C. (1989). Some distribution-free tests for testing homogeneity of location parameters against ordered alternatives. J. Indian Statist. Assoc. 27, 1–8.
KUMAR, N., GILL, A.N. & MEHTA, G.P. (1994). Distribution-free test for homogeneity against ordered alternatives. Comm. Statist. A Theory Methods 23, 1247–1257.
NOETHER, G.E. (1955). On a theorem of Pitman. Ann. Math. Statist. 26, 64–68.
PURI, M.L. (1965). Some distribution-free k-sample rank tests of homogeneity against ordered alternatives. Comm. Pure Appl. Math. 18, 51–63.
RANDLES, R.H. & WOLFE, D.A. (1979). Introduction to the Theory of Nonparametric Statistics. New York: Wiley.
RAO, K.S.M. & GORE, A.P. (1984). Testing against ordered alternatives in one-way layout. Biometrical J. 26, 25–32.
TERPSTRA, T.J. (1952). The asymptotic normality and consistency of Kendall's test against trend, when ties are present in one ranking. Indag. Math. 14, 327–333.