$$\pi_{tj}^{-1/2}\bigl(d_{tj} - \pi_{tj}\bigr) = \pi_{tj}^{-1/2}\,\frac{\partial \pi_{tj}}{\partial \theta^{\top}}\, b + \text{residual} \qquad (11.42)$$

for $t = 1, \ldots, n$ and $j = 0, \ldots, J$. Here, the vector $\theta$, intended to denote all the parameters of the alternative hypothesis, is to be replaced by a vector whose first $k$ components are those of the vector $\beta$ in the conditional logit model (11.36) and whose last $m$ components are the $\theta_i$ of the nested logit model (11.40). In this particular case, $m = 2$. For the test of the conditional logit model, we must specify all the ingredients of regression (11.42) for that model. It is to be understood that we wish to test the specification (11.36) against the alternative specification (11.40), where we impose the constraints of the conditional logit model, that is, we require $\beta_j = \beta$ for all $j = 0, \ldots, J$. Thus, under the null hypothesis, we have

$$\pi_{tj} = \frac{\exp(W_{tj}\beta)}{\sum_{l=0}^{J} \exp(W_{tl}\beta)}.$$
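Under the null hypothesis, the $\pi_{tj}$ are ordinary conditional logit probabilities. The following is a minimal NumPy sketch of this formula; the function name and array layout are illustrative assumptions, not from the text.

```python
import numpy as np

def condlogit_probs(W, beta):
    """Null-hypothesis probabilities pi_tj = exp(W_tj beta) / sum_l exp(W_tl beta).

    W    : (n, J+1, k) attributes of each alternative (illustrative layout)
    beta : (k,) common coefficient vector
    """
    u = W @ beta                        # (n, J+1) indices W_tj beta
    u -= u.max(axis=1, keepdims=True)   # stabilize exp against overflow
    e = np.exp(u)
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the result sums to one; subtracting the row maximum before exponentiating leaves the probabilities unchanged, since they are invariant to adding a constant to every index.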
The derivatives with respect to the $k$ components of $\beta$ are obtained by summing the derivatives

$$\frac{\partial \pi_{tj}}{\partial \beta_l} = \pi_{tj}\, W_{tl}\,(\delta_{jl} - \pi_{tl}),$$

which were derived in the solution to Exercise 11.21, over $l = 0, \ldots, J$ and setting all the $\beta_l$ equal to the common $\beta$. We obtain, for $h = 1, \ldots, k$, that

$$\frac{\partial \pi_{tj}}{\partial \beta_h} = \pi_{tj} \sum_{l=0}^{J} (W_{tl})_h\,(\delta_{jl} - \pi_{tl}),$$

where $(W_{tl})_h$ denotes the $h$th element of $W_{tl}$. Since there are just two subsets of outcomes, the index $i$ of the $A_i$ takes on just two values, 1 and 2. For $j = 0, 1$, we have that $i(j) = 1$, and so

$$\frac{\partial \pi_{tj}}{\partial \theta_1} = \pi_{tj}\bigl(v_{tj} - \pi_{t0} v_{t0} - \pi_{t1} v_{t1}\bigr) \quad\text{and}\quad \frac{\partial \pi_{tj}}{\partial \theta_2} = -\pi_{tj}\,\pi_{t2}\, v_{t2}.$$

Copyright © 2003, Russell Davidson and James G. MacKinnon
Similarly, for $j = 2$,

$$\frac{\partial \pi_{t2}}{\partial \theta_1} = -\pi_{t2}\bigl(\pi_{t0} v_{t0} + \pi_{t1} v_{t1}\bigr) \quad\text{and}\quad \frac{\partial \pi_{t2}}{\partial \theta_2} = \pi_{t2}\bigl(v_{t2} - \pi_{t2} v_{t2}\bigr).$$

The inclusive values, $h_{ti}$, are given by (11.39). With a common parameter vector $\beta$ and with the $\theta_i$ equal to 1, they are

$$h_{t1} = \log\bigl(\exp(W_{t0}\beta) + \exp(W_{t1}\beta)\bigr) \quad\text{and}\quad h_{t2} = \log \exp(W_{t2}\beta) = W_{t2}\beta.$$

Thus the quantities $v_{tj}$, $j = 0, 1, 2$, are

$$v_{t0} = h_{t1} - W_{t0}\beta, \quad v_{t1} = h_{t1} - W_{t1}\beta, \quad\text{and}\quad v_{t2} = h_{t2} - W_{t2}\beta = 0.$$
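These ingredients are easy to check numerically. The sketch below uses illustrative values (the helper names are not from the text): it verifies the analytic derivative $\partial\pi_{tj}/\partial\beta_h = \pi_{tj}\sum_l (W_{tl})_h(\delta_{jl}-\pi_{tl})$ against a central finite difference for one observation, then computes the inclusive values under the null and confirms that $v_{t2} = 0$ for the singleton nest.

```python
import numpy as np

def probs(W, beta):
    # conditional logit probabilities for a single observation; W is (J+1, k)
    e = np.exp(W @ beta)
    return e / e.sum()

def dprobs_dbeta(W, beta):
    # analytic Jacobian: d pi_j / d beta_h = pi_j * sum_l (W_l)_h (delta_jl - pi_l)
    #                                      = pi_j * ((W_j)_h - sum_l pi_l (W_l)_h)
    p = probs(W, beta)
    return p[:, None] * (W - p @ W)

# one observation with J + 1 = 3 alternatives and k = 2 (illustrative numbers)
W = np.array([[0.2, -0.1], [1.0, 0.3], [-0.5, 0.7]])
beta = np.array([0.4, -0.8])

# central finite-difference check of the beta derivatives
eps = 1e-6
num = np.empty((3, 2))
for h in range(2):
    bp, bm = beta.copy(), beta.copy()
    bp[h] += eps
    bm[h] -= eps
    num[:, h] = (probs(W, bp) - probs(W, bm)) / (2 * eps)
assert np.allclose(num, dprobs_dbeta(W, beta), atol=1e-8)

# inclusive values under the null (theta_i = 1) for nests A_1 = {0, 1}, A_2 = {2}
u = W @ beta                                   # indices W_tj beta
h1 = np.log(np.exp(u[0]) + np.exp(u[1]))
h2 = np.log(np.exp(u[2]))                      # = W_t2 beta for the singleton nest
v = np.array([h1 - u[0], h1 - u[1], h2 - u[2]])
assert abs(v[2]) < 1e-12                       # so theta_2 drops out of the regression
```

Because $h_{t1}$ is a log-sum-exp over the first nest, $v_{t0}$ and $v_{t1}$ are nonnegative, while the singleton nest forces $v_{t2} = 0$ identically.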
This last result, that $v_{t2} = 0$, implies that $\partial\pi_{tj}/\partial\theta_2 = 0$ for $j = 0, 1, 2$. As we could have suspected, the fact that the second group is a singleton means that $\theta_2$ cannot be identified, and so we cannot test its value. Thus, in this case, the artificial regression (11.42) has $3n$ observations and just one testing regressor. For observation $t$, the regressand has the three components

$$\pi_{t0}^{-1/2}(d_{t0} - \pi_{t0}), \quad \pi_{t1}^{-1/2}(d_{t1} - \pi_{t1}), \quad\text{and}\quad \pi_{t2}^{-1/2}(d_{t2} - \pi_{t2}).$$
Of course, all of the $\pi_{tj}$ and $v_{tj}$ here are to be evaluated at the ML estimates of the conditional logit model. The easiest test statistic is the explained sum of squares, which should be distributed as $\chi^2(1)$. The $t$ statistic on the testing regressor is not suitable unless it is multiplied by the standard error of the regression, in which case it is exactly equal to the square root of the explained sum of squares.
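To make the recipe concrete, here is a minimal NumPy sketch of the artificial regression with its $3n$ stacked observations and the explained-sum-of-squares statistic. The function name and array shapes are assumptions for illustration; the inputs are understood to be evaluated at the conditional logit ML estimates.

```python
import numpy as np

def dcar_ess(d, P, dP_beta, dP_theta1):
    """Explained sum of squares from the artificial regression (11.42).

    Each (t, j) pair contributes one observation: regressand
    pi_tj^{-1/2} (d_tj - pi_tj), regressors pi_tj^{-1/2} dpi_tj/dbeta_h,
    and the single testing regressor pi_tj^{-1/2} dpi_tj/dtheta_1.

    d         : (n, 3) indicators d_tj
    P         : (n, 3) fitted probabilities pi_tj (at the ML estimates)
    dP_beta   : (n, 3, k) derivatives of pi_tj with respect to beta
    dP_theta1 : (n, 3) derivative of pi_tj with respect to theta_1
    """
    w = P ** -0.5
    y = (w * (d - P)).ravel()                            # 3n regressands
    X = np.column_stack([
        (w[:, :, None] * dP_beta).reshape(y.size, -1),   # k regressors
        (w * dP_theta1).ravel(),                         # testing regressor
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    return fitted @ fitted                               # compare with chi^2(1)
```

At the ML estimates the regressand should be orthogonal to the $k$ non-testing regressors, so the ESS is driven entirely by the testing regressor; this is also why the ordinary $t$ statistic must be rescaled by the standard error of the regression before it reproduces the square root of the ESS.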