
Journal of Financial Economics 19 (1987) 195-215.

North-Holland

A BAYESIAN APPROACH TO TESTING


PORTFOLIO EFFICIENCY *

Jay SHANKEN

Received December 1986, final version received June 1987

This paper develops a Bayesian test of portfolio efficiency and derives a computationally
convenient posterior-odds ratio. The analysis indicates that significance levels higher than the
traditional 0.05 level are recommended for many test situations. In an example from the literature,
the classical test fails to reject with p-value 0.082, yet the odds are nearly two to one against
efficiency under apparently reasonable assumptions. Procedures for testing approximate efficiency
and for aggregating subperiod results are also considered.

1. Introduction
In recent years, much progress has been made in developing and under-
standing multivariate tests of portfolio efficiency. Several tests have been
introduced and their distributional properties studied both analytically and in
simulations.¹ In addition, intuitive economic interpretations of the test statis-
tics, based on mean-variance portfolio geometry, have been provided.² In all
applications thus far, inference is conducted in the classical statistical frame-
work with emphasis on the p-value or observed significance level of the test
statistic:³ the null hypothesis is rejected when the p-value is less than some
pre-assigned significance level, often taken to be 0.05 or 0.01.

*Financial support under the Batterymarch Fellowship Program is gratefully acknowledged.


This paper was presented in seminars at Stanford University, the University of Rochester, the
University of Illinois at Urbana-Champaign, the American Finance Association meetings, and the
NBER-NSF seminar on Bayesian Inference in Econometrics at Duke University. Thanks are due
to M. Gibbons, B. Lehmann, J. Long, D. Modest, C. Plosser, B. Schwert, P. Seguin, J. Warner,
and M. Weinstein for comments on an earlier draft. Computational assistance was provided by
P. Seguin.
¹See Gibbons (1982), Stambaugh (1982), Jobson and Korkie (1982), Shanken (1985, 1986),
Amsler and Schmidt (1985), MacKinlay (1987), and Gibbons, Ross and Shanken (1986).
²See Kandel (1984), Roll (1985), and Gibbons, Ross and Shanken (1986).
³Posterior-odds ratios have been used in asset pricing tests by Brown and Weinstein (1983) and
Chen (1983). The statistical models underlying these applications presume independent and
identically distributed disturbances, however, and thus are not appropriate in the multivariate
context of this paper.

0304-405X/87/$3.50 © 1987, Elsevier Science Publishers B.V. (North-Holland)


Fisher (1959, p. 39) discusses the basis for traditional tests of significance,
emphasizing the logical disjunction: when the test statistic is significant, either
a rare event has occurred or the null hypothesis is false. Neyman and Pearson
develop a richer framework for inference by focusing on the power function of
the test, i.e., the probability of rejecting the null hypothesis under various
alternatives. The power function is useful in comparing different tests and in
evaluating the potential of a given test to resolve uncertainty about the null
hypothesis. Since performance is averaged over all possible realizations of the
test statistic, however, such considerations are relevant mainly from a pre-
experimental perspective. In contrast, this paper focuses on the conditional
aspects of inference, given the observed statistic.⁴
In some contexts, the tail area is the optimal critical region for a test at a
given significance level. In such cases, reporting the p-value as a summary
measure of statistical evidence permits researchers with different significance
levels to determine whether the null hypothesis is rejected by the data. The
appropriate level is generally not obvious, however, nor is it clear that the
p-value is the best summary measure available. Lindley (1957) and Jeffreys
(1961) emphasize that the p-value represents the chance, under the null
hypothesis, of the observed event and more extreme ones, whereas the chance
of the observed event itself is measured by the likelihood function.⁵ Intuitively,
the likelihoods under the null and the alternative would seem to be the
relevant quantities to compare.
An example due to Lindley and Phillips (1976) illustrates the differences
between classical inference and conditional inference through the likelihood
function. Say a coin is flipped twelve times, resulting in nine heads and three
tails. The flips are independent, each with probability 0 of coming up heads.
We wish to test the null hypothesis. B = 0.5, against the alternative, 0 > 0.5.
Two scenarios are of interest; in one, the number of flips is determined in
advance (binomial model); in the other, the coin is tossed until three tails are
observed (negative binomial model). In each case, the likelihood of nine heads
is proportional to B9(1 - 8)3. Since relative likelihoods are not affected by a
constant of proportionality, each result conveys the same information about 8
from a conditional perspective.
In contrast, classical analysis focuses on the probability of observing at feast
nine heads, under the assumption that 8 equals its null value 0.5. This
probability is 0.075 for the binomial model and 0.033 for the negative
binomial model. so the null hypothesis may be rejected in one case but not the
other. This is because the p-value incorporates probabilities for potential

⁴Considerations of power, in the portfolio efficiency context, appear in Gibbons, Ross and
Shanken (1986) and MacKinlay (1987).
⁵Recall that the likelihood function is just the probability density, viewed as a function of θ,
with the sample fixed.
outcomes that are not observed. Since the sample spaces for the two models
differ, it is not surprising that the associated p-values differ as well.
To interpret the likelihood function for this problem, note that θ⁹(1 − θ)³
achieves a global maximum at θ = 0.75. Alternative values of θ less than 0.92
are more likely than the null value, 0.5, while higher values of θ are less likely.
Given a prior belief about θ, a Bayesian weights the likelihoods under the
various alternatives and compares this average with the likelihood under the
null hypothesis. Unless values of θ above 0.92 are considered a priori more
probable than other alternatives, the odds will favor the alternative hy-
pothesis, θ > 0.5.
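The arithmetic behind this example is easy to verify; a minimal sketch in plain Python (no stopping-rule machinery is needed, since only the likelihood kernel θ⁹(1 − θ)³ matters):

```python
def likelihood_ratio(theta):
    """Likelihood of 9 heads and 3 tails at theta, relative to the null
    value theta = 0.5. Both the binomial and negative binomial models
    share the kernel theta**9 * (1 - theta)**3, so the ratio is the
    same under either stopping rule."""
    return (theta**9 * (1 - theta)**3) / (0.5**12)

# The kernel peaks at the MLE theta = 9/12 = 0.75; values of theta
# below roughly 0.92 are more likely than the null.
for theta in (0.60, 0.75, 0.92, 0.95):
    print(theta, round(likelihood_ratio(theta), 2))
```

Under even prior odds on the null versus a single alternative value of θ, these ratios are also the posterior odds.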
These issues are explored here in the context of the portfolio efficiency
problem. The results indicate that the impression conveyed by the (weighted)
likelihood or posterior-odds ratio often differs from a traditional interpretation
of the corresponding p-value. Confronted with related observations in other
contexts, some classical statisticians acknowledge a need to adjust the signifi-
cance level of a test to reflect factors such as sample size. Bayesian analysis
provides a rational framework for making this subtle judgment.
The remainder of the paper is organized as follows. Section 2 contains
definitions and basic results that are needed later. Section 3 develops a
Bayesian test of efficiency against a simple alternative and explores the
relation between p-value and posterior degree of belief. This relation is
examined more closely in section 4, in the context of a composite alternative,
using a computationally convenient formula for the posterior-odds ratio. Some
empirical results from the portfolio efficiency literature are re-assessed in
section 5. A test of approximate efficiency and a procedure for aggregating
subperiod results are also discussed. Section 6 contains concluding remarks.
Technical results and proofs are given in an appendix.

2. Preliminary concepts and definitions

Let R be an N-vector of asset returns and P a K-vector of portfolio
returns. Returns are in excess of the riskless rate of interest, r. Assume that
the N + K components of R and P constitute a linearly independent set of
random variables. We wish to test whether some portfolio of the components
of P equals the tangency portfolio, t, determined by r, P, and R. If so, we say
that P is efficient. This coincides with the usual definition of portfolio
efficiency in the presence of a riskless asset, when K = 1, provided r is less
than the expected return on the global minimum variance portfolio of the
risky assets.
Consider the multivariate linear regression of R on P:

R = a + BP + e,

where a is an N-vector of intercepts, B an N × K matrix of regression
coefficients, and e an N-vector of disturbances with mean zero and covariance
matrix Σ. Let â be the N-vector of OLS intercept estimates, obtained from N
separate excess return time series regressions, and let Σ̂ be the usual unbiased
estimator of Σ. Define

θ_P² = E(P)′Σ_P⁻¹E(P),

where E(P) is the mean vector and Σ_P the covariance matrix for P. θ_P² is the
maximum squared Sharpe measure of performance (ratio of expected excess
return to standard deviation of return) over all portfolios of the components of
P.⁶ Replacing E(P) and Σ_P by their sample counterparts yields the maximum
likelihood estimator θ̂_P².
P is efficient if and only if a = 0. This hypothesis can be tested using the
statistic

F = [N⁻¹T(T − N − K)/(T − K − 1)] â′Σ̂⁻¹â/(1 + θ̂_P²),

which is distributed as non-central F with degrees of freedom N and T − N − K
and non-centrality parameter

λ = Ta′Σ⁻¹a/(1 + θ̂_P²),

conditional on P.⁷ Under the null hypothesis, λ = 0 and F has a central F
distribution.
To develop some intuition for λ, consider an equivalent expression:

λ = Tθ̂_P²(ρ⁻² − 1)/(1 + θ̂_P²), (1)

where θ_t is the tangency portfolio Sharpe measure and ρ = θ_P/θ_t is the
multiple correlation between P and t.⁸ If P is efficient, then ρ = 1. In general,
ρ is between zero and one in magnitude and serves as a natural measure of the
relative efficiency of P.
Since our statistical analysis is conditioned on the time series of realizations
for P, the prior information set includes the estimator θ̂_P. The examples in
sections 3 and 4 suppose that θ̂_P = 0.5 (annualized). This is a reasonable
⁶See Jobson and Korkie (1982, p. 443).
⁷See Gibbons, Ross and Shanken (1986). The assumptions are that a, B, and Σ are constant
and that e is independently multivariate normally distributed over time, conditional on P. Also,
T − N − K > 0.
⁸See Shanken (1987). An equivalent result is derived in Gibbons, Ross and Shanken (1986).
Related results are contained in Kandel and Stambaugh (1986) and MacKinlay (1986).

number for, say, a diversified stock index with an average excess return of 10
percent and a standard deviation of 20 percent per annum. For simplicity, all
prior mass for the unknown parameter θ_P is placed on the ex post value θ̂_P.
This assumption is weakened later in the paper.
The specification of a prior distribution for the relative efficiency parameter
ρ is more difficult and could vary with either the number or variety of assets
included in the return vector R. With θ̂_P = 0.5 (annualized), a value of ρ as low
as one-third seems unlikely, for this would imply that θ_t = (3)(0.5) = 1.5, i.e., a 30
percent risk premium on a standard deviation of 20 percent. Therefore, values
of ρ between 0.4 and 1.0 are emphasized in the examples below, although
enough information is provided that the reader can explore other scenarios.

3. Testing efficiency against a simple alternative


To illustrate the Bayesian approach, consider a test of efficiency (λ = 0)
versus the simple alternative that λ equals some λ₁ greater than zero. Let π
be the prior probability that the null hypothesis is true and let f(F|λ) be the
non-central F density function. By Bayes's rule, the posterior probability that
the null is true, given the observed test statistic, is

P(λ = 0|F) = πf(F|0)/[πf(F|0) + (1 − π)f(F|λ₁)],

and the posterior-odds ratio in favor of the alternative is

P(λ = λ₁|F)/P(λ = 0|F) = [(1 − π)/π][f(F|λ₁)/f(F|0)],

i.e., the prior-odds ratio times the likelihood ratio. Henceforth, we assume
that π = 1 − π = ½, representing even prior odds. In this case, the posterior-
odds ratio equals the likelihood ratio. Assuming a fixed loss for an incorrect
inference and no loss otherwise, the null hypothesis is rejected when the odds
ratio exceeds one or, equivalently, when the posterior probability for the
alternative exceeds 0.5.⁹
Suppose the F test for efficiency (K = 1) yields a p-value of 0.10, failing to
reject at the usual levels of significance. Assume the alternative of interest is
ρ = 0.5. Say N = 20 assets, T = 60 months, and θ̂_P = 0.144 (0.144 × √12 ≈ 0.5
annualized). With the prior distribution for θ_P concentrated on θ̂_P, the non-
centrality parameter under the alternative is

λ₁ = 60(0.021)(0.5⁻² − 1)/1.021 = 3.70.

⁹More complicated loss functions could arise in the context of a specific decision problem and
can be incorporated in the analysis. See DeGroot (1970) for a good introduction to Bayesian
decision theory.
Fig. 1. Non-central F density function under the null hypothesis of efficiency (ρ = 1) and two
alternatives. ρ is the measure of relative efficiency for the portfolio. The vertical line identifies
likelihood values for the observed F statistic, 1.61, with p-value 0.10. N = 20 assets and T = 60
time series observations.

The likelihood ratio in favor of the alternative, obtained numerically, is 1.60.


The associated posterior probability that the alternative is true is 1.6/(1 + 1.6)
= 0.62.
Now suppose the same p-value, 0.10, had been obtained using all 60 years
of data on the CRSP monthly return file (T = 720). Given the same prior
distribution for θ_P and ρ as above, the non-centrality parameter is 44.42 and
the likelihood ratio in favor of the alternative is 0.03; thus, the odds are now
more than 30 to 1 in favor of the null hypothesis. This is a version of the
Lindley (1957) paradox: testing at a fixed significance level, regardless of
sample size, can lead to rejection of the null hypothesis in situations where the
odds are overwhelmingly in its favor.
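Both likelihood ratios can be reproduced with standard routines for the central and non-central F distributions; a sketch using scipy (the observed F value is backed out from the fixed p-value of 0.10, and the function name is mine):

```python
from scipy import stats

def likelihood_ratio_vs_null(N, T, K, theta_hat_sq, rho, p_value):
    """f(F | lambda_1) / f(F | 0) for the simple alternative rho,
    where F is the observed statistic implied by the p-value.
    With even prior odds this is also the posterior-odds ratio."""
    dfd = T - N - K                        # denominator degrees of freedom
    F_obs = stats.f.isf(p_value, N, dfd)   # F value with the given p-value
    lam = T * theta_hat_sq * (rho**-2 - 1) / (1 + theta_hat_sq)  # eq. (1)
    return stats.ncf.pdf(F_obs, N, dfd, lam) / stats.f.pdf(F_obs, N, dfd)

theta_hat_sq = 0.144**2   # monthly Sharpe measure, roughly 0.5 annualized
lr_60 = likelihood_ratio_vs_null(20, 60, 1, theta_hat_sq, 0.5, 0.10)
lr_720 = likelihood_ratio_vs_null(20, 720, 1, theta_hat_sq, 0.5, 0.10)
print(lr_60, lr_720)   # roughly 1.6 and 0.03
```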
The densities for T = 60 and T = 720 shown in figs. 1 and 2, respectively,
provide some intuition for the results above. In each figure, the area to the
right of the vertical spike under the null density, ρ = 1, equals the observed
p-value 0.10. The likelihood of each hypothesis is the ordinate of the associ-
ated density at the F value identified by the spike.
When T = 60, the observed F value lies in the right tail of the distribution
(tail probability = 0.20) under the alternative, ρ = 0.5. It is farther out in the
right tail of the null distribution (tail probability = 0.10), however, accounting
Fig. 2. Non-central F density function under the null hypothesis of efficiency (ρ = 1) and two
alternatives. ρ is the measure of relative efficiency for the portfolio. The vertical line identifies
likelihood values for the observed F statistic, 1.43, with p-value 0.10. N = 20 assets and T = 720
time series observations.

for the likelihood ratio, 1.60, in favor of the alternative. When T = 720, the
non-centrality parameter is much larger and the observed statistic is in the
extreme left tail of the alternative distribution (tail probability = 1.00). In
other words, for this sample size much greater values of F are expected under
the alternative, so the likelihood ratio, 0.03, is very low.
One might conclude from this example that a reduction in the significance
level of the test should always accompany an increase in sample size. To see
that this is not the case, consider a test of efficiency, as above, but with the
simple alternative, ρ = 0.75. The non-centrality parameter for each sample size
is lower now, since the hypothetical deviation from efficiency is smaller. As
shown in figs. 1 and 2, the observation with p-value 0.10 is still in the right tail
(tail probability = 0.12) of the alternative distribution when T = 60, but is in
the central portion of the alternative distribution (tail probability = 0.59) when
T = 720. Accordingly, there is slight evidence in favor of the alternative
(likelihood ratio = 1.16) when T = 60 and much stronger evidence (likelihood
ratio = 2.04) when T = 720.
These examples demonstrate that the interpretation of a given p-value can
vary substantially from one context to another. Although sample size is an
important consideration, the mapping into a reasonable degree of belief also

depends on one's prior belief about the relevant alternative(s). Given this
assessment, the evidence favors the hypothesis under which it is more likely
to have been observed. A related comparison involves the p-value and the
probability of rejection (at that level) under the alternative. Our examples
show that a comparison of these tail probabilities does not convey the same
information as the likelihood ratio. In each case, the tail probability under the
alternative increases with sample size, yet in one example the odds ratio
decreases, whereas in the other, it increases with T.¹⁰
As noted earlier, the likelihood ratio is conditioned solely on the sample
evidence, whereas the event in the tail comparison consists of the observed
level of the statistic and more extreme values that are not observed. To a
Bayesian, all of the relevant sample information is contained in the likelihood
function for the observed statistic, and potential outcomes that have not
occurred have no bearing on the post-experimental interpretation of the data.¹¹

4. Testing efficiency against a composite alternative

The previous section develops a test of efficiency against a simple alternative


and considers the sensitivity of the resulting odds ratio to different values of
the alternative. Given a prior weighting of the alternatives, this information
can be incorporated in a single odds ratio. The discrete case with finitely many
alternatives is discussed first, followed by an analysis of the continuous case
for a particular class of prior distributions.¹²

4.1. The discrete case

Table 1 contains likelihood ratios for various levels of relative efficiency,


including those considered in the previous section. Numerical methods are
used in the absence of a closed-form formula for the non-central F density.¹³

¹⁰In both cases the tail probability under the alternative increases by a factor of about five
when T increases. Of course, all tail probabilities under the null hypothesis equal 0.10 by
assumption.
¹¹See Berger (1985), especially section 1.6, for an interesting discussion of these issues.
¹²Zellner's (1984) section 3.7 is a brief but stimulating survey of the odds-ratio literature. Also,
see Leamer (1978, ch. 4) for a good introduction to the Bayesian framework in hypothesis testing.
¹³The density can be expressed as the limit of an infinite series. See Johnson and Kotz (1970, p.
191). The approach taken here employs a subroutine for the non-central F distribution function
written by Bremner (1978). Since the density is the derivative of the distribution function, its value
at a given realization of F can be approximated by the change in the distribution function from
F − ε to F + ε divided by 2ε, for ε small. The values reported in the paper employ ε = 0.005.
Other values of ε were considered and yield similar results. For large values of λ (low values of ρ)
the error bound provided by the Bremner program indicates that the computations may not be
very accurate. The seriousness of this problem is mitigated, however, by the fact that these values
receive little, if any, weight in our prior distributions. Nonetheless, the similar results obtained
using an alternate methodology introduced in section 4.2 are reassuring.
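The finite-difference approximation described above is straightforward; a sketch checked against a library density (scipy) in place of Bremner's subroutine:

```python
from scipy import stats

def ncf_pdf_approx(F, dfn, dfd, nc, eps=0.005):
    """Approximate the non-central F density at F by the symmetric
    difference quotient of the distribution function."""
    hi = stats.ncf.cdf(F + eps, dfn, dfd, nc)
    lo = stats.ncf.cdf(F - eps, dfn, dfd, nc)
    return (hi - lo) / (2 * eps)

# Compare with a direct density evaluation at the fig. 1 point
# (N = 20, T = 60, K = 1, so dfd = 39; lambda = 3.7).
approx = ncf_pdf_approx(1.61, 20, 39, 3.7)
exact = stats.ncf.pdf(1.61, 20, 39, 3.7)
print(approx, exact)
```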
Table 1

Likelihood ratios (LR) for a hypothetical test of portfolio efficiency with N = 20 assets and T
time series observations, under two assumptions about T. θ̂_P, the ex post Sharpe measure of
performance for the given portfolio, is assumed to be 0.5 (annualized),ᵃ and the hypothetical
p-value is 0.10. The test statistic is distributed as non-central F with degrees of freedom N and
T − N − 1 and non-centrality parameter λ = Tθ̂_P²(ρ⁻² − 1)/(1 + θ̂_P²), where ρ is the level of
relative efficiency for the portfolio P.ᵇ

            T = 60                     T = 720
ρᵇ       λ      LRᶜ    Powerᵈ       λ       LRᶜ    Powerᵈ

1.00     0.0    1.00   0.10         0.0     1.00   0.10
0.95     0.1    1.02   0.10         1.6     1.32   0.15
0.90     0.3    1.05   0.11         3.5     1.66   0.23
0.85     0.5    1.08   0.11         5.7     1.95   0.33
0.80     0.7    1.12   0.12         8.3     2.10   0.45
0.75     1.0    1.16   0.12        11.5     2.04   0.59
0.70     1.3    1.21   0.13        15.4     1.71   0.73
0.65     1.7    1.28   0.14        20.2     1.16   0.86
0.60     2.2    1.36   0.16        26.3     0.59   0.94
0.55     2.9    1.47   0.18        34.1     0.19   0.99
0.50     3.7    1.60   0.20        44.4     0.03   1.00
0.45     4.9    1.76   0.24        58.3     0.00   1.00
0.40     6.5    1.96   0.29        77.8     0.00   1.00
0.35     8.8    2.17   0.37       106.1     0.00   1.00
0.30    12.5    2.30   0.50       149.7     0.00   1.00
0.25    18.5    2.09   0.68       222.1     0.00   1.00
0.20    29.6    1.13   0.89       354.4     0.00   1.00
0.15    53.6    0.09   1.00       643.4     0.00   1.00
0.10   122.2    0.00   1.00      1466.1     0.00   1.00
0.05   492.4    0.00   1.00      5908.8     0.00   1.00

ᵃThe Sharpe measure, θ_P, is the ratio of expected excess return to standard deviation of return.
ᵇρ equals θ_P/θ_t, where t is the tangency portfolio determined by the 20 assets, P, and the
riskless asset. ρ equals one under the null hypothesis that P is efficient.
ᶜLR equals the likelihood under the alternative value of ρ divided by the likelihood under the
null value, ρ = 1. Annualized θ̂_P fixed at 0.5.
ᵈProbability of rejecting the null hypothesis, ρ = 1, under the alternative value of ρ. Signifi-
cance level equals 0.10.

As above, the hypothetical p-value is 0.10. The power of a test of efficiency at


the 10 percent significance level is also given for each alternative. These tail
probabilities indicate the location of the observed F statistic in the overall
distribution under the alternative.
Given finitely many alternatives, the posterior-odds ratio against efficiency
equals the prior weighted average of the likelihood ratios. For example, with a
uniform prior over values of ρ from 0.40 to 0.95, the odds ratio is (1.96
+ ··· + 1.02)/12 = 1.34 when T = 60 and 1.06 when T = 720. In the former
case, the likelihood ratios are uniformly greater than one over the perceived
range of feasible alternatives. Thus, the decision to accept or reject is not
sensitive to the exact specification of the prior. In contrast, when T = 720, the
ratios are sometimes above and sometimes well below one within the feasible
range. Therefore, the appropriate inference is less clear in this case.
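With a discrete uniform prior, the posterior-odds ratio is just the arithmetic mean of the likelihood ratios over the feasible alternatives. A sketch using the table 1 values for ρ = 0.95, 0.90, ..., 0.40:

```python
# Likelihood ratios from table 1, listed for rho = 0.95 down to 0.40.
lr_60 = [1.02, 1.05, 1.08, 1.12, 1.16, 1.21, 1.28, 1.36, 1.47, 1.60, 1.76, 1.96]
lr_720 = [1.32, 1.66, 1.95, 2.10, 2.04, 1.71, 1.16, 0.59, 0.19, 0.03, 0.00, 0.00]

# Posterior odds against efficiency: the uniform-prior average of the
# likelihood ratios (prior odds on the null itself are even).
odds_60 = sum(lr_60) / len(lr_60)
odds_720 = sum(lr_720) / len(lr_720)
print(round(odds_60, 2), round(odds_720, 2))   # 1.34 1.06
```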
The uniform prior distribution above will be used in several examples in this
paper. The mean value of ρ under the alternative is 0.68 for this prior, and
thus, unconditionally, E(ρ) = (0.5)(1) + (0.5)(0.68) = 0.84. The uniform distri-
bution is chosen for its simplicity as well as its rough correspondence with the
author's own prior beliefs.

4.2. A continuous prior for λ


When a point null hypothesis is to be tested against a continuum of
alternatives, Jeffreys (1961) suggests assigning some prior probability to the
null hypothesis and spreading out the remaining mass according to a continu-
ous density. This approach is adopted here in order to derive an explicit
formula for the posterior-odds ratio. An important computational advantage is
the fact that the final expression does not involve the non-central F density
function. The basic result is stated in:

Theorem 1. Let λ equal zero with positive prior probability π, the remaining
prior mass distributed as Tc times a χ²(N) variate, for some constant c.
Conditional on the non-centrality parameter λ, let F be distributed as non-central
F with degrees of freedom N and T − N − K. Then the posterior-odds ratio
against the null hypothesis, λ = 0, equals (1 − π)/π times the Bayes factor

BF = (1 + Tc)^{−N/2}[(1 + N(T − N − K)⁻¹F)/(1 + N(T − N − K)⁻¹(1 + Tc)⁻¹F)]^{(T−K)/2}. (2)

Proof. See the appendix.

The constant, c, in Theorem 1 is uniquely determined by the median value
of λ under the alternative since

median(λ) = Tc · median(χ²(N)).

If, as in section 3, we treat θ̂_P as given and let λ vary with ρ, then

median(λ) = Tθ̂_P²[median(ρ)⁻² − 1]/(1 + θ̂_P²).

Therefore,

c = θ̂_P²[1 + θ̂_P²]⁻¹[median(ρ)⁻² − 1]/median(χ²(N)). (3)

All else being equal, c increases with θ̂_P² and decreases with the median value
of ρ.
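Equations (2) and (3) are simple to implement; a sketch (function names are mine, and scipy is used only for the χ² median):

```python
from scipy import stats

def c_from_median_rho(theta_hat_sq, median_rho, N):
    """Constant c of eq. (3), chosen so that the prior median of rho
    under the alternative equals median_rho."""
    return (theta_hat_sq * (median_rho**-2 - 1)
            / ((1 + theta_hat_sq) * stats.chi2(N).median()))

def bayes_factor(F, N, T, K, c):
    """Bayes factor against efficiency, eq. (2)."""
    h = N / (T - N - K)
    ratio = (1 + h * F) / (1 + h * F / (1 + T * c))
    return (1 + T * c) ** (-N / 2) * ratio ** ((T - K) / 2)

# Example inputs from the text: N = 20, T = 60, K = 1, monthly Sharpe
# measure 0.144 (about 0.5 annualized), median(rho) = 0.75.
c = c_from_median_rho(0.144**2, 0.75, 20)
print(bayes_factor(1.61, 20, 60, 1, c))
```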

Given values of θ̂_P, c, and N, the chi-square prior for λ in Theorem 1
induces a probability distribution on ρ. Consider the case θ_P = θ̂_P = 0.5 (an-
nualized) with N = 20. If median(ρ) = 0.5, 50 percent of the prior mass is
between ρ = 0.46 and ρ = 0.54. Eighty percent lies between 0.43 and 0.58 and
95 percent between 0.40 and 0.63. With median(ρ) = 0.75, the corresponding
intervals are (0.71, 0.79), (0.68, 0.82), and (0.65, 0.85). When N = 10, the
distributions for ρ are more dispersed, whereas for N = 30, they are more
concentrated about the median value of ρ.
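The induced prior on ρ can be computed by mapping χ²(N) quantiles through λ = Tθ̂_P²(ρ⁻² − 1)/(1 + θ̂_P²); with c set by eq. (3), the T and θ̂_P factors cancel, so only N and median(ρ) matter. A sketch (function name mine):

```python
from scipy import stats

def rho_quantile(q, median_rho, N):
    """q-th quantile of the prior on rho induced by the chi-square prior
    on lambda. Larger chi-square draws mean larger lambda and hence
    smaller rho, so the q-th quantile of rho comes from the (1 - q)-th
    chi-square quantile."""
    chi = stats.chi2(N)
    x = chi.ppf(1 - q)
    scale = (median_rho**-2 - 1) / chi.median()
    return (1 + scale * x) ** -0.5

# Central 50 percent prior interval for rho when median(rho) = 0.5, N = 20.
print(rho_quantile(0.25, 0.5, 20), rho_quantile(0.75, 0.5, 20))
```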

4.3. Behavior of the posterior-odds ratio


The odds ratio in Theorem 1 exhibits some interesting properties. Since the
derivative of BF with respect to F is positive, the ratio is strictly increasing in
F; all else being equal, a larger test statistic implies a lower degree of belief in
the efficiency of P. With a fixed sample size, the odds remain bounded as F
tends toward either extreme:

lim_{F→∞} BF = (1 + Tc)^{(T−N−K)/2} ≡ BF₁ > 1,

and

lim_{F→0} BF = (1 + Tc)^{−N/2} ≡ BF₀ < 1.

Henceforth, we assume that π = 1 − π = ½. Thus the odds against efficiency lie
in the open interval (BF₀, BF₁), which always contains 1.
The odds interval widens with increases in T or c, either of which shifts
prior probability toward larger values of λ under the alternative. This makes
sense; as the power of the test increases, a stronger posterior conviction about
the validity of the null hypothesis becomes feasible. With θ_P = θ̂_P = 0.5 (annual-
ized), N = 20, T = 60, and median(ρ) = 0.75, the odds in favor of efficiency
never exceed 1.6 and the odds against efficiency are always less than 2.6. Thus,
no matter how small the p-value, the posterior probability that the index is
inefficient cannot be greater than 2.6/(1 + 2.6) = 0.72. If median(ρ) = 0.5, the
maximum-odds ratio for (against) efficiency is 5.8 (30.4). For larger values of
T, the potential odds against efficiency are much higher, whereas the potential
odds in favor of the null hypothesis increase more slowly.
With F fixed, the odds against efficiency approach zero as T → ∞.¹⁴ Thus,
in large samples, a correspondingly large test statistic is needed to provide
evidence against the null hypothesis. This is another manifestation of the
Lindley paradox referred to earlier. Now suppose we let T → ∞, but without
F fixed. This means that F is being viewed as a random variable so that BF,

¹⁴Proofs of this and other facts stated in this paragraph are fairly tedious and available from the
author.
Table 2

P-value corresponding to a posterior-odds ratio of 1.0, for different time series sample sizes, T, in
a test of portfolio efficiency with N = 20 assets. θ̂_P, the ex post Sharpe measure of performance for
the given portfolio, is assumed to be 0.5 (annualized).ᵃ The test statistic is distributed as
non-central F with degrees of freedom N and T − N − 1 and non-centrality parameter λ =
Tθ̂_P²(ρ⁻² − 1)/(1 + θ̂_P²), where ρ is the level of relative efficiency for portfolio P.ᵇ Prior odds
against efficiency are even and the prior density for ρ under the alternative is continuous with the
indicated median value.ᶜ

        median(ρ) = 0.5               median(ρ) = 0.75
T       λᵈ     p-valueᵉ  Powerᶠ      λᵈ     p-valueᵉ  Powerᶠ

60       3.7   0.40      0.57         1.0   0.46      0.51
120      7.4   0.29      0.65         1.9   0.42      0.52
180     11.1   0.22      0.73         2.9   0.39      0.54
240     14.8   0.17      0.79         3.8   0.36      0.57
300     18.5   0.13      0.85         4.8   0.33      0.59
360     22.2   0.10      0.89         5.8   0.31      0.62
420     25.9   0.08      0.92         6.7   0.29      0.64
480     29.6   0.06      0.94         7.7   0.27      0.66
540     33.3   0.05      0.96         8.6   0.25      0.68
600     37.0   0.04      0.97         9.6   0.24      0.70
660     40.7   0.03      0.98        10.6   0.22      0.72
720     44.4   0.03      0.99        11.5   0.20      0.74

ᵃThe Sharpe measure, θ_P, is the ratio of expected excess return to standard deviation of return.
ᵇρ equals θ_P/θ_t, where t is the tangency portfolio determined by the 20 assets, P, and the
riskless asset. ρ equals one under the null hypothesis that P is efficient.
ᶜThe density for ρ under the alternative is induced by the chi-square prior of Theorem 1, with
annualized θ̂_P equal to 0.5.
ᵈMedian value of λ under the alternative.
ᵉThe p-value is the probability under the null hypothesis that the test statistic exceeds its
observed level.
ᶠProbability of rejecting the null hypothesis, ρ = 1, under the alternative corresponding to the
median value of λ. The significance level of the test is the p-value given in the same row.

in (2), is a random variable as well. In this case, it can be shown that the odds
against efficiency approach infinity (almost surely) if the null hypothesis is
false and approach zero (in probability) if it is true.
To gain further insight into the relation between traditional and Bayesian
interpretations of the data, we examine how the significance level must change,
for various sample sizes, to maintain constant odds against efficiency. This
information is provided in table 2 for an odds ratio of 1.0 and two possible
prior distributions for ρ. The critical p-value for even odds declines as T
increases, from 0.40 to 0.03 with median(ρ) = 0.5, and from 0.46 to 0.20 with
median(ρ) = 0.75. This suggests using a significance level larger than the
conventional 0.05 value if T is small or if there is a prior belief that potential
deviations from the null hypothesis are not very large.¹⁵
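The critical p-values in table 2 can be reproduced by solving BF(F) = target for F (the equation is linear in F once eq. (2) is rearranged) and then evaluating the null tail probability at the solution. A sketch (function names mine):

```python
from scipy import stats

def critical_p_value(target_odds, N, T, K, c):
    """p-value at which the posterior odds against efficiency, eq. (2)
    with even prior odds, equal target_odds."""
    h = N / (T - N - K)
    # (1 + Tc)**(-N/2) * r**((T - K)/2) = target_odds is solved for the
    # density ratio r, then the linear relation is inverted for F.
    r = (target_odds * (1 + T * c) ** (N / 2)) ** (2 / (T - K))
    F = (r - 1) / (h * (1 - r / (1 + T * c)))
    return stats.f.sf(F, N, T - N - K)

# c implied by median(rho) = 0.5 with N = 20 and a 0.144 monthly
# Sharpe measure, as in eq. (3).
theta_hat_sq = 0.144**2
c = theta_hat_sq * (0.5**-2 - 1) / ((1 + theta_hat_sq) * stats.chi2(20).median())
print(critical_p_value(1.0, 20, 60, 1, c))   # roughly 0.40, as in table 2
```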

¹⁵Brown and Klein (1986) also emphasize the need for a relatively large significance level when
testing a coefficient restriction on a simple regression model with a small sample.

Table 3

P-value corresponding to a posterior-odds ratio of 1.5, for different time series sample sizes, T, in
a test of portfolio efficiency with N = 20 assets. θ̂_P, the ex post Sharpe measure of performance for
the given portfolio, is assumed to be 0.5 (annualized).ᵃ The test statistic is distributed as
non-central F with degrees of freedom N and T − N − 1 and non-centrality parameter λ =
Tθ̂_P²(ρ⁻² − 1)/(1 + θ̂_P²), where ρ is the level of relative efficiency for portfolio P.ᵇ Prior odds
against efficiency are even and the prior density for ρ under the alternative is continuous with the
indicated median value.ᶜ

        median(ρ) = 0.5               median(ρ) = 0.75
T       λᵈ     p-valueᵉ  Powerᶠ      λᵈ     p-valueᵉ  Powerᶠ

60       3.7   0.13      0.24         1.0   0.00      0.00
120      7.4   0.17      0.49         1.9   0.06      0.10
180     11.1   0.14      0.62         2.9   0.11      0.22
240     14.8   0.12      0.72         3.8   0.14      0.30
300     18.5   0.09      0.79         4.8   0.16      0.37
360     22.2   0.07      0.85         5.8   0.16      0.43
420     25.9   0.06      0.89         6.7   0.16      0.48
480     29.6   0.05      0.93         7.7   0.16      0.52
540     33.3   0.04      0.95         8.6   0.15      0.56
600     37.0   0.03      0.97         9.6   0.15      0.59
660     40.7   0.02      0.98        10.6   0.14      0.62
720     44.4   0.02      0.99        11.5   0.13      0.65

ᵃThe Sharpe measure, θ_P, is the ratio of expected excess return to standard deviation of return.
ᵇρ equals θ_P/θ_t, where t is the tangency portfolio determined by the 20 assets, P, and the
riskless asset. ρ equals one under the null hypothesis that P is efficient.
ᶜThe density for ρ under the alternative is induced by the chi-square prior of Theorem 1, with
annualized θ̂_P equal to 0.5.
ᵈMedian value of λ under the alternative.
ᵉThe p-value is the probability under the null hypothesis that the test statistic exceeds its
observed level.
ᶠProbability of rejecting the null hypothesis, ρ = 1, under the alternative corresponding to the
median value of λ. The significance level of the test is the p-value given in the same row.

In situations where a type I error is considered more serious than a type II
error, an odds ratio slightly greater than one may not constitute sufficient
evidence to reject the null hypothesis. This issue is explored in table 3 with the
critical-odds ratio equal to 1.5. As expected, the critical p-values are now
smaller, but they no longer decline monotonically with increased sample size.
In fact, with T = 60 and median(ρ) = 0.75, a p-value near zero is needed to
obtain an odds ratio of 1.5. The column labeled 'Power' in tables 2 and 3 gives
the upper tail probability associated with the critical F value under the
median alternative, i.e., when λ equals its median value. This information
reveals that the inverse relation between p-value and T is established only
after the critical F value moves into the central portion of the alternative
distribution, consistent with observations in section 3.
5. Empirical examples

Two empirical results from the literature are reevaluated in this section from
a Bayesian perspective. The first study uses one long test period; the second
aggregates test results over several subperiods. The latter example is also
interesting for the way in which the Bayesian interpretation of the data differs
from a classical view.

5.1. A large-sample example

Gibbons, Ross and Shanken (1986) reject efficiency of the CRSP value-
weighted index over the period 1926-1982 using N = 12 industry portfolios.
The F statistic with degrees of freedom 12 and 671 (T = 684) is 2.13 and the
associated p-value is 0.013. The annualized estimate of θ_p is 0.38. The
information presented in table 4 indicates that the rejection of efficiency is
warranted: the likelihood ratios are well above one for most reasonable levels
of relative efficiency. With a uniform prior over alternatives from 0.40 to 0.95,
for example, the odds ratio against efficiency is 5.49. The associated posterior
probability that the index is efficient is 1/(1 + 5.49) = 0.15.
Table 4 also presents posterior-odds ratios based on Theorem 1 for various
(median) values of ρ. The series of odds ratios is essentially a smoothed
version of the likelihood-ratio series, since the chi-square prior emphasizes
values of ρ in a neighborhood of the median. Thus, the maximum odds ratio is
always lower than the maximum likelihood ratio. The two series convey
similar impressions about efficiency of the index. For example, the average
odds ratio over ρ's from 0.40 to 0.95 is 5.30. This average is the posterior-odds
ratio for a prior density that is a uniform mixture of chi-square priors. The
posterior probability for efficiency is 0.16.
Thus far, our prior distributions for θ_p have been concentrated on the
ex post values. Suppose now that (annualized) θ_p equals 0.28, 0.38, or 0.48,
with prior probabilities of, say, 0.25, 0.50, and 0.25, respectively. Since the
non-centrality parameter, λ, is directly related to θ_p and inversely related to ρ,
the likelihood function for the Gibbons, Ross and Shanken study peaks at a
higher value of ρ when θ_p = 0.48 and at a lower value when θ_p = 0.28. The
corresponding posterior-odds ratios for our standard uniform prior are 4.93
and 5.55, respectively.¹⁶ Therefore, the final odds ratio, taking all uncertainty
into account, is

(0.25)(5.55) + (0.50)(5.49) + (0.25)(4.93) = 5.37.

The odds have changed only slightly.

¹⁶In general, the conditional prior for ρ, given θ_p, could vary with the level of θ_p.
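Under Theorem 1, the posterior odds can be computed directly from the F statistic via the central F density (see the appendix). The sketch below is an illustration, not the paper's code: it assumes the scale constant c is pinned down by matching the median of the Tc·χ²(N) prior for λ to the λ implied by the median ρ, and it should land near the table 4 odds ratio (8.76 at median ρ = 0.60):

```python
from scipy.stats import f, chi2

def posterior_odds(F_stat, N, T, theta_p_annual, rho_med, K=1):
    """Posterior odds against efficiency for a chi-square prior on lambda.

    Assumes lambda ~ Tc * chi2(N), with c chosen so the median of the induced
    rho distribution equals rho_med (one plausible way to pin down c; a sketch).
    """
    dfd = T - N - K
    th2 = theta_p_annual**2 / 12.0                       # monthly squared Sharpe measure
    lam_med = T * th2 * (rho_med**-2 - 1.0) / (1.0 + th2)
    c = lam_med / (T * chi2.median(N))                   # match medians of the two lambda scales
    scale = 1.0 + T * c
    # Odds = (1/scale) * f0(F/scale) / f0(F), f0 the central F density (appendix, eqs. A.1-A.2)
    return f.pdf(F_stat / scale, N, dfd) / (scale * f.pdf(F_stat, N, dfd))

odds = posterior_odds(2.13, N=12, T=684, theta_p_annual=0.38, rho_med=0.60)
```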

Table 4
Likelihood ratios and posterior-odds ratios against efficiency of the CRSP value-weighted index,
based on a study by Gibbons, Ross and Shanken (1986) over the period 1926-1982. N = 12
industry portfolios and T = 684 months. The F statistic with degrees of freedom 12 and 671 is
2.13 with p-value 0.013. ρ is the level of relative efficiency for the index.^a

Relative        Posterior-      Likelihood
efficiency^a    odds ratio^b    ratio^c        Power^d

1.00             1.00            1.00          0.01
0.95             1.57            1.52          0.02
0.90             2.38            2.26          0.04
0.85             3.48            3.28          0.06
0.80             4.85            4.61          0.10
0.75             6.36            6.26          0.15
0.70             7.76            8.09          0.23
0.65             8.68            9.72          0.34
0.60             8.76           10.47          0.49
0.55             7.82            9.51          0.67
0.50             6.03            6.59          0.83
0.45             3.91            2.95          0.94
0.40             2.05            0.66          0.99
0.35             0.83            0.09          1.00
0.30             0.24            0.04          1.00
0.25             0.05            0.00          1.00
0.20             0.01            0.00          1.00
0.15             0.00            0.00          1.00
0.10             0.00            0.00          1.00
0.05             0.00            0.00          1.00

^a ρ equals θ_p/θ_t, where θ_p (θ_t) is the ratio of expected excess return to standard deviation of
return for the index (tangency portfolio). ρ equals one if the index is efficient.
^b Odds-ratio computation assumes even prior odds and a continuous prior density for ρ under
the alternative. The density is induced by the chi-square prior of Theorem 1, with annualized θ_p
equal to 0.38, the ex post Sharpe measure for the period. In this case, the value of ρ in the first
column is the median.
^c Ratio equals the likelihood under the alternative value of ρ divided by the likelihood under the
null value, ρ = 1. Annualized θ_p fixed at 0.38.
^d Probability of rejecting the null hypothesis, ρ = 1, under the alternative value of ρ. Signifi-
cance level equals 0.013.

5.2. Aggregation over subperiods

MacKinlay (1987) fails to reject efficiency of the CRSP equal-weighted
index at the 0.05 level, over six five-year subperiods from 1954 to 1983. An
aggregate p-value of 0.082 is reported in his study, which is based on N = 20
size portfolios. Likelihood ratios for each subperiod as well as the overall
period are given in table 5, for an annualized θ_p of 0.54. This value is derived
from the mean excess return and an average standard deviation over the
subperiods. The overall likelihood ratio for each alternative is the product of
Table 5
Likelihood ratios against efficiency of the CRSP equal-weighted stock index based on a study by
MacKinlay (1987). Tests are conducted over six five-year subperiods from 1954 to 1983 using
monthly data (T = 60) on N = 20 size portfolios. Aggregate p-value equals 0.082.^a θ_p, the ratio of
expected excess return to standard deviation of return on the index, is fixed at 0.54 (annualized).
ρ is the level of relative efficiency for the index.^b

                          Subperiod
Relative      1954    1959    1964    1969    1974    1979    1954
efficiency     -58     -63     -68     -73     -78     -83     -83^c

1.00          1.00    1.00    1.00    1.00    1.00    1.00    1.00
0.95          1.01    1.03    1.02    1.00    1.03    0.98    1.07
0.90          1.02    1.06    1.04    1.00    1.07    0.95    1.15
0.85          1.04    1.09    1.07    1.00    1.12    0.92    1.25
0.80          1.05    1.14    1.10    1.00    1.17    0.89    1.37
0.75          1.07    1.19    1.14    0.99    1.24    0.85    1.52
0.70          1.09    1.25    1.19    0.99    1.33    0.80    1.70
0.65          1.11    1.33    1.25    0.98    1.44    0.74    1.92
0.60          1.14    1.43    1.32    0.96    1.57    0.68    2.19
0.55          1.16    1.55    1.41    0.93    1.75    0.60    2.49
0.50          1.19    1.70    1.51    0.89    1.99    0.51    2.77
0.45          1.21    1.89    1.64    0.83    2.30    0.41    2.89
0.40          1.21    2.11    1.78    0.73    2.71    0.29    2.61
0.35          1.17    2.32    1.91    0.59    3.21    0.17    1.68
0.30          1.02    2.40    1.93    0.38    3.47    0.08    0.50
0.25          0.69    2.01    1.61    0.16    3.57    0.02    0.02
0.20          0.24    0.88    0.75    0.02    1.99    0.00    0.00
0.15          0.01    0.04    0.05    0.00    0.14    0.00    0.00
0.10          0.00    0.00    0.00    0.00    0.00    0.00    0.00
0.05          0.00    0.00    0.00    0.00    0.00    0.00    0.00

^a The F statistics with degrees of freedom 20 and 39 for the six subperiods are 1.26, 1.63, 1.53,
1.00, 1.82, and 0.60, with p-values 0.26, 0.09, 0.13, 0.48, 0.05, and 0.89, respectively. The
maximum-likelihood estimates, θ̂_t², are 0.21, 0.021, 0.14, 0.026, 0.013, and 0.063.
^b ρ equals θ_p/θ_t, where t is the tangency portfolio determined by the 20 size portfolios, the
index, and the riskless asset. ρ equals one if the index is efficient.
^c Product of subperiod values.

the subperiod values and assumes that (i) the test statistics are independent
and (ii) the relative efficiency of the index is the same each period.
Again, the likelihood ratios are uniformly greater than one over the more
reasonable values of ρ. Therefore, the evidence favors the alternative despite
the failure to reject at the 0.05 level. Consistent with the information in table 2
regarding the even-odds p-value (T = 60), four of the six subperiod results tilt
the inference toward inefficiency. Interestingly, in the 1969-1973 and
1979-1983 subperiods, the likelihood ratios are less than one (apart from
rounding) for all alternatives.
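Under assumptions (i) and (ii), the overall likelihood ratio is a simple product across subperiods. A sketch recomputing the ρ = 0.50 row of table 5 from the reported subperiod F statistics (the non-centrality convention follows the table 2 notes and is an assumption here):

```python
from scipy.stats import f, ncf

N, T = 20, 60
dfd = T - N - 1                                    # 39 denominator degrees of freedom
F_stats = [1.26, 1.63, 1.53, 1.00, 1.82, 0.60]     # six subperiods, from the table 5 notes

th2 = 0.54**2 / 12.0                               # monthly squared Sharpe measure (0.54 annualized)
rho = 0.50
lam = T * th2 * (rho**-2 - 1.0) / (1.0 + th2)      # common non-centrality each subperiod

# Likelihood ratio per subperiod: non-central density over central density at the observed F
lrs = [ncf.pdf(x, N, dfd, lam) / f.pdf(x, N, dfd) for x in F_stats]
overall = 1.0
for lr in lrs:
    overall *= lr                                  # independence across subperiods assumed
```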
Given our standard uniform prior, the odds ratio against efficiency is 1.91
and the posterior probability for efficiency is 0.34. Similar results are obtained

Table 6
Posterior-odds ratios against efficiency of the CRSP equal-weighted stock index based on a study
by MacKinlay (1987). Tests are conducted over six five-year subperiods from 1954 to 1983 using
monthly data (T = 60) on N = 20 size portfolios. Aggregate p-value equals 0.082.^a θ_p, the ratio of
expected excess return to standard deviation of return on the index, is fixed at 0.54 (annualized).
ρ is the level of relative efficiency for the index.^b

Median Subperiod
relative 1954 1959 1964 1969 1974 1979 1954
efficiency -58 -63 -68 -73 -78 -83 -83d

1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00


0.95 1.01 1.03 1.02 1.00 1.03 0.98 1.07
0.90 1.02 1.06 1.05 1.00 1.07 0.95 1.15
0.85 1.04 1.10 1.07 1.00 1.12 0.92 1.26
0.80 1.05 1.14 1.11 1.00 1.18 0.89 1.38
0.75 1.07 1.20 1.15 0.99 1.25 0.84 1.54
0.70 1.09 1.26 1.20 0.98 1.34 0.80 1.72
0.65 1.11 1.34 1.26 0.97 1.45 0.74 1.95
0.60 1.14 1.44 1.33 0.95 1.60 0.67 2.22
0.55 1.16 1.56 1.41 0.93 1.78 0.60 2.51
0.50 1.18 1.71 1.52 0.88 2.02 0.51 2.77
0.45 1.20 1.89 1.64 0.82 2.33 0.41 2.86
0.40 1.19 2.08 1.76 0.72 2.71 0.30 2.54
0.35 1.13 2.25 1.85 0.58 3.14 0.19 1.63
0.30 0.98 2.25 1.82 0.39 3.46 0.10 0.53
0.25 0.68 1.87 1.51 0.20 3.24 0.03 0.04
0.20 0.30 0.98 0.81 0.05 1.99 0.01 0.00
0.15 0.05 0.18 0.16 0.00 0.44 0.00 0.00
0.10 0.00 0.00 0.00 0.00 0.01 0.00 0.00
0.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00

^a The F statistics with degrees of freedom 20 and 39 for the six subperiods are 1.26, 1.63, 1.53,
1.00, 1.82, and 0.60, with p-values 0.26, 0.09, 0.13, 0.48, 0.05, and 0.89, respectively. The
maximum-likelihood estimates, θ̂_t², are 0.21, 0.021, 0.14, 0.026, 0.013, and 0.063.
^b ρ equals θ_p/θ_t, where t is the tangency portfolio determined by the 20 size portfolios, the
index, and the riskless asset. ρ equals one if the index is efficient.
^c Odds-ratio computation assumes even prior odds and a continuous density for ρ under the
alternative. The density is induced by the chi-square prior of Theorem 1, with annualized θ_p equal
to 0.54. The median in column 1 is based on this density.
^d Product of subperiod values.

using the posterior-odds ratios given in table 6.¹⁷ If we let θ_p take on the (annualized) values
0.44, 0.54, or 0.64 with prior probabilities as in section 5.1, then the posterior
probability for efficiency is 0.35.

¹⁷Multiplication of the subperiod odds ratios to obtain an overall posterior-odds ratio is not,
strictly speaking, justified. In principle, we should use the posterior distribution of λ derived from
the first-subperiod test as the prior distribution for the second-subperiod test, etc. The problem is
that the posterior density need not be a chi-square density and thus cannot be used as a prior
density in Theorem 1. However, since the subperiod odds ratios are basically smoothed versions of
the subperiod likelihood ratios, the product of the odds ratios can still be viewed as a smoothed
version of the overall likelihood-ratio series.
5.3. A test of approximate efficiency

As discussed in section 4.2, if a portfolio is inefficient, then the odds against
efficiency become infinite as T → ∞. This is true even if the deviation from
efficiency is negligible, raising the question: does it make sense to test a
sharp hypothesis such as exact efficiency?¹⁸ Shanken (1987) and Kandel and
Stambaugh (1987) show that a null hypothesis of approximate efficiency arises
naturally when the efficiency of an unobservable portfolio is in question and a
proxy for the portfolio is available. For example, the Sharpe-Lintner version
of the capital asset pricing model (CAPM) implies that the market portfolio of
all assets is efficient; furthermore, given a subset of assets, the portfolio, m,
that is maximally correlated with the market portfolio is efficient for the
subset.¹⁹ Since the market portfolio is not observable, the exact composition of
m is unknown.
Suppose the correlation between m and a given proxy, p, is assumed to be
at least, say, 0.90. If the CAPM is true, then this must also be a lower bound
on the correlation between the proxy and the (subset) tangency portfolio, i.e.,
ρ ≥ 0.90. Testing this composite hypothesis requires a prior weighting for ρ
under the null hypothesis as well as the alternative. For example, assigning
probability 1/6 to each of the null values {1.00, 0.95, 0.90} and probability 1/20 to
each of the ten alternatives {0.85, 0.80, ..., 0.40} gives even prior odds for
approximate efficiency. Combining this prior with the likelihood ratios in table
4 (Gibbons, Ross and Shanken) yields an odds ratio of

3.90 = {(3.28 + 4.61 + ··· + 0.66)/20}/{(1.00 + 1.52 + 2.26)/6}.

The posterior probability that ρ ≥ 0.90 is 0.20.²⁰ The corresponding probabil-
ity is 0.34 for the MacKinlay study.²¹
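The composite-hypothesis odds ratio is just a ratio of prior-weighted average likelihoods. A quick check using the likelihood ratios transcribed from table 4:

```python
# Likelihood ratios from table 4
null_lrs = [1.00, 1.52, 2.26]                        # rho in {1.00, 0.95, 0.90}, prior 1/6 each
alt_lrs  = [3.28, 4.61, 6.26, 8.09, 9.72, 10.47,
            9.51, 6.59, 2.95, 0.66]                  # rho in {0.85, ..., 0.40}, prior 1/20 each

# Even prior odds: each side carries total prior probability 1/2,
# so the posterior odds reduce to a ratio of prior-weighted likelihoods
odds = (sum(alt_lrs) / 20.0) / (sum(null_lrs) / 6.0)   # ~3.90
post_prob_null = 1.0 / (1.0 + odds)                    # ~0.20
```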

6. Conclusions
The examples presented in this paper demonstrate that much can be learned
by inspecting the likelihood function for the portfolio efficiency problem, even
in the absence of a fully specified prior distribution. Although the Bayesian

¹⁸The practical significance of rejecting a sharp hypothesis with a large sample of data is
questioned by Berkson (1938).
¹⁹See Breeden (1979, fn. 8).
²⁰Shanken (1987) suggests a frequentist procedure for testing approximate efficiency conditional
on an assumption about the value of θ_p. The p-value is the value of the power function, for the
usual exact test, evaluated at the lower bound for ρ. At ρ = 0.90, the p-value reported in table 4 is
0.04, still less than the traditional 0.05 level.
²¹It may seem odd that this probability does not exceed the probability for exact efficiency
computed earlier. The apparent contradiction is resolved by noting that the two posterior
probabilities are computed under different priors and are therefore not comparable.

interpretation has been emphasized, inference through the likelihood function
is of interest more generally. The eminent statistician R.A. Fisher, a critic of
subjective probability, advocated inspection of this function as a means of
obtaining information about the parameters of a model.²² Furthermore,
Birnbaum (1962) has provided an elegant axiomatic basis for the likelihood
principle that does not appeal to the notion of subjective probability.²³
Two approaches to the efficiency problem have been developed here: a
direct analysis of the likelihood function through numerical approximations
and a computationally simpler approach for a particular class of prior distri-
butions. Our applications suggest that the loss of information inherent in
this simplification is outweighed by the computational advantages. Exact
odds against efficiency can be obtained for any prior density that is a
weighted average of basic chi-square densities; the odds are computed directly
from the classical F statistic used by Gibbons, Ross and Shanken (1986) and
MacKinlay (1987).²⁴
One interesting observation concerns the interpretation of a given p-value
for experiments differing only in sample size. Lindley (1957) argues that, in
sufficiently large samples, a fixed p-value, however small, will constitute strong
evidence in favor of the (point) null hypothesis. This is often motivated, in
classical terms, by suggesting that some of the increased power or reduction in
type II error that accompanies an increase in sample size should be used to
reduce the type I error as well, i.e., the significance level of the test should be
lowered. Yet we have encountered reasonable circumstances in which a
p-value of, say, 0.10, constitutes stronger evidence against the null hypothesis
in the larger sample than in the smaller sample.²⁵
Future research might re-examine the efficiency question in the context of a
specific investment decision problem. It would also be desirable to extend the
present framework to deal with more general asset pricing restrictions, for

²²See Fisher (1959, pp. 68-75).
²³Inference through the likelihood function, as discussed here, should not be confused with
classical likelihood-ratio analysis, which involves a test of significance and thus, apart from power
considerations, ignores the distribution of the test statistic under the alternative.
²⁴A more ambitious and much more complicated approach to this problem would start with a
joint prior distribution for all parameters in the multivariate linear regression of returns (see
section 2). The joint posterior distribution for these parameters, given observed returns, would be
a function of the usual sufficient statistics, α̂, β̂, and Σ̂, as would the implied posterior
distribution for λ. Whether the latter distribution must depend on the sufficient statistics solely
through the F statistic is an open question. If not, there may be some loss of information in
conditioning the posterior analysis on the F statistic as we have.
²⁵This could occur in any application involving an F test and is not limited to the portfolio-
efficiency context. Note that the Hotelling T² test and the equivalent F test are multivariate
generalizations of the simple two-sided test of μ = 0 considered by Lindley. Lindley also assumes
that the variance of the underlying normal population is known.
example, efficiency when a riskless asset is not available.²⁶ The likelihood
perspective could prove useful in event-study empirical work as well.

Appendix: Proof of Theorem 1

Let f(F|λ) be the non-central F density with degrees of freedom N and
T − N − K and non-centrality parameter λ. Let g(λ) be the density for λ
under the alternative. By Bayes's rule, the posterior-odds ratio against efficiency
is just the ratio of weighted likelihoods under the alternative and the null:

    OR = ∫₀^∞ f(F|λ)g(λ)dλ / f(F|0).                             (A.1)

Since f(F|λ)g(λ) is the joint density of F and λ under the alternative, the
integral over λ, in (A.1), is the marginal density of F (under the alternative).
But F is conditionally (on λ) distributed as non-central F and λ is distrib-
uted as cT times a χ²(N) variate. It follows from the lemma in Shanken
(1985, p. 347) that F is (marginally) distributed as 1 + Tc times a central F
variate with degrees of freedom N and T − N − K. Therefore, using the
standard formula for the density of a transformed variable, we have

    ∫₀^∞ f(F|λ)g(λ)dλ = (1 + Tc)⁻¹ f(F/(1 + Tc)|0).              (A.2)

In principle, this integral is computed over the open interval (0, ∞), although
inclusion of the end point does not affect the result. The conclusion of
Theorem 1 now follows directly from (A.1), (A.2), and the fact that f(F|0) is
the central F density with degrees of freedom N and T − N − K.
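The mixing step behind (A.2) — if λ is cT times a χ²(N) variate, then F is marginally (1 + Tc) times a central F variate — can be checked by simulation with illustrative parameter values (not taken from the paper):

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
N, dfd, Tc = 12, 60, 1.0                   # illustrative degrees of freedom and Tc
n = 20000

lam = Tc * rng.chisquare(N, size=n)        # lambda ~ Tc * chi2(N)
num = rng.noncentral_chisquare(N, lam)     # non-central chi2 numerator, one draw per lambda
den = rng.chisquare(dfd, size=n)
F_draws = (num / N) / (den / dfd)          # conditionally non-central F given lambda

# Marginally, F_draws should behave like (1 + Tc) times a central F(N, dfd) variate
theory_mean = (1.0 + Tc) * dfd / (dfd - 2.0)
frac_below_median = np.mean(F_draws < (1.0 + Tc) * f.median(N, dfd))
```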

References
Amsler, C. and P. Schmidt, 1985, A Monte Carlo investigation of the accuracy of multivariate
CAPM tests, Journal of Financial Economics 14, 359-376.
Berger, J., 1985, Statistical decision theory and Bayesian analysis (Springer-Verlag, New York).
Berkson, J., 1938, Some difficulties of interpretation encountered in the application of the
chi-square test, Journal of the American Statistical Association 33, 526-536.
Birnbaum, A., 1962, On the foundations of statistical inference, Journal of the American
Statistical Association 57, 269-306.
Bremner, J., 1978, Mixtures of beta distributions: Algorithm AS 123, Applied Statistics 27,
104-109.

²⁶A first cut at the zero-beta problem might employ the riskless-asset framework with the
zero-beta rate playing the role of the riskless rate. The sensitivity of the odds ratio could be
explored in a neighborhood of the T-bill rate or an estimate of the zero-beta rate.

Brown, S. and R. Klein, 1986, Model selection in the federal courts: An application of the
posterior odds ratio criterion, in: P. Goel and A. Zellner, eds., Bayesian inference and decision
techniques (Elsevier Science Publishers, Amsterdam).
Brown, S. and M. Weinstein, 1983, A new approach to testing asset pricing models: The bilinear
paradigm, Journal of Finance 38, 711-743.
Chen, N., 1983, Some empirical tests of the theory of arbitrage pricing, Journal of Finance 38,
1393-1414.
DeGroot, M., 1970, Optimal statistical decisions (McGraw-Hill, New York).
Fisher, R., 1959, Statistical methods and statistical inference (Oliver and Boyd, London).
Gibbons, M., 1982, Multivariate tests of financial models: A new approach, Journal of Financial
Economics 10, 3-27.
Gibbons, M., S. Ross and J. Shanken, 1986, A test of the efficiency of a given portfolio, Research
paper no. 853 (Graduate School of Business, Stanford University, Stanford, CA).
Jeffreys, H., 1961, Theory of probability (Oxford University Press, London).
Jobson, J.D. and R. Korkie, 1982, Potential performance and tests of portfolio efficiency, Journal
of Financial Economics 10, 433-466.
Johnson, N. and S. Kotz, 1970, Continuous univariate distributions, Vol. 2 (Wiley, New York).
Kandel, S., 1984, The likelihood ratio test statistic of mean-variance efficiency without a riskless
asset, Journal of Financial Economics 13, 575-592.
Kandel, S. and R. Stambaugh, 1987, On correlations and the sensitivity of inferences about
mean-variance efficiency, Journal of Financial Economics 18, 61-90.
Leamer, E., 1978, Specification searches (Wiley, New York).
Lindley, D., 1957, A statistical paradox, Biometrika 44, 187-192.
MacKinlay, C., 1987, On multivariate tests of the CAPM, Journal of Financial Economics 18,
341-371.
Mood, A., F. Graybill and D. Boes, 1974, Introduction to the theory of statistics (McGraw-Hill,
New York).
Roll, R., 1985, A note on the geometry of Shanken's CSR T² test for mean/variance efficiency,
Journal of Financial Economics 14, 349-357.
Savage, L., 1954, The foundations of statistics (Wiley, New York).
Savage, L., 1962, The foundations of statistical inference (Wiley, New York).
Shanken, J., 1985, Multivariate tests of the zero-beta CAPM, Journal of Financial Economics 14,
327-348.
Shanken, J., 1986, Testing portfolio efficiency when the zero-beta rate is unknown: A note, Journal
of Finance 41, 269-276.
Shanken, J., 1987, Multivariate proxies and asset pricing relations: Living with the Roll critique,
Journal of Financial Economics 18, 91-110.
Stambaugh, R., 1982, On the exclusion of assets from tests of the two-parameter model: A
sensitivity analysis, Journal of Financial Economics 10, 237-268.
Zellner, A., 1984, Basic issues in econometrics (University of Chicago Press, Chicago, IL).
