Here's how to choose small portfolios of projects without using invalid statistical measures.
Technology-based projects often face large uncertainties about the cost and timescale of the work and the timing and magnitude of the benefits that will flow from them (1). At their outset, and often for much of their lifetime, such projects face a range of outcomes all of which are plausible, although some may be judged to be more likely than others. Often there is the possibility that the project will be terminated before making any money. In deciding whether to pursue such projects, or in choosing between them, managers may be called upon to place a value on each one and compare those values. When projects are in their early stages, financial information may be unavailable or unreliable, so it is then usual to employ a scoring system in which non-financial factors such as market growth rates or levels of competition are included to give a more broadly-based evaluation (2). However, as projects mature, scoring methods must eventually give way to financial measures even though considerable uncertainty may remain. This is the situation that our current work addresses.

His doctorate is in informatics from the École des Mines de Paris. fhhunt@glam.ac.uk

David Probert heads the Centre for Technology Management. He pursued an industrial career with Marks and Spencer and Philips for 18 years before returning to Cambridge in 1991. He joined the Engineering Department as Royal Academy of Engineering/Lucas Industries Research Fellow to develop a practical approach to the issues of make-or-buy and vertical integration in the manufacturing industry, which has been widely applied and disseminated. Now Reader in Technology Management, his current research interests include technology and innovation strategy, technology management processes, technology valuation, technology intelligence and software sourcing.
drp@eng.cam.ac.uk

The work described in their paper has been carried out at the EPSRC Innovative Manufacturing Research Centre, Institute for Manufacturing, Cambridge University Engineering Department.
We are concerned in this paper primarily with projects, such as product or technology developments, in which the uncertainties are mainly internal (relating, for example, to the organization's ability to master and exploit the opportunity) and so cannot generally be hedged by taking options on uncorrelated investments. The particular issues addressed here are:

1. How best to characterize risk and reward when considering only a very small number of projects, where statistical concepts such as average, probability or variance are of dubious value. (Broadly speaking, we define small as fewer than 10 projects. With fewer than 10, one cannot ignore the individual uncertainties in the projects.)

2. How to compare and contrast the values of a small number of projects with different levels of risk and return.

3. How to select an optimum small portfolio of risky projects on a rational basis, without resort to a subjective balancing of risk and reward as is assumed when using techniques such as the classic risk-reward diagram (2–5).

Our approach to these issues is simple and logical, but it does require a rethinking of what risk and return really mean for single projects and small portfolios. Most methods for valuing and selecting portfolios of risky projects, including those based on financial option techniques (6), assume that the number of projects involved is large. The assumption is almost always implicit but is betrayed by the use of statistical ideas such as probability, expectation and variance. Such concepts give insight only when dealing with large numbers of cases, since only then can one be confident that, for example, the sample average will be close to the expected value. However, these concepts are so familiar that it is all too easy to fall back on them even when their application is not really theoretically justified. It requires an act of will to think in other terms.
In practice, portfolios of projects are rarely large enough for statistical concepts to be applicable; to use them can lead to serious misunderstandings as well as unnecessary complexity, as we show below. The value that can be assigned to a proposed or partially-completed project is not the same as the value of a tradable asset such as a used car or an equity stock. Tradable assets have a current value that is real in the sense that it can be realized in the marketplace: immediately in the case of equity stocks, often less quickly for physical assets. Partially-completed projects, however, cannot generally be traded, although they may generate some saleable assets such as patents. Their worth lies in the benefits they may deliver at some future time to the organization in which they are rooted. In valuing such a project one is not making an estimate of something that
exists now but rather attempting to predict a future situation. A project valuation is a forecast, not an estimate. The value of a project and the value of a financial stock differ in another important respect. Stock values are determined in part by their perceived level of risk (or volatility): risky stocks earn higher returns than safe ones (7). This link is mediated by the market, because any change in the perceived risk generates trading that drives a corresponding change in the stock price. However, there is no such automatic link between risk and value for real projects. The most risky projects do not necessarily offer the greatest rewards, and an increase or reduction in uncertainty does not of itself make the eventual outcome more or less valuable to the company. To be sure, resolving uncertainty in a project may make it possible to reduce costs and consequently increase the value, but this is not necessarily so. To use a trivial example, the value of a project to harvest an orchard full of apples is not necessarily increased by making a better estimate of the weight of the crop. That depends on whether the new estimate will allow changes to the way the project is run, for example by allowing the farmer to negotiate a better price or to improve the efficiency of picking.

Valuation of Projects with Risk

The simplest project with risk would be one that incurs a fixed cost, C, but whose income is uncertain. If there are two possible outcomes, yielding income I1 with probability p or income I2 with probability (1 − p), then the usual method of valuing the project would be to say that the expected value of the income is I1*p + I2*(1 − p) and so the expected value, V, of the project is
V = I1*p + I2*(1 − p) − C    (Eq. 1)
If the uncertainty lies in the possibility that the project may be cancelled or otherwise end in failure, then I2 may be zero, and
V = I1*p − C    (Eq. 2)
Research-Technology Management
Table 1. Financial Parameters for a Simple Project Including Risk.

                                                               Phase 1   Phase 2   Phase 3
Probability of success of this phase (regardless of others)       25%       80%      100%
Probability of failure of this phase (regardless of others)       75%       20%        0%
Probability of failure at this phase after success
at previous ones                                                  75%        5%        0%
Here the effect of the uncertainty is simply to reduce the expected income. If the margin on the project is small, any value of p less than unity will reduce, and may wipe out, the predicted profit. The underlying assumption in this calculation is that the project will proceed to conclusion and the result will become clear only when all the cost has been incurred. More often a risky project would proceed through a number of stages, with the possibility that managers will stop or modify it as events unfold. As is fairly well known, taking this into account may substantially affect the valuation calculation (8). Consider a typical example of an innovation project that has three phases, with the possibility that the project may be stopped after either of the first two phases if the prospects do not look attractive. A project manager contemplating such a project might list the financial parameters as in Table 1. The simple financial calculation that ignores the phasing would show that this project has a probability of 20% of
achieving the income of 35M, giving a risk-weighted income of 7M. Since the costs are 8M, the value would be a loss of 1M. However, if the possibility of management action is included, there are three possible outcomes: a loss of 2M with probability 75% if the project is terminated after the first phase; a loss of 8M with a probability of 5% if it fails at the second after success at the first; or a profit of 27M with probability 20% if it goes to completion. The result is a more attractive value of 3.5M. This result is often called the Expected Commercial Value, ECV (2). The probability distribution for a project like this is shown schematically in Figure 1. For the sake of realism we have given each outcome a spread of values rather than the single ones used in the discussion above. Figure 1 shows how misleading the mean, or ECV, can be as a measure of value. It is easy to assume that the mean is the value that will occur if all goes well. In fact, it is not necessarily a particularly probable result at all; indeed,
Figure 1. Probability distribution for the value of a simple 3-phase project (schematic, not to scale).

March–April 2010
for projects such as that illustrated in Figure 1, it may actually be one of the least likely of all the outcomes. In addition, of course, the single figure completely fails to take account of the range of outcomes that may occur. Although the mean is an unhelpful measure for single projects, it is appropriate for valuing large portfolios. If the results of a portfolio of risky projects are added together, the uncertainties tend to cancel out, so that the total approaches more and more closely to the sum of their means as the number of projects in the portfolio increases. The problem is that the numbers must be rather large to make a difference (9): typically between 10 and 100 must come to fruition in the same period to make a serious reduction in the relative spread of outcomes. Moreover, this reduction applies only if all the projects are uncorrelated, which is rather unlikely if they are conducted in the same organization. It therefore seems prudent to assume that small portfolios are the rule rather than the exception in real businesses. The fundamental issue in valuing and comparing small portfolios of projects is the necessity to take account not just of average values but of the full range of possible outcomes for each.

Constructing Value Distributions: Decision Trees

A project with a number of phases of work can be represented using a decision tree, as in Figure 2. Each phase (represented by a square box in these illustrations) has associated costs and incomes, or, more generally, probability distributions of costs and incomes. Decision points (represented by circles) are included, with probabilities assigned to the possible outcomes. A properly constructed decision tree-type model can include all that is known, or surmised, about the project
and can provide as complete a representation as is desired. In principle, all the costs associated with uncertainty are included; for example, if spare factory capacity has to be made available in one phase against the possibility of being required after the next decision point, these costs would be included on the appropriate branch of the tree. The probabilities assigned in a decision tree cannot generally be estimated (as one might for a physical process) by taking samples from a number of equivalent previous events, because there are none; each project is unique, although the history of similar projects may be useful as a guide. These are not probabilities in the frequency sense but are subjective assessments measuring the confidence, or the degree of belief, of the estimator(s) in particular outcomes (10,11). We shall therefore use the term Confidence Distribution for a value range calculated from a model such as a decision tree. Simple decision trees, such as the one implied by Table 1, can be valued using a calculator. More complex ones that include probability distributions typically require Monte Carlo methods (12). The technique involves exploring the network using a random number generator to determine the costs and outcomes at each stage. By doing a large number of trials and combining the results, the method explores all possible outcomes and plots a frequency (confidence) distribution. Typical results and analysis are given later in the practical example.

Data in Decision Trees

A decision tree is only as good as the data that go into it. Research into estimating skills has shown that the human ability to assess probabilities is not impressive.
Figure 2. Example of a simple decision tree for a project, showing phases of activity (incurring costs and/or income) and decision points.
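To make the method concrete, here is a minimal sketch (our illustration, not code from the study) that runs the three-phase project of Table 1 through a Monte Carlo valuation. The triangular spreads are invented for the example, centred on the 2M, 6M and 35M figures used in the text.

```python
import random

def simulate_once(rng):
    """One trial of the three-phase project of Table 1, with illustrative
    triangular spreads around the central figures used in the text."""
    value = -rng.triangular(1.5, 2.5, 2.0)       # phase 1 cost, centred on 2M
    if rng.random() >= 0.25:                     # 75% confidence of stopping here
        return value
    value -= rng.triangular(5.0, 7.0, 6.0)       # phase 2 cost, centred on 6M
    if rng.random() >= 0.80:                     # 20% conditional failure at phase 2
        return value
    return value + rng.triangular(30.0, 40.0, 35.0)  # income, centred on 35M

rng = random.Random(1)
values = sorted(simulate_once(rng) for _ in range(100_000))
mean = sum(values) / len(values)   # approaches the ECV of 3.5M
```

Collapsing the spreads to their central values recovers the ECV arithmetic given above: 0.75 × (−2) + 0.05 × (−8) + 0.20 × 27 = 3.5M. The sorted sample is itself the confidence distribution discussed in the text.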
Kahneman, Slovic and Tversky point out that people appear to make these judgements using a few simple rules, or heuristics, which are often adequate but are certainly not based on formal statistical reasoning (13). One such is the representativeness heuristic: people tend to judge whether something belongs to a class simply by the extent to which it resembles members of that class. This leads to a number of errors, of which the most relevant here is a persistent tendency to expect a small sample to be closely representative of the population it comes from. This causes a propensity to over-estimate the significance of a few instances, and hence of one's own personal experience. Tversky and Kahneman have christened this tendency the law of small numbers (14). Another estimating heuristic is availability: people assess the frequency of occurrences by the ease with which examples of them come to mind. The emotional force of particular experiences makes them easier to recall; so in the project context we tend to over-estimate the frequency of high-impact triumphs and disasters. A third relevant heuristic is anchoring: the tendency to be unduly influenced by the most recently acquired information, either from one's own experiences or the views of others. The biases that may come from these heuristics are widespread and apparently innate. They certainly cast doubt on the accuracy that can be expected of predictions about events in innovation projects. For anyone who would wish to view man as a reasonable intuitive statistician, such results are discouraging (13). Individual biases can be mitigated by using groups to make the estimates. However, the estimates of groups may be distorted by the influence of powerful or charismatic individuals, or by groupthink, the propensity of tight-knit groups to over-value consensus and so mistake agreement for truth (15). Special management processes are required to minimize these effects.
The Delphi process (16) and the Risk Diagnosing Methodology reported by Philips and Unilever (17) are examples. The inherent uncertainty in the information should be reflected in the way the data are collected and presented; in particular, it is important not to use more subtlety in the analysis than the accuracy of the data can justify. In applying decision tree methods with a number of companies, we have found three practices helpful:

1. Avoid complex decision trees. The facilitator should constantly question whether additional branches to the tree reflect real choices that will be made, rather than merely expressing uncertainty. Simplify wherever possible.

2. Insist on ranges rather than single values for costs and incomes in the decision tree, unless there are very good reasons otherwise, but use only the most simple distributions to represent the uncertainty. We offer managers a choice of rectangular or triangular distributions only (although the triangle can be asymmetric).

3. Quantise the confidence estimates. This technique is used to restrict the values that can be used for the confidence estimates at each node, reflecting the fact that such estimates are inevitably uncertain. Thus, when managers estimate the probabilities of the branches from a 2-way node, we ask them to allocate, or wager, a small number of tokens according to the confidence they feel that the project will go one way or the other. Using 6 tokens constrains their views to a 5-point scale (5/1, 4/2, 3/3, 2/4, 1/5); 8 tokens constrains it to a 7-point scale. We find this approach has three advantages: it avoids pointless haggling over unreal distinctions; it restricts the ability of participants to nudge the result in the direction they prefer; and it reminds managers that the point at issue is the confidence they feel in the outcome from the node, not their estimate of a spurious probability. Using an even number of tokens seems to be preferable in practice because it gives the option of a balanced 3/3 or 4/4 split, whereas an odd number forces managers to show a preference at every stage. Twelve tokens work well for a 3-way node.

Interpreting Probability Distributions of Value

Validity and value of the detail

Confidence distributions of value are difficult to interpret. The eye is drawn to the peaks and troughs, which may be less significant than they appear. For example, the outcome may be more likely to lie within a broad, low peak than in a high, narrow one if the former has more area, although the latter is more prominent. It can help to draw the distributions in cumulative form as in Figure 3, because this allows a direct reading of how much confidence one may have that the value will fall above or below a certain limit.
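The token wagers translate into confidence levels by simple proportion; the helper below is our sketch of that rule (the function name is ours).

```python
def tokens_to_confidence(allocation):
    """Convert a token wager across the branches of a node into confidence
    levels; e.g. a 4/2 split of 6 tokens becomes (2/3, 1/3)."""
    total = sum(allocation)
    if total == 0:
        raise ValueError("at least one token must be wagered")
    return tuple(t / total for t in allocation)

# Six tokens on a 2-way node give the 5-point scale described above:
two_way = tokens_to_confidence((4, 2))        # (2/3, 1/3)
# Twelve tokens work well for a 3-way node:
three_way = tokens_to_confidence((6, 4, 2))   # (1/2, 1/3, 1/6)
```

The restricted scale follows automatically from the fixed token count: with 6 tokens on 2 branches, only the five splits 5/1 through 1/5 are expressible.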
The details of a confidence distribution must be treated with caution for two reasons: the first is the intrinsic unreliability of the data, as discussed above; the second, and crucial, issue is that a project is a single event, with a single (if currently unknown) result. The result of any single trial may fall anywhere where the value distribution is non-zero. The uncomfortable fact is that the detailed shape of the distribution has no predictive value. It is meaningless to say that one outcome is more or less likely than another when there will be only one event. The question is therefore what useful prediction can be made about this one project. The principal value of the confidence distribution is, therefore, to establish the upper and lower bounds within
which the actual outcome may be reasonably expected to lie. This approach accords well with everyday experience where, facing an uncertain situation, one's instinct is first to understand the worst-case scenario (showing whether the risk is acceptable), and then, less urgently, the best-case scenario, which shows whether the investment may be worthwhile. The upper and lower bounds are not necessarily the calculated extremities of the value distribution. In practice the management of uncertainty always involves some rejection of extreme possibilities, so it is appropriate to reject the extremes where confidence is low, perhaps the upper and lower 5%, giving a 90% confidence level for the remaining distribution. Moreover, the possibility of management intervention must be considered; for example, a project might in practice be terminated if it appeared likely to make more than a certain loss. For all these reasons, managers may feel justified in truncating the distribution in the manner shown in Figure 3, so that what remains represents the range of reasonable expectations for the project. We call the extremes of the remaining distribution the Highest Likely Value (HLV) and Lowest Likely Value (LLV).

Confidence distributions of value

The most complete estimate of the value of a project is a confidence distribution of value found by analysis of a decision tree or by some other modeling process. We emphasize that this is a forecast of the future and so is subject to many uncertainties that can be resolved only by the passage of time. But it is the best there is. Such distributions tend to be complex (as shown later by our practical case, for example) and this would appear to make comparisons between them very difficult.
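On a Monte Carlo sample, the truncation just described is simply a matter of dropping the tails and reading off the extremes. A sketch (names are ours), using the 5% tails suggested above:

```python
def likely_range(values, tail=0.05):
    """Drop the low-confidence tails of a sample of outcome values and
    return the Lowest and Highest Likely Values (LLV, HLV)."""
    ordered = sorted(values)
    k = int(len(ordered) * tail)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return kept[0], kept[-1]

# With 100 equally likely outcomes 0..99, trimming 5% from each end gives:
llv, hlv = likely_range(range(100))   # (5, 94)
```

Any management overrides, such as truncating at the loss at which a project would in practice be terminated, can be applied on top of this purely statistical trim.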
Figure 4. Two projects with the same mean outcome, but different distributions.
Dembo and Freeman propose assessing a risky prospect against a benchmark return, weighing its Upside against its Downside. This is consistent with the accepted view that a rational gambler would not accept a balanced wager. The Upside and Downside are respectively the confidence-weighted expectations for returns better and worse than the threshold (Figure 5). Dembo and Freeman also propose that in practice companies would require a better return on their risk than a simple balance of Upside against Downside, so they introduce the Risk Aversion factor, λ, to quantify the requirement for a better ratio between risk (Downside) and Upside:

Value = Upside − λ·Downside    (Eq. 3)

This approach is very helpful in comparing projects, but it is open to criticism as a measure of value on two grounds. The first is that it delivers a single point estimate which, as we have argued, is entirely misleading for single projects or small portfolios because it masks all the uncertainty that may be present. The second objection is that it is difficult to know what value should be assigned to λ.

The Perspective of Expected Utility Theory

Expected Utility Theory attempts to determine the extent to which a gambler would or should place the same value on gains and losses. The theory proposes that people attach different utility to different levels of their total wealth. Wealth of zero is held to have zero utility, but the extra utility of increments of wealth decreases steadily (Figure 6). This means that any gain would be valued less than an equivalent loss, so everyone would be to some extent risk-averse, the more so when the sums involved are significant in comparison with their total wealth (19). Unfortunately, it has been shown that individuals can be inconsistent in assigning utilities, even in simple circumstances (20). If the utility curve has anything like the classic convex shape, it will affect the relative weighting of upsides and downsides only if the investments involved are significant compared with the total wealth of the individual or organization. Companies seldom make such large bets on R&D or technology projects, so it would appear that λ in Dembo and Freeman's model should be close to 1 in most commercial cases. Assessing the balance of risk and reward for a project is therefore broadly a matter of balancing the Upside and Downside possibilities with reference to a threshold.

Figure 5. Illustrating the upsides and downsides of two projects with respect to a benchmark. (The distribution has been truncated, as recommended above.)

Measures of Upside and Downside

The simplest measures of Upside and Downside are the distances between the benchmark and the HLV or LLV respectively. A more conservative view would be to take the integrals under the probability distribution curve above and below the threshold: the Upside and Downside parameters proposed by Dembo and Freeman. The HLV and LLV are particularly useful during project management because they focus on the best and worst plausible outcomes, and these are the aspects that will need active attention and management as the project progresses. The LLV is crucial because it measures the money or resources that could be required to cover the worst plausible outcome. It represents, as it were, the money that must be put on the table to play the game. However, we find that managers are often uncomfortable with using the Highest Likely Value to describe the possible benefits of a project and prefer to use the more conservative Upside parameter, which gives a confidence-weighted view. Table 2 summarizes these considerations.

Table 2. Comparison of Measures of Upside and Downside for Confidence Distributions of Value.

LLV
  Definition: Worst plausible outcome. Lower extent of value distribution after truncation.
  Advantages: Defines the money, or value, placed at risk in the worst case. A safe budget. A simple measure that may often be calculated without full simulation.
  Disadvantages: May appear to be an extreme view. The outcome will probably not be as bad as this in practice.

HLV
  Definition: Best plausible outcome. Upper extent of value distribution after truncation.
  Advantages: A simple measure that may often be calculated without full simulation.
  Disadvantages: An optimistic view. In practice the result may well be worse. Managers may fear that this is seen as a target, not an upper limit.

Downside
  Definition: Area (integral) of confidence distribution of value below benchmark.
  Advantages: Appears to be more realistic as it takes account of all the information in the distribution. However, this has no predictive value for a single case.
  Disadvantages: Optimistic, and arguably misleading, view. This value may well be exceeded in practice.

Upside
  Definition: Area (integral) of confidence distribution of value above benchmark.
  Advantages: Comfortably conservative. Appears to be more realistic as it takes account of all the information in the distribution, although this has no predictive value for a single case.
  Disadvantages: Outcome may well be better than this. Opportunities may be missed if this is used as the basis for project planning.

Project Selection within a Finite Cash Budget

Gamblers probably give a much higher utility weighting to the life-transforming possibility of a big win than they
do to their bet, which may be so small as to be of no account. If so, they are effectively giving a zero utility weighting to small losses but a sharply higher one to large gains. This would make gambling a more rational activity than it seems, because the ratio of utility between the stake and the payback would actually be more favorable than the sums of money would imply. Companies often have a fixed budget that they are prepared to put at risk on innovative projects, and especially on early-stage technology. This risk budget is usually not regarded as a cost to be deducted from the project profit, but more like a fixed overhead to be used to maximum efficiency. This would imply that the utility weighting below the benchmark in Figure 5 would be very low if expenditure is within the budget, but rising rapidly if the budget were exceeded. This is shown by the dotted line in Figure 7. In these circumstances, the role of portfolio selection will be to make the best use of the risk budget by generating the maximum opportunity for benefit. The ratios HLV/LLV, Upside/Downside or Upside/LLV may be used as figures of merit in comparing projects. It is clear that there is some scope for choice here. Overall, the Upside/LLV ratio appears preferable, as it expresses a conservative view of the benefits against the resources that must be made available to ensure that the project can proceed to conclusion. However, in managing the projects that are selected it is important to track both LLV and HLV. Note that we are using LLV to denote the positive amount by which the project would fall below the acceptability benchmark in the worst case. We assume that all projects under consideration have a non-zero value for the LLV.
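On sampled outcomes, the Upside and Downside of Table 2 reduce to conditional sums, and the candidate figures of merit follow directly. A sketch (names ours) using the single-valued three-phase project from Table 1, with outcomes −2, −8 and +27 at confidences 75%, 5% and 20%, against a benchmark of zero:

```python
def upside_downside(values, benchmark=0.0):
    """Confidence-weighted expectations of returns above and below the
    benchmark (the Upside and Downside parameters of Dembo and Freeman)."""
    n = len(values)
    upside = sum(v - benchmark for v in values if v > benchmark) / n
    downside = sum(benchmark - v for v in values if v < benchmark) / n
    return upside, downside

sample = [-2.0] * 75 + [-8.0] * 5 + [27.0] * 20
up, down = upside_downside(sample)   # approximately (5.4, 1.9)
llv = -min(sample)                   # 8.0, the worst-case shortfall below zero
merit = up / llv                     # Upside/LLV, the ratio preferred above
```

Note that Upside − Downside here equals 3.5, the ECV of this project: Eq. 3 with λ = 1 reproduces the single-point value, which is exactly why it masks the spread of outcomes.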
Any project whose LLV meets the benchmark would automatically be chosen; if all projects exceed the acceptable benchmark, then we would need to raise the benchmark to distinguish between them. Interestingly, if two projects have value distributions with a common central value that is above zero, the narrower distribution will generally have the better ratio. Figure 8 illustrates this, where HLV/LLV is used as the figure of merit. This gives substance to the intuitive feeling noted above that the less risky projects are generally the more desirable.

Practical Example

We illustrate the practical use of this approach with reference to an application in which a company wanted to determine the most effective way to introduce RFID technology. It was unclear whether it would be better to proceed via one or two pilot implementations to minimize the risk, or to go for a single implementation with no preliminaries. A group of managers first constructed the three decision trees working as a team. They then made their own estimates of confidence levels (using a 12-token voting method, as all the nodes had 3 branches), and of cost and income data (using flat or triangular distributions). Finally, the estimates were pooled and discussed to come to agreed values. This process took a few hours. The Monte Carlo analysis was then done offline and the resulting value distributions presented to the team the next day for discussion and review. In this case it was agreed to truncate the distributions by simply removing the 5% tails at either end. Figures 9–14 show the decision trees and resulting value–confidence distributions.
Figure 8. Illustrating the figure of merit for three value distributions with the same center but different widths.
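The effect shown in Figure 8 can be checked with toy numbers (ours, purely illustrative): for symmetric distributions sharing a positive centre, halving the width improves the HLV/LLV figure of merit.

```python
def hlv_llv_ratio(centre, half_width):
    """HLV/LLV figure of merit for a symmetric value distribution whose
    worst case falls below a benchmark of zero. LLV is the positive
    shortfall below the benchmark, as in the text."""
    hlv = centre + half_width
    llv = half_width - centre
    if llv <= 0:
        raise ValueError("distribution never falls below the benchmark")
    return hlv / llv

narrow = hlv_llv_ratio(centre=5.0, half_width=10.0)   # 15/5 = 3.0
wide = hlv_llv_ratio(centre=5.0, half_width=20.0)     # 25/15, about 1.67
```

The narrower project wins even though both have the same central value of 5, matching the intuition that the less risky project is the more desirable.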
Table 3 compares the main parameters of the confidence distributions for the three implementations. This shows that although the mean, or expectation value, is highest for the two-pilot implementation, the one-pilot case gives the best ratio of Upside to LLV, which, as argued above, is a more secure basis for choice. This is the implementation that the company chose to follow.

Compiling a Small Portfolio

A small portfolio making optimum use of the risk budget may now be chosen by the following procedure:

1. Construct a decision tree or similar model for each project.
2. Calculate the confidence–value distributions.

3. Truncate each distribution as appropriate and calculate the LLV and the Upside value.

4. Select projects in the order of their Upside/LLV ratio and combine them into a portfolio. If the outcomes of the projects are 100% statistically correlated (an unlikely situation), this is done by simply adding the distributions. If the projects are uncorrelated, their distributions should be combined by the mathematical process of convolution (21).

5. As each new project is added, truncate the resulting distribution as appropriate and determine the LLV of the
Figure 10. Confidence distribution of value derived from Figure 9 by Monte Carlo simulation.
Figure 11. Decision tree for RFID implementation project with one pilot phase.
Figure 12. Confidence distribution of value derived from Figure 11 by Monte Carlo simulation.
portfolio. This process continues until the risk budget is used up or until the pool of sufficiently attractive projects is exhausted (managers may set a minimum value for the ratio Upside/LLV, which plays somewhat the same role as Dembo's λ). It should be noted that a portfolio made up of many small projects will, as noted above, have a relatively narrower value distribution than one made up of a few larger ones. There may therefore be scope for some judicious trial and error in choosing the best mix of projects. This procedure is a solution to the problem of how to construct an optimal portfolio of risky projects. It is
equally applicable to small or large portfolios. Risk and reward are here seen not as separate considerations but simply as aspects of the range of possibilities open to each project. There is no need, or scope, for a subjective balancing of one against the other, and so techniques such as risk–reward diagrams are no longer appropriate; the issue is simply how to make best use of the available risk budget.

Small Portfolios: The General Case

The analysis may proceed exactly as before, except that the confidence distribution for each project is plotted in
terms of the ratio of return to investment, instead of cash. The company will have an expected minimum return on investment and this gure becomes the threshold against which the parameters are calculated. Any projects whose LLV is above the threshold are automatically included in the portfolio, and the rest are selected in the order of Upside/LLV until the investment budget is used up. In the happy circumstance that there are more projects with an LLV above threshold than the budget allows, the threshold can be raised until this is no longer so. In Conclusion Small portfolios differ from large ones in that the use of statistical measures of outcome for the individual projects in the small portfolio is invalid and can be highly misleading. In particular, a project cannot validly be described as having an expected value that is subject to risk; all that can be said is that its outcome will fall somewhere within a range of values. For a project, a decision tree with Monte Carlo analysis gives a formal way to calculate the condence distribution of outcomes. Some management judgment may be needed to determine the reasonable upper and lower condence limits. The detail of the probability distribution between the limits has little predictive value, and is in any case likely to be unreliable. A risky project may be regarded as offering a wager in which the possibility of an outcome below a required benchmark is wagered against the possibility of above-
benchmark performance. Several figures of merit are possible for assessing this balance, and so for comparing one project with another. The preferred ratio is that of the Upside to the Lowest Likely Value, as this represents the confidence-weighted value of a successful outcome relative to the budget that must be made available to run the project. A portfolio can be compiled simply by selecting projects in this order until the available investment funds are used up. This approach makes it possible to choose small portfolios in a straightforward and logically defensible way without using invalid statistical measures. It also avoids the undefined judgmental process that managers are required to employ when balancing the classic risk-return matrix. However, it must be admitted that the shift of perspective is uncomfortable. It requires an effort of will to abandon the familiar ideas of mean, risk and
Figure 13. Decision tree for RFID implementation project with two pilot phases.

March-April 2010
Figure 14. Confidence distribution of value derived from Figure 13 by Monte Carlo simulation.
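A confidence distribution of this kind can be generated by running the decision tree many times and reading off percentiles. The sketch below is illustrative only: the two pilot phases, their success probabilities, the cash figures, and the choice of the 10th/90th percentiles as the confidence limits are all assumptions for the example, not figures taken from the article.

```python
import random

def simulate_project():
    """One pass through a hypothetical two-pilot decision tree.
    All probabilities and cash figures ($M) are illustrative assumptions."""
    value = -1.0                        # cost of pilot 1
    if random.random() > 0.7:           # pilot 1 fails 30% of the time
        return value                    # project terminated
    value -= 2.0                        # cost of pilot 2
    if random.random() > 0.8:           # pilot 2 fails 20% of the time
        return value
    value -= 5.0                        # full rollout cost
    value += random.uniform(8.0, 25.0)  # uncertain benefit stream (NPV)
    return value

def confidence_distribution(n_trials=100_000, seed=1):
    """Sort simulated outcomes and read off percentile parameters."""
    random.seed(seed)
    outcomes = sorted(simulate_project() for _ in range(n_trials))
    def percentile(p):
        return outcomes[int(p * (len(outcomes) - 1))]
    # LLV/HLV taken here at the 10th/90th percentiles, one plausible
    # reading of "reasonable upper and lower confidence limits"
    return {"LLV": percentile(0.10), "median": percentile(0.50),
            "HLV": percentile(0.90)}

print(confidence_distribution())
```

With these assumed figures, roughly 44 percent of trials end in termination at a loss, so the lower confidence limit sits at the pilot-stage loss while the upper limit reflects a successful rollout; only the limits, not the detail between them, would be carried forward.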
Table 3. Comparison of Three Strategies for Implementing RFID Technology Using Parameters of the Confidence Distributions of Value and Their Ratios. [Table data not reproduced; parameters compared: Highest value (HLV), HLV/LLV, Expected Upside/Downside, Mean, Expected Upside/LLV.]
probability and instead concentrate solely on understanding and comparing the range of possible outcomes of the projects concerned.
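The selection rule described above — admit every project whose LLV already clears the benchmark, then rank the remainder by the ratio of Upside to LLV and add them while the budget lasts — can be sketched in a few lines. The project figures and budget below are invented for illustration, and treating the LLV of a below-benchmark project as the money put at risk (its absolute value) is one reading of the article's ratio.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    llv: float      # Lowest Likely Value (lower confidence limit, $M)
    upside: float   # confidence-weighted value of above-benchmark outcomes ($M)
    cost: float     # investment required to run the project ($M)

def select_portfolio(projects, budget, benchmark=0.0):
    """Greedy selection following the article's rule, under one reading:
    projects whose LLV clears the benchmark are admitted first; the rest
    are ranked by Upside relative to the money at risk (|LLV|), and
    added while the investment budget lasts."""
    safe = [p for p in projects if p.llv >= benchmark]
    risky = sorted((p for p in projects if p.llv < benchmark),
                   key=lambda p: p.upside / abs(p.llv), reverse=True)
    chosen, spent = [], 0.0
    for p in safe + risky:
        if spent + p.cost <= budget:
            chosen.append(p)
            spent += p.cost
    return chosen

# Hypothetical candidate projects, purely for illustration
candidates = [
    Project("A", llv=1.0,  upside=4.0,  cost=2.0),   # clears benchmark
    Project("B", llv=-2.0, upside=10.0, cost=3.0),   # ratio 5.0
    Project("C", llv=-4.0, upside=6.0,  cost=4.0),   # ratio 1.5
    Project("D", llv=-1.0, upside=2.5,  cost=5.0),   # ratio 2.5
]
print([p.name for p in select_portfolio(candidates, budget=10.0)])
```

With a budget of 10, project A is admitted on its LLV, B and D follow on their ratios, and C is excluded because the remaining funds cannot cover its cost.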
References and Notes

1. Mitchell, R., Romito, C. and Probert, D. 2005. Selecting and Valuing Small Portfolios of Projects. IEMC Conference, St. John's, Newfoundland.
2. Cooper, R.G., Edgett, S.J. and Kleinschmidt, E.J. 2001. Portfolio Management for New Products, 2nd ed. Cambridge, Mass.: Perseus Books.
3. Roussel, P.A., Saad, K.N. and Erickson, T.J. 1992. Third Generation R&D: Managing the Link to Corporate Strategy. Cambridge, Mass.: Arthur D. Little.
4. Davis, J., Fusfeld, A., Scriven, E. and Tritle, G. 2001. Determining a project's probability of success. Research-Technology Management, Vol. 44 (May-June), pp. 51-57.
5. Tritle, G.L., Scriven, F.V. and Fusfeld, A.R. 2000. Resolving Uncertainty in R&D Portfolios. Research-Technology Management, Vol. 43, No. 5, pp. 47-55.
6. Amram, M. and Kulatilaka, N. 1999. Real Options: Managing Strategic Investment in an Uncertain World. Boston, Mass.: Harvard Business School Press.
7. Weston, J.F., Besley, S. and Brigham, E.F. 1996. Essentials of Managerial Finance, 11th ed. Fort Worth, TX: The Dryden Press.
8. Dixit, A.K. and Pindyck, R.S. 1994. Investment Under Uncertainty. Princeton, NJ: Princeton University Press.
9. A portfolio of n identical but uncorrelated projects has a mean that is n times the mean of one project and a standard deviation (width) that is √n times the standard deviation of one project. The width of the distribution increases as n increases, but the width divided by the mean decreases as the square root of n. This means that the spread in the value of a portfolio of 10 projects relative to its mean will be only about 1/3 that of a single project; that of 25 projects will be 1/5.
10. de Finetti, B. 1990. Theory of Probability: A Critical Introductory Treatment. Chichester: Wiley.
11. Ramsey, F.P. 1926. Truth and Probability. In Foundations of Mathematics and Other Essays, R.B. Braithwaite (ed.). Routledge & Kegan Paul, pp. 156-198.
12. Razgaitis, R. 2003. Dealmaking Using Real Options and Monte Carlo Analysis. Hoboken, NJ: John Wiley.
13. Kahneman, D., Slovic, P. and Tversky, A. 1982. Judgment under Uncertainty: Heuristics and Biases, Chapter 1. Cambridge: Cambridge University Press.
14. Tversky, A. and Kahneman, D. 1971. Belief in the Law of Small Numbers. Psychological Bulletin, Vol. 76, No. 2, pp. 105-110.
15. Janis, I.L. 1972. Groupthink. Boston: Houghton-Mifflin.
16. Makridakis, S., Wheelwright, S.C. and Hyndman, R.J. 1998. Forecasting: Methods and Applications. New York: John Wiley.
17. Keizer, J.A., Halman, J.I.M. and Song, M. 2002. From experience: applying the risk diagnosing methodology. Journal of Product Innovation Management, Vol. 19, pp. 213-232.
18. Dembo, R. and Freeman, A. 1998. Seeing Tomorrow: Rewriting the Rules of Risk. New York: John Wiley.
19. Davis, J., Hands, W. and Maki, U. (eds). 1997. Handbook of Economic Methodology. London: Edward Elgar, pp. 342-350.
20. An example is the Ellsberg paradox (see Anand, P., Foundations of Rational Choice under Risk. Oxford: Oxford University Press, 1995).
21. Ross, S. 2006. A First Course in Probability, 7th ed. Upper Saddle River, NJ: Pearson Education, p. 280.
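The scaling argument in note 9 can be checked numerically: summing n independent copies of a distribution multiplies the mean by n but the standard deviation only by √n, so the relative spread shrinks as 1/√n. The per-project distribution below is an arbitrary illustration; only the ratios matter.

```python
import random
import statistics

def relative_spread(n_projects, n_trials=20_000, seed=2):
    """Std/mean of the total value of a portfolio of n identical,
    uncorrelated projects, each drawn from an illustrative uniform
    distribution (the result is distribution-independent)."""
    random.seed(seed)
    totals = [sum(random.uniform(0.0, 2.0) for _ in range(n_projects))
              for _ in range(n_trials)]
    return statistics.pstdev(totals) / statistics.fmean(totals)

one = relative_spread(1)
ten = relative_spread(10)
twenty_five = relative_spread(25)
# Ratios approach sqrt(10) ~ 3.2 and sqrt(25) = 5, as note 9 states
print(one / ten, one / twenty_five)
```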