
SPE 130141

Bid Optimization with Monte Carlo Simulation


John Schuyler, PetroSkills®

Copyright 2010, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Hydrocarbon Economics and Evaluation Symposium held in Dallas, Texas, USA, 8–9 March 2010.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been reviewed
by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its officers, or
members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is
restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
The optimizer’s curse phenomenon seen in portfolio optimization appears similar to the winner’s curse observed in competitive
bidding. A technique that calculates the effects of the optimizer’s curse and sampling error bias is adapted for analyzing competitive
bidding. The demonstration example is an auction of a single risky project, which could be an exploration block containing one
prospect. Bid optimization involves finding a function that expresses the bid fraction in terms of estimates of 1) project expected
monetary value and 2) the number of other bidders. A foundation principle for a symmetric auction is that each bidder must be in
equilibrium. The base model assumes that all auction competitors a) have matching judgments for the project parent population (in
three parameters), b) use the same estimate/actual error functions (in the three judgment parameters), c) use the same
bid fraction function, and d) are risk neutral. Several correlations are embedded in the calculations. Incorporating specific company
cost and information advantages, sub-models representing the project appraisal process to determine commercial success, and
other such details is straightforward. The example assumes one company (us) has assessed the chance of success, test cost, and
discovery value for a subject prospect. A brute-force Monte Carlo simulation approach solves the Bayesian calculations for the auction
simulation. The calculations produce estimates of 1) the probability of winning the auction and 2) expected monetary value versus bid
amount, thus determining the optimal bid.

Introduction
Most of the world’s asset (and service) acquisitions and sales are through auctions. An auction is a (usually public) competitive
sale in which property, goods, services or other assets are sold.
In this discussion, we will assume the auction involves sale of a risky asset, such as exploration rights on a block containing
one prospect. We will assume a sealed-bid auction, with the highest bidder winning the sale and paying the bid amount. For
convenience, the discussion will assume our company will be competing against an unknown, though estimated, number of other
bidders (NOB).
For a participant in the auction process, the most important piece of information is the value of the asset. Participants usually
appraise the asset value first, and then determine their appropriate bid fraction (BF):

Bid Fraction = BF = Bid Amount / Value Estimate .................................................. (1)

With much money often at stake, there is great opportunity to apply quantitative methods for optimizing BF, displacing what is
most often determined by experience and intuition.
There is substantial literature about auction theory. This is a subset of game theory, which is the study of interactions among
presumably rational players. By far the most widely cited application of game theory is auction of radio spectrum slices (Milgrom,
2004). Spectrum licenses are becoming increasingly valuable as ever more bandwidth is needed for cellular phone and data
communication services. Governments seek game theory experts to design the auctions. Competing bidders wishing to secure
those licenses also employ or engage game theorists.
The spectrum auctions are designed to maximize good use of the asset and to maximize license revenues for the government.
These auctions usually involve multiple bidding rounds in a process that reveals pricing and other information to the bidders. In

contrast, most other auctions follow one of a few straightforward types:
• Sealed-bid or open (English or open-outcry)
• Ascending or descending price (these apply only to open auctions)
• First or second price (in a second-price or Vickrey auction, the winner is the high bidder but pays the bid price of the
second-highest bidder)
This paper describes a sealed-bid, first price type auction simulation with risk-neutral competitors and seller.
Klemperer (2004) summarizes the key “Revenue Equivalence Theorem which, subject to some reasonable-sounding
conditions, tells us that the seller can expect equal profits on average from all the standard (and many non-standard) types of
auctions, and that buyers are also indifferent among them all” (p. 2). If there were a simple auction type that provided superior
returns for sellers, then most sellers would use that type and buyers would shun auctions of that type. Game theorists make their
living dealing with unusual auction structures and contaminating conditions.
Excepting one what-if case described toward the end, all competitors are assumed equal. Each competitor is trying to
maximize his expected monetary value (EMV, the expected value of net present value (NPV)), and every bidder thinks his asset
appraisal is best. Bidders should be in equilibrium: no bidder can change his bid to improve his EMV.
We will ignore some real-world complexities including such features as:
• Behavioral effects because buyers and sellers will meet again
• Buyers and seller who may be risk-averse
• Collusion among buyers
• Different asset values for individual companies due to synergies with existing operations, advanced technology or other
cost or revenue advantage
• Reserve (minimum price acceptable to the seller) price, and whether or not this is revealed
• Differences in information quality (e.g., estimation precision) and other asymmetries.
While it is straightforward to include such features, the complexity of the model will likely grow exponentially.
In summary, this paper’s model includes:
• Describing the project asset in terms of three parameters:
o Chance of Success (CoS),
o Test Cost (Test, net present value cost to determine commercial success), and
o Discovery (Disc, net present value of success or discovery, after paying Test).
The expected monetary value equation before deducting the bid amount is simply:
EMV = −Test + CoS × Disc .................................................. (2)
• Describing the parent project population of which the subject asset is a member. In addition to the three components of
the EMV calculation, part of the parent population description is the Number of Other Bidders (NOB) that would be
attracted to an auction of the sample asset.
• Describing our and other companies’ random evaluation errors as error functions, judged or historical distributions of
estimate/actual ratios. The model includes these random evaluation errors for the four population parameters and assumes
that all competitors have the same information quality.
• Our specific expected value assessments for each of the four parameters for a particular project or asset.
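As a quick check of Eq. 2, here is a one-line implementation. Python is used for sketches in this discussion; the paper's own calculations used Excel add-ins and MATLAB.

```python
def emv(cos, test, disc):
    """Eq. 2: expected monetary value before deducting the bid amount."""
    return -test + cos * disc

# With the Appendix B population means (CoS ~ 0.16, Test = $30 million,
# Disc = $300 million), this reproduces the average EMVactual of about
# $18 million reported in Table 1.
print(emv(0.16, 30.0, 300.0))  # 18.0 (within floating-point rounding)
```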
This paper is not attempting to advance auction theory. It is offered merely to describe a calculation approach that may be useful.

Optimizer’s and Winner’s Curses


My past attempts at bidding simulation have proved illuminating yet frustrating. My colleague Tim Nieman and I wrote a recent
paper (Schuyler and Nieman, 2008) about the optimizer’s curse observed in portfolio management. It seems that the optimizer’s
curse effect is similar to the long-recognized winner’s curse (Thaler, 1994).
The approach we used for modeling and adjusting-out the optimizer’s curse is applicable to modeling auctions. Calculations
with probability distributions are always difficult. Fortunately, Monte Carlo simulation (MCS) enables calculations which are
usually impossible with symbolic calculus. A drawback of MCS is that the calculations are “noisy.” Random sampling errors lead
to imprecise answers unless an extraordinary number of trials is performed. This is an unfortunate feature for optimization
problems, such as the topic of this paper. Fortunately, computers are increasingly fast and inexpensive.

Winner’s Curse
Bidders may, on average, be objective in their valuations of assets that they want to acquire at auction. Despite this, the winner
is usually the one with the highest, most-optimistic value assessment this time. Auction participants have observed that if you have
the winning bid:
• The good news is that you won the asset.
• The bad news is that you likely paid too much.
Corollary: The only way to be “successful”—win bids—in acquisitions is to overpay.
The winner’s curse is the tendency in competitive bidding for the bidder with the most optimistic value assessment to win.
This was described in a seminal (at least for the petroleum industry) paper by Capen, Clapp and Campbell (1971) about their
bidding work for Atlantic Refining Company (which later became ARCO). Their calculations show the systemic bias that helps
explain why companies tend to overpay for acquisitions.
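The winner's-curse mechanism can be sketched in a few lines: give several unbiased bidders noisy estimates of a known true value and award the asset to the highest estimate. The true value, bidder count, and uniform noise below are illustrative assumptions, not the paper's model:

```python
import random

random.seed(1)

TRUE_VALUE = 100.0   # hypothetical asset value, $million
N_BIDDERS = 7        # mirrors the paper's mean NOB
TRIALS = 20000

winner_estimates = []
for _ in range(TRIALS):
    # Each bidder's estimate is unbiased: the true value times a noise
    # ratio with mean 1.0 (a stand-in for an estimate/actual function).
    estimates = [TRUE_VALUE * random.uniform(0.5, 1.5) for _ in range(N_BIDDERS)]
    winner_estimates.append(max(estimates))  # highest estimate wins the sale

avg_winner_est = sum(winner_estimates) / TRIALS
print(f"True value: {TRUE_VALUE:.1f}")
print(f"Average winning estimate: {avg_winner_est:.1f}")  # well above the true value
```

Even though every individual estimate is unbiased, selecting the maximum systematically overvalues the asset.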
Optimizer’s Curse
Smith and Winkler (2006) have shown a systemic bias in portfolio planning. The optimizer’s curse is the tendency for
portfolios to underperform expectations. With portfolios, project screening causes the bias. With auctions, it is the highest bid
winning the sale that causes the winner’s curse bias. I believe the optimizer’s curse and winner’s curse are the same phenomenon.
Most people assume that disappointment in projects is most often the result of optimism or other judgment bias. We want
and expect our subject matter experts to be unbiased (objective) with their assessments. In addition, biases should not be
introduced by the forecast or estimation calculations. Even if everyone and all calculations are unbiased on average, we will still
observe the optimizer’s curse. This is because of screening: The projects we choose to fund are the ones where we tend to be
optimistic.
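The screening effect can be demonstrated with a toy simulation: unbiased actuals, unbiased evaluation errors, a fund-if-positive rule, and a comparison of the funded projects' estimates against their actuals. The normal distributions and their spreads here are illustrative assumptions:

```python
import random

random.seed(2)

N_PROJECTS = 100000

selected_est, selected_actual = [], []
for _ in range(N_PROJECTS):
    actual = random.gauss(0.0, 50.0)             # project EMV, $million
    estimate = actual + random.gauss(0.0, 30.0)  # unbiased evaluation error
    if estimate > 0:                             # the "EMV decision rule" screen
        selected_est.append(estimate)
        selected_actual.append(actual)

mean_est = sum(selected_est) / len(selected_est)
mean_actual = sum(selected_actual) / len(selected_actual)
print(f"Mean estimate of funded projects: {mean_est:.1f}")
print(f"Mean actual of funded projects:   {mean_actual:.1f}")  # lower than the estimate
```

No one is biased, yet the funded portfolio reliably underperforms its estimates: the screen preferentially admits projects whose errors happened to be optimistic.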
Sampling Error Bias
Unbiased subject matter experts will make random measurement or judgment errors. One way to demonstrate objectivity (lack
of bias) is to record judgments and later compare them with actual outcomes. Fig. 1 is an example frequency histogram of the
estimate / actual ratios of historical assessments. The mean value of the frequency distribution should be about one if the
assessment process is objective. A smaller “width” of the distribution indicates greater precision in estimation.

Fig. 1⎯Estimate/Actual ratio histogram used to demonstrate objectivity in measurement or estimation.

In modeling projects and portfolios, I use an error function probability density function (p.d.f.) with a shape similar to Fig. 1 to
represent random evaluation errors. A sample estimate is obtained by multiplying a sample from the parameter’s population by a
sample from the error function for that parameter. This is a key calculation in my approach.
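That estimate-generation step might be sketched as follows. The triangular error ratio here is a stand-in for the beta-distributed E/A functions of Appendix B, chosen only because it is simple and has mean 1.0:

```python
import random

random.seed(3)

def sample_estimate(population_sample, rng=random):
    """Estimate = population sample times one draw from an
    estimate/actual (E/A) error-ratio distribution with mean 1.0."""
    ea_ratio = rng.triangular(0.4, 1.6, 1.0)  # mean = (0.4 + 1.6 + 1.0) / 3 = 1.0
    return population_sample * ea_ratio

disc_actual = 300.0                      # one draw from the Disc population, $million
disc_estimate = sample_estimate(disc_actual)
print(f"Disc estimate: {disc_estimate:.1f}")
```

Averaged over many draws, the estimates are unbiased (mean near the population sample), which is exactly the objectivity property of Fig. 1.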
What we find in modeling this way—as in the real world—is that estimates that are higher than the population mean tend to
be optimistic, and those estimates lower than the population mean tend to be pessimistic. We are assuming that higher is better.
When naming this sampling error bias (SEB), we thought the attractor would be the population mean. Actually, the
attractor is at a point just slightly higher than the mean, as shown in Fig. 2. The SEB represents much, if not most, of the overall
optimizer’s curse bias.
Fig. 2⎯Correction for the sampling error bias (SEB) as a function of the original estimate values.

Auction Model
Two ideas are central to the auction model. First, the auction is efficient. The EMV for the winner must be zero. If it were positive,
then any bidder would do better to increase his bid amount slightly. Similarly, if the EMV for the winner were negative, then the
winning bidder could improve his EMV by reducing his bid amount.
Second, all bidders in this demonstration model are assumed identical. However, we can use alternate BF functions or error
functions if there is competitive intelligence. In this model we assume all bidders use the same function to determine BF. No one
bidder can change his bid amount in order to improve his EMV.
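The zero-EMV efficiency condition can be illustrated with a stripped-down symmetric auction in which every bidder applies the same constant bid fraction; a grid search then locates the fraction where the winner's average net EMV is near zero. The lognormal asset values and uniform error ratios below are illustrative assumptions, and a constant BF is a simplification of the paper's BF function:

```python
import random

random.seed(4)

N_BIDDERS = 7      # mirrors the paper's mean NOB
TRIALS = 4000      # Monte Carlo trials per candidate bid fraction

def winner_net_emv(bf):
    """Average EMV net of bid for the winner when every bidder
    applies the same constant bid fraction bf to his own estimate."""
    total = 0.0
    for _ in range(TRIALS):
        actual = random.lognormvariate(3.0, 0.5)   # asset EMV, $million (assumed)
        # unbiased multiplicative evaluation errors, one per bidder
        estimates = [actual * random.uniform(0.6, 1.4) for _ in range(N_BIDDERS)]
        winning_estimate = max(estimates)          # highest bid wins
        total += actual - bf * winning_estimate    # winner pays BF x his estimate
    return total / TRIALS

# crude grid search for the bid fraction where the winner nets about $0
best_bf = min((step / 100 for step in range(30, 101, 5)),
              key=lambda bf: abs(winner_net_emv(bf)))
print(f"Bid fraction where winner EMVnet is nearest $0: {best_bf:.2f}")
```

At low bid fractions the winner keeps a positive EMV, so competitors would bid up; the equilibrium-like fraction sits where winning is, on average, worth nothing.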

Structure
Fig. 3 shows a high-level flow diagram of the auction model. Appendix A is a script outline of the process. Appendix B contains
details of the assumptions and parameters.

Fig. 3⎯Auction model flow diagram.

Inputs
The project type parent population is characterized by the three parameters identified earlier: CoS, Test and Disc. NOB (number
of other bidders) is also included in each parent sample record because this variable is correlated to Disc. An Excel add-in tool for
Monte Carlo simulation, either @RISK® (a product of Palisade Corp.) or Crystal Ball® (a product of Oracle Corp.), is used to
generate a 100k recordset for samples of the four variables.
Test is correlated to Disc by means of a simple probability tree model shown as Fig.4. There is a 0.2 chance that the first test
stage is successful. If not, the project is dropped. If the first stage is successful, there is a 0.5 chance that additional work (such as
drilling additional appraisal wells) is needed to confirm commerciality; if not, the project is declared successful. If the second stage
is needed, there is now a 0.6 chance of success; otherwise, the project fails.
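A sketch of sampling that appraisal tree follows. The fixed stage cost is a placeholder; Appendix B ties both stage costs to Disc (1/11 × Disc × noise):

```python
import random

random.seed(5)

def sample_test_outcome(stage_cost, rng=random):
    """One pass through the Fig. 4 appraisal tree.
    Returns (total_test_cost, success_flag)."""
    cost = stage_cost                   # Test Cost 1 is always spent
    if rng.random() >= 0.2:             # first stage fails (probability 0.8)
        return cost, False              # project dropped
    if rng.random() < 0.5:              # no additional confirmation needed
        return cost, True               # declared successful
    cost += stage_cost                  # Test Cost 2 for the second stage
    return cost, rng.random() < 0.6     # second stage succeeds with probability 0.6

trials = [sample_test_outcome(15.0) for _ in range(100000)]
p_success = sum(ok for _, ok in trials) / len(trials)
print(f"P(success) ~ {p_success:.3f}")  # 0.2 x (0.5 + 0.5 x 0.6) = 0.16
```

The tree's overall chance of success, 0.2 × (0.5 + 0.5 × 0.6) = 0.16, matches the mean CoS given in Appendix B.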
Bidders’ companion error functions for each of the four variables are expressed as ratios of estimate/actual. Correlations are
represented between Test and Disc error functions and between Disc and NOB error functions (See Appendix B for details). The
spreadsheet MCS tool was again used to generate 100k sample recordsets.
A NoSales variable specifies the number of MCS trials in a computation run.
Appendix B provides further details about the assumptions.

First Stage (Test Cost 1)
    Fails (0.8): Drop
    Success (0.2):
        No additional confirmation needed (0.5): Success
        Need additional confirmation (0.5): Second Stage (Test Cost 2)
            Success (0.6)
            Fails (0.4): Drop
Fig. 4⎯Probability tree to determine costs in a simple test appraisal model.

Process
This simplified process outline follows the flow diagrammed in Fig. 3. If any variable names are unclear, the Nomenclature
section describes the naming structure components.
1. Generate the Data Cloud
a. Sample a population recordset and an error functions recordset (for us). Calculate our estimates of the four
parameters. Calculate OurEMVest. If OurEMVest > 0 then proceed; otherwise repeat this step.
b. Sample error function recordsets for NOBactual. Calculate BidderEMVest for each bidder.
2. Filter the Data Cloud to pass only those combined recordsets where OurEMVest ≅ OurProjEMVest and OurNOBest ≅
OurProjNOBest. This is a brute-force way to do Bayesian calculations.
3. Calculate the BF for each bidder and his bid amount. Determine HighBidder and WinningBid. Record OurEMVnet,
reflecting whether or not we won.
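Step 2 amounts to a conditional filter over the simulated data cloud. A minimal sketch, with illustrative tolerances and record field names:

```python
def filter_cloud(records, our_emv_est, our_nob_est,
                 emv_tol=5.0, nob_tol=1.0):
    """Brute-force Bayesian conditioning: keep only cloud records whose
    simulated estimates land near our actual project estimates.
    records is a list of dicts; tolerances are illustrative choices."""
    return [r for r in records
            if abs(r["OurEMVest"] - our_emv_est) <= emv_tol
            and abs(r["OurNOBest"] - our_nob_est) <= nob_tol]

# toy cloud of three simulated records, $million and bidder counts
cloud = [{"OurEMVest": 28.0, "OurNOBest": 6.8},
         {"OurEMVest": 55.0, "OurNOBest": 7.1},
         {"OurEMVest": 31.5, "OurNOBest": 6.2}]
matches = filter_cloud(cloud, our_emv_est=30.0, our_nob_est=7.0)
print(len(matches))  # 2 (the $55 million record is screened out)
```

The surviving records approximate the posterior distribution of auction outcomes given our estimates, without any symbolic Bayesian algebra.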
This description omits several computational efficiencies in the program design. There are more details in the appendices. Most
calculation procedures and charts were done using MATLAB®, a product of The Mathworks, Inc.

Sample Results and Discussion


Optimizer’s Curse
Loading the population and error function recordsets produced the statistics in Table 1. These calculations simply paired the 100k
population recordsets with the 100k error function recordsets; they are not from an auction simulation.
Test and Disc are related (Pearson correlation coefficient = 0.75) through the Test cost model. NOB is correlated to Disc, and
it is this relationship that causes the 0.073 correlation to EMVactual.
Note the optimizer’s curse effect: average EMVactual = $18 million. Average EMVestimate is $51.3 million for the 54% of
estimates that are positive, but average EMVactual is only $30.8 million when EMVestimate > 0. Thus, the screening in
portfolio selection (requiring only that EMVest be positive, the “EMV decision rule”) causes $20.5 million of bias. Much of this is
explained by the sampling error bias (SEB) described earlier.

Table 1: Statistics from loading the 100k parent population sample recordsets and 100k error function recordsets.

    Avg. EMVactual:                 17.9833   (includes negatives)
    Avg. EMVest:                    18.0230   (includes negatives)
    No. records where EMVest > 0:   53,733 of 100,000
    Avg. EMVest | EMVest > 0:       51.2817
    Avg. EMVactual | EMVest > 0:    30.7580   (reduction is the optimizer’s curse)
    Avg. NOBactual:                  7.0000
    Avg. NOBest:                     6.9991

    Population Actuals:
        Correlation CoS : Test  = 0.001
        Correlation CoS : Disc  = 0.000
        Correlation Test : Disc = 0.748
        Correlation NOB : EMV   = 0.073

    Error Functions:
        Correlation CoS : Test  = 0.002
        Correlation CoS : Disc  = 0.000
        Correlation Test : Disc = 0.394
        Correlation Disc : NOB  = 0.389

Determining Bid Fraction Formula


Over half the modeling effort was spent determining the BF function. A linear equation in BidderEMVest and BidderNOBest
was found to reasonably fit the results:

BF = a + b × BidderEMVest + c × BidderNOBest .................................................. (3)
Fig. 5a is a scatter diagram showing combinations of the a, b and c coefficients where mean WinnerEMVnet ≅ $0. I was surprised to see that a
family of points—in a plane—would satisfy the mean BidderEMVnet = 0 equilibrium condition. Fig. 5b shows the same figure
rotated to view these points from approximately the plane edge. This is easier to see when sitting at the PC and controlling the
chart orientation with a mouse. A set of good coefficients was obtained by fitting the simulation data to a plane using MS Excel’s
Solver (minimizing mean absolute distance). This equation is the result:
BF = 0.58 + 0.0025 × BidderEMVest + 0.005 × BidderNOBest .................................................. (4)
The approximate plane of the data is illustrated in Fig. 5c. The simulation data appear to have a lower slope than the plane
generated from the regressed coefficients. This is likely because points were frequency-weighted in the regression.
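For illustration, Eqs. 3 and 4 as code. The additive form with these coefficient magnitudes follows Eq. 4; treat the exact signs on the b and c terms as an assumption of this sketch:

```python
def bid_fraction(emv_est, nob_est, a=0.58, b=0.0025, c=0.005):
    """Linear BF function of Eq. 3 with the fitted coefficients of Eq. 4.
    emv_est is BidderEMVest ($million); nob_est is BidderNOBest."""
    return a + b * emv_est + c * nob_est

def bid_amount(emv_est, nob_est):
    """Eq. 1 rearranged: Bid Amount = BF x Value Estimate."""
    return bid_fraction(emv_est, nob_est) * emv_est

# e.g., a $30 million EMV estimate against an estimated 7 other bidders
print(round(bid_fraction(30.0, 7.0), 4))  # 0.69
print(round(bid_amount(30.0, 7.0), 4))    # 20.7 ($million)
```

Each simulated bidder would evaluate this function with his own BidderEMVest and BidderNOBest, per the symmetric-bidder assumption.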

Fig. 5a (left)⎯3D scatter plot of the a, b and c combinations where mean OurEMVnet ≅ $0. Fig. 5b (right)⎯Same 3D scatter
rotated so as to view the points from approximately an edge view of the plane.

Fig. 5c⎯ The BF parameters a, b and c approximately lie on a plane. Shown are simulation data (uneven blue, larger circles) among a
pattern grid plane of green, smaller circles.

Checking BF determination. I anticipated that NOB would be important in determining the BF. This does not appear to be the
case, and I was somewhat reassured to find that a similar result was obtained by Zinn and Richter (1978). Nonetheless, this
behavior was unexpected and deserves more investigation. Fig. 6 shows OurEMVnet is optimal and approximately $0 with the BF
determined by the formula. The optimum was closer to $0 with data from the 1k simulation trials. However, the chart represents 10k
trials, and the difference appears due to sampling error. The model assumes all bidders are using the same equation and will insert
their respective estimates for ProjEMVest and ProjNOBest.

Specific Project Estimates


Fig. 7 shows how OurEMVnet changes with different OurProjEMVest values. Interestingly, if the model is believed, it appears that our
only chance of a positive OurEMVnet is for very high-value projects.


Fig. 6⎯Validating the BF function formula. This chart shows that Eq. 4 is nearly optimal.


Fig. 7⎯Changes in OurEMVnet and P(win) vs. OurProjEMVest.

Better Information Case


What if we have better information than our bidding competitors? Another case was run with OurEMVest = $30 million and with
the error function deviations from 1 halved (e.g., 1.2 becomes 1.1). OurEMVnet then improves to −$0.064 million from −$0.875
million, even though P(win) drops to 0.019 from 0.059.

Summary
This paper describes an asset appraisal and auction modeling approach that some readers may find useful. SEB is an important
phenomenon, and this evaluation bias can be adjusted out by characterizing the population from which an identified project has
been drawn and applying the bidder’s own error functions.
No attempt has been made here to extend auction theory. The author merely hopes that some readers will find the calculation
approach interesting and perhaps suitable for application. MCS provides an accessible modeling means to recognize and calculate
with uncertainties. Specific real-world circumstances can be incorporated into the auction model in straightforward ways.
It is very difficult to make money at auctions. Beware the winner’s curse. If the auction is efficient and all bidders are identical
except for individual parameter assessments and random evaluation errors, then WinnerEMVnet ≅ $0.
Special private information or better parameter assessment precision will provide a big competitive advantage and, in some
cases, the opportunity to make acquisitions with positive OurEMVnet’s.

Nomenclature
Parameters:
CoS = chance of success
Test = NPV (after-tax) of cost to test and confirm commerciality
Disc = NPV (after-tax) of revenues net of operating and developing expenditures for success
NOB = number of other bidders
and the calculation result
EMV = expected monetary value = expected value NPV
are kernels in named variables with prefixes:
Our = us or our company
Other = another bidder
Bidder = any bidder (which includes us)
Winner = winning bidder
EA_ = estimate / actual error function
Proj = project, referring to the specific asset or project being auctioned
and suffixes
actual = value from the project parent population*
est = estimate

Other abbreviations and symbols:


E/A = estimate / actual ratio, characterizing an evaluation error function
p.d.f. = probability density function (a.k.a., probability distribution)
ρ Sp = Spearman rank correlation
MCS = Monte Carlo simulation
NPV = net present value of a cash flow stream
SEB = sampling error bias
µ = mean of a p.d.f.
σ = standard deviation of a p.d.f.

* The actual suffix is a misnomer. A project outcome will be binary, either success or failure. Actual
as used here refers to the risked value in the parent population parameters. The auction model did not
simulate the binary project events, to improve speed of convergence. The mean EMV is unaffected.

References

Capen, E.C., Clapp, R.V., and Campbell, W.M. 1971. “Competitive Bidding in High-Risk Situations,” J. Pet Tech 23, 641-653.
Klemperer, Paul. 2004. Auctions: Theory and Practice, Princeton U. Press.
Milgrom, Paul R. 2004. Putting Auction Theory to Work, Cambridge U. Press.
Schuyler, John R. and Nieman, Timothy N. 2008. “Optimizer’s Curse: Removing the Effect of this Bias in Portfolio Planning,”
SPE Proj Fac & Const, Mar. 2008.
Smith, James E. and Winkler, Robert L. 2006. “The Optimizer’s Curse: Skepticism and Postdecision Surprise in Decision Analysis,”
Management Science (March) 52, No. 3, 311-321.
Thaler, Richard. H. 1994. Winner's Curse: Paradoxes and Anomalies of Economic Life, Princeton U. Press.
Zinn, C.D. and Richter, J.P. 1978. “BIDSIM–A Simulation Model for Investigating Group Optimal Bidding Strategies for Oil and
Gas Leases,” SPE 6577.

APPENDIX A⎯SCRIPT LOGIC FOR THE AUCTION SIMULATION


Loop for Sales = 1 to NoSales
    Loop until a satisfactory project and, if specified, a close
            OurProjEMVest match (for us)
        Sample a record containing CoSactual, TestActual, DiscActual
            and NOBactual from the population.
        Calculate EMVactual
        If OurProjEMVest > 0 then
            Sample a record for our error function for CoS, Test, Disc and NOB.
            Calculate OurEMVest
            If OurEMVest = 0 then
                If EMVactual > -5 then exit loop
            Else
                Determine OurEMVest
                Sample a record for BidderEA’s
                Calculate BidderEMVest and BidderNOBest
                If close match to OurProjEMVest then exit loop
            Endif
        Endif
    Endloop to sample project

    Loop for NOB = 1 to 1+NOBactual
        If OurProjEMVest > 1 then
            Use the BidderEA’s from the prior loop for Bidder #1 (Us)
        Else
            Determine BidderEMVest
            Sample a record for BidderEA’s
            Calculate BidderEMVest and BidderNOBest
        Endif
    Endloop for NOB

    Simulate the Auction:
    Loop for Bidder = 1 to 1+NOBactual
        Calculate BF from BidderEMVest and BidderNOBest
        Calculate Bid = BF × BidderEMVest
        Store HighBidder’s values
    Endloop

    Write our outcome:
    If HighBidder = 1 (Us) then
        OurEMVnet = -TestActual + CoSactual × DiscActual − WinningBid
    Else
        OurEMVnet = 0
    Endif
Endloop for Sales

APPENDIX B⎯DETAILED ASSUMPTIONS


Parent Population
CoS = beta distribution with α = 2, β = 10 (µ = 0.16)
Test = from the model shown as Fig. 4. µ = $30 million. Test Cost 1 = Test Cost 2 = 1/11 × Disc × noise
where noise = 1 + normal distribution with µ = 0, σ = 0.2, truncated at ± 0.3
Disc = lognormal distribution with µ = 301.874, σ = 115.2, truncated at 767.15 (µ = $300 million)
NOB = Poisson distribution with µ = 7, correlated to Disc with ρ S = 0.4
Error (estimate / actual) Functions
EA_CoS = beta distribution with α = 2, β = 8 ranging 0-5 (µ = 1)
EA_Test = beta distribution with α = 2, β = 8 ranging 0-5 (µ = 1)
EA_Disc = beta distribution with α = 2, β = 8 ranging 0-5 (µ = 1) correlated to EA_Test with ρ Sp = 0.4
EA_NOB = beta distribution with α = 2, β = 8 ranging 0-5 (µ = 1) correlated to EA_Disc with ρ Sp = 0.4
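A sketch of drawing one E/A ratio from the scaled beta distribution above. The Spearman correlations between the error functions are omitted in this minimal version:

```python
import random

random.seed(8)

def sample_ea_ratio(rng=random):
    """One estimate/actual draw: beta(alpha=2, beta=8) stretched to the
    range [0, 5], giving mean 5 x 2/(2+8) = 1. Correlation between the
    error functions (Spearman 0.4 in Appendix B) is not modeled here."""
    return 5.0 * rng.betavariate(2, 8)

ratios = [sample_ea_ratio() for _ in range(100000)]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean E/A ratio ~ {mean_ratio:.3f}")  # close to 1.0
```

The bounded beta form keeps every ratio in [0, 5] while preserving the mean of 1 required for objective (unbiased) estimates.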

100k recordsets were generated for the four parent population parameters, and another 100k recordsets were generated for the
four error functions. To help speed convergence:
• Latin hypercube sampling was used, with 1000 layers
• Slight deviations from the true mean values were adjusted out.
• Beta distributions, with definite bounds, were used instead of lognormal distributions.
By far the best way to represent correlations is to model the relationship among variables. An example is the Test cost model,
which was illustrated in Fig. 4. Elsewhere, Spearman rank correlations were used between selected variables—just to incorporate
plausible relationships.
Here are some example distribution figures:
• Fig. B−1 illustrates the result of the Test model.
• Fig. B−2 shows the Poisson distribution for NOB.
• Fig. B−3 illustrates an error function. A sample population parameter times a sample from an error function gives a
sample estimate.
• Fig. B−4 shows a scatter plot of Test and Disc error functions, showing the correlation.

Fig. B−1⎯Distribution of Test cost, $million. Fig. B−2⎯Distribution of NOB.

Fig. B−3⎯Example error function distribution. Fig. B−4⎯Scatterplot showing the correlation between
the error functions for NOB and Disc.
