Part II: Estimation
Günter J. Hitsch
The University of Chicago Booth School of Business
2013
1 / 83
Overview
The Estimation Problem; Structural vs Reduced Form of the Model
Identification
Nested Fixed Point (NFP) Estimators
Unobserved State Variables
  Example: The Bayesian Learning Model
  Model Estimation in the Presence of Unobserved State Variables
  Unobserved Heterogeneity
  Postscript: Sequential vs. Myopic Learning
The MPEC Approach to ML Estimation
Example: Estimation of the Durable Goods Adoption Problem Using NFP and MPEC Approaches
Other Topics
References
2 / 83
The Estimation Problem; Structural vs Reduced Form of the Model
Data
- We observe the choice behavior of N individuals
- For each individual i, define a vector of states and choices for periods t = 0, ..., T_i: Q_i = (x_it, a_it), t = 0, ..., T_i
- The full data vector is Q = (Q_1, ..., Q_N)
- We will initially assume that all individuals are identical and that the components of the state x are observed to us
  - We will discuss the case of (permanent) heterogeneity and unobserved state variables later
4 / 83
Data-generating process
- We assume that the choice data are generated based on the choice-specific value functions

      v_j(x) = u_j(x) + β ∫ w(x') f(x'|x, j) dx'

- We observe action a_it conditional on the state x_it if and only if

      v_k(x_it) + ε_kit ≥ v_j(x_it) + ε_jit   for all j ∈ A, j ≠ k

- The CCP σ_k(x_it) is the probability that the inequality above holds, given the distribution of the latent utility components g(ε)
5 / 83
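With Type I Extreme Value errors, the CCPs take the familiar logit form, σ_k(x) = exp(v_k(x)) / Σ_j exp(v_j(x)). A minimal numerical sketch (the value numbers are hypothetical):

```python
import numpy as np

def ccp(v):
    """CCPs under Type I Extreme Value errors:
    sigma_k(x) = exp(v_k(x)) / sum_j exp(v_j(x))."""
    v = np.asarray(v, dtype=float)
    e = np.exp(v - v.max())          # shift by max(v) for numerical stability
    return e / e.sum()

# Hypothetical choice-specific values at one state x
sigma = ccp([1.0, 0.0, -0.5])
# The probabilities are strictly positive and sum to one
```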
- Using the concept of choice-specific value functions, the predictions of a dynamic discrete choice model can be expressed in the same manner as the predictions of a static discrete choice model
- Therefore, it appears that we can simply estimate each v_j(x) by approximating it using a flexible functional form, e.g. a polynomial or a linear combination of basis functions: v_j(x) ≈ φ_j(x; λ)
- In a static discrete choice model, we typically start directly with a parametric specification of each choice-specific utility function, u_j(x; θ)
- Unlike u_j(x), v_j(x) is not a structural object, but the solution of a dynamic decision process that depends on the model primitives, u_j(x), f(x'|x, a), β, and g(ε)
- Why is this important? Because it affects the questions that the estimated model can answer
6 / 83
- The random utility components are Type I Extreme Value, hence the values v_j(x) can be estimated just as in a standard multinomial logit model
- Suppose we estimate v_j(x) based on data generated from the high/low promotion process discussed in the example in Part I
- We want to evaluate a policy where the price is permanently set at the low price level
- Will knowledge of v_j(x) allow us to answer this question?
7 / 83
[Figure: Purchase probability vs. inventory (small pack size), under the promoted and the regular price]
8 / 83
[Figure: Purchase probability vs. inventory (small pack size), when the price is permanently set at the promoted (low) level]
9 / 83
- Based on the data generated under the price promotion process, the sales volume ratio between promoted and regular price periods is 3.782/0.358 ≈ 10.6
- However, when we permanently lower the price, the sales ratio is only 0.991/0.358 ≈ 2.8
10 / 83
Evaluation of a policy where the price is permanently set at the low price level:

- We are not just changing a component of x (the price), but also the price expectations, f(x'|x, a)
- Hence, v_j(x) will also change to ṽ_j(x)
- Unless consumer choices under the alternative price process are observed in our data, we cannot estimate ṽ_j(x)
- Instead, we must predict ṽ_j(x) using knowledge of the model primitives, u_j(x), f(x'|x, a), β, and g(ε)
- The same problem does not arise in a static discrete choice model (if the static discrete choice model accurately describes consumer behavior)
11 / 83
- (u_j(x), f(x'|x, a), β, g(ε)) is the structural form of the dynamic discrete choice model
- (v_j(x), g(ε)) is the reduced form of the model
- The reduced form describes the joint distribution of the data, but typically cannot predict the causal effect of a marketing policy intervention
- Policy predictions are therefore not only dependent on the statistical properties of the model and model parameters, but also on the behavioral assumptions we make about how decisions are made
- Good background reading on structural estimation and the concept of inferring causality from data that are non-experimentally generated: Reiss and Wolak (2007)
12 / 83
Identification

- All behavioral implications of the model are given by the collection of CCPs {σ_j(x) : j ∈ A, x ∈ X}
- Suppose we have infinitely many data points, so that we can observe the CCPs {σ_j(x)}
  - For example, if X is finite we can estimate σ_j(x) as the frequency of observing j conditional on x
- We assume we know the distribution of the random utility components, g(ε)
- Could we then uniquely infer the model primitives describing behavior from the data: u_j(x), f(x'|x, j), β?
  - I.e., is the structural form of the model identified (in a non-parametric sense)?
14 / 83
Identification

- Hotz and Miller (1993): If ε has a density with respect to the Lebesgue measure on R^(K+1) and is strictly positive, then we can invert the observed CCPs to infer the choice-specific value function differences:

      v_j(x) − v_0(x) = ψ_j^(−1)(σ(x))   for all j ∈ A

- If ε_j is Type I Extreme Value distributed, this inversion has a closed form:

      v_j(x) − v_0(x) = log(σ_j(x)) − log(σ_0(x))

- We see that β = 0, u_0(x) ≡ 0, and u_j(x) ≡ log(σ_j(x)) − log(σ_0(x)) completely rationalize the data!
15 / 83
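A quick numerical check of the logit inversion: generate CCPs from known choice-specific values, invert, and recover the value differences exactly (all numbers hypothetical):

```python
import numpy as np

def logit_ccp(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def hotz_miller_invert(sigma):
    """Recover v_j(x) - v_0(x) from observed CCPs under the Type I EV
    assumption: v_j - v_0 = log(sigma_j) - log(sigma_0)."""
    sigma = np.asarray(sigma, dtype=float)
    return np.log(sigma) - np.log(sigma[0])

v = np.array([0.0, 0.7, -0.3])               # hypothetical values, alternative 0 as reference
diffs = hotz_miller_invert(logit_ccp(v))     # equals v - v[0]
```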
Identication
Theorem
Suppose we know the distribution of the random utility components, g(ε), the consumer's beliefs about the evolution of the state vector, f(x'|x, j), and the discount factor β. Assume that u_0(x) ≡ 0. Let the CCPs, σ_j(x), be given for all x and j ∈ A. Then: (i) We can infer the unique choice-specific value functions, v_j(x), consistent with the consumer decision model. (ii) The utilities u_j(x) are identified for all states x and choices j.
16 / 83
(Non)identification proof

- Define ṽ_j(x) ≡ v_j(x) − v_0(x), known from the Hotz-Miller inversion for all j ∈ A
- The expected value function can be expressed as a function of the data and the reference alternative:

      w(x) = ∫ max_{k∈A} {v_k(x) + ε_k} g(ε) dε
           = ∫ max_{k∈A} {ṽ_k(x) + ε_k} g(ε) dε + v_0(x)
17 / 83
(Non)identification proof

- Choice-specific value of the reference alternative:

      v_0(x) = u_0(x) + β ∫ w(x') f(x'|x, 0) dx'
             = β ∫∫ max_{k∈A} {ṽ_k(x') + ε_k} g(ε) f(x'|x, 0) dε dx'
               + β ∫ v_0(x') f(x'|x, 0) dx'

  (using u_0(x) ≡ 0)
- Given β and f(x'|x, 0), this equation can be solved for v_0
(Non)identification

- The proof of the proposition shows how to calculate the utilities, u_j(x), from the data and knowledge of β and f(x'|x, j)
- The proof also shows that if β' ≠ β or f'(x'|x, j) ≠ f(x'|x, j), then u'_j(x) ≠ u_j(x) in general
- Implications:
  - If either the discount factor or the consumer's beliefs about the state evolution are unknown, the utility function is not identified
  - I.e., the model primitives u_j(x), β, f(x'|x, j) are non-parametrically unidentified
19 / 83
In practice:

- Researchers assume a given discount factor β, calibrated from some overall interest rate r, such that β = 1/(1 + r)
  - This follows from the consumption Euler equation, u'(c_t) = β E_t[(1 + r) u'(c_{t+1})]
  - Typically, a discount factor corresponding to a 5%-10% interest rate is used
- Assume rational expectations: the consumer's subjective belief f(x'|x, j) coincides with the actual transition process of x_t
20 / 83
- A folk theorem: β is identified if there are states that do not affect the current utility but do affect the transition probability of x
- More formally: Suppose there are two states x_1 and x_2 such that u_j(x_1) = u_j(x_2) but f(x'|x_1, j) ≠ f(x'|x_2, j)
- Intuition: Variation in x does not change the current utility but the future expected value, and thus β is identified:

      v_j(x) = u_j(x) + β ∫ w(x') f(x'|x, j) dx'
21 / 83
- The folk theorem above is usually attributed to Magnac and Thesmar (2002), but the actual statement provided in their paper is more complicated
- Define the current value function as the expected difference between (i) choosing action j today, action 0 tomorrow, and then behaving optimally afterwards, and (ii) choosing action 0 today and tomorrow and behaving optimally afterwards:

      U_j(x) ≡ u_j(x) + β ∫ v_0(x') p(x'|x, j) dx' − (u_0(x) + β ∫ v_0(x') p(x'|x, 0) dx')

- Also, define

      V_j(x) ≡ ∫ (w(x') − v_0(x')) p(x'|x, j) dx'
22 / 83
Theorem
Suppose there are states x_1 and x_2, x_1 ≠ x_2, such that U_j(x_1) = U_j(x_2) for some action j. Furthermore, suppose that (V_j(x_1) − V_0(x_1)) − (V_j(x_2) − V_0(x_2)) ≠ 0. Then the discount factor β is identified.

- Note: The assumptions of the theorem require knowledge of the solution of the decision process and are thus difficult to verify
23 / 83
- Is the folk theorem true? Although widely credited to Magnac and Thesmar (2002), I believe this attribution is false
- However, a recent paper by Fang and Wang (2013), "Estimating Dynamic Discrete Choice Models with Hyperbolic Discounting, with an Application to Mammography Decisions," seems to prove the claim in the folk theorem
24 / 83
Examples:

- Yao et al. (2012): Observe consumer choice data across two scenarios
  - Static: Current choice does not affect future payoffs
  - Dynamic: Cell phone customers are initially on a linear usage plan, and are then switched to a three-part tariff; under the three-part tariff, current cell phone usage affects the future per-minute rate
- Any finite horizon problem
  - No proof provided for discrete choices, but (my guess) the statement is true more generally
25 / 83
Dubé, Hitsch, and Jindal (2013)

- Conjoint design to infer product adoption choices
  - Present subjects with forecasts of future states (e.g. prices)
  - Collect stated choice data on adoption timing
- Identification assumption: Subjects take the forecast of future states as given
- Treatments: Manipulations of states that change current period utilities by the same amount in period t = 0 and t > 0
- Allows identification of the discount factor β, or more generally a discount function β(t)
- Intuition: The relative effect on choice probabilities, and hence on choice-specific value differences, at t > 0 versus t = 0 identifies β(t)
26 / 83
Nested Fixed Point (NFP) Estimators
Goal of estimation
- We would like to recover the structural form of the dynamic discrete choice model, (u_j(x), f(x'|x, a), β, g(ε))
- We assume:
  - g(ε) is known
  - The decision makers have rational expectations, and thus f(x'|x, a) is the true transition density of the data
  - Typically we also assume that β is known
- Assume that the utility functions and transition densities are parametric functions indexed by θ: u_j(x; θ) and f(x'|x, a; θ)
- Goal: Develop an estimator for θ
28 / 83
Likelihood function
- The likelihood contribution of individual i consists of the CCPs of the observed actions and the state transition densities:

      l_i(Q_i; θ) = ∏_{t=0}^{T_i} Pr{a_it | x_it; θ} f(x_it | x_{i,t−1}, a_{i,t−1}; θ)

- The full sample likelihood is

      l(Q; θ) = ∏_{i=1}^{N} l_i(Q_i; θ)
29 / 83
- NFP is a nested fixed point estimator
- We employ a maximization algorithm that searches over possible values of θ
  - Given θ, we first solve for w(x; θ) as the fixed point of the integrated Bellman equation
  - Given w(x; θ), we calculate the choice-specific value functions and then the CCPs σ_j(x; θ)
  - This allows us to assemble l(Q; θ)
- The solution of the fixed point w(x; θ) is nested in the maximization algorithm
- This estimator is computationally intensive, as we need to solve for the expected value function at each θ!
30 / 83
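A stylized NFP sketch for a toy model (two states, two actions; the utility specification u_1(x; θ) = θ − x, the transition matrices, and the data are all hypothetical). The inner loop iterates the integrated Bellman equation to a fixed point for each trial θ; the outer loop maximizes the resulting log-likelihood, here by grid search:

```python
import numpy as np

beta = 0.95
X = np.array([0.0, 1.0])                        # two states
F = np.array([[[0.9, 0.1], [0.9, 0.1]],         # F[j, x, x']: hypothetical transitions
              [[0.2, 0.8], [0.2, 0.8]]])

def utilities(theta):
    return np.vstack([np.zeros(2), theta - X])  # u[j, x], with u_0 = 0

def solve_w(theta, tol=1e-10):
    """Inner loop: iterate w(x) <- log sum_j exp(u_j(x) + beta E[w(x')|x,j])."""
    w = np.zeros(2)
    while True:
        v = utilities(theta) + beta * F @ w     # choice-specific values v[j, x]
        w_new = np.log(np.exp(v).sum(axis=0))
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new

def ccps(theta):
    v = utilities(theta) + beta * F @ solve_w(theta)
    e = np.exp(v - v.max(axis=0))
    return e / e.sum(axis=0)                    # sigma[j, x]

def neg_loglik(theta, states, actions):
    return -np.sum(np.log(ccps(theta)[actions, states]))

# Hypothetical data (x_t, a_t); outer loop: grid search over theta
states = np.array([0, 1, 0, 1, 1, 0])
actions = np.array([1, 0, 1, 0, 1, 1])
grid = np.linspace(-3.0, 3.0, 121)
theta_hat = grid[np.argmin([neg_loglik(t, states, actions) for t in grid])]
```

Note that `solve_w` is called once per trial value of θ: this nesting is exactly what makes NFP expensive in realistic state spaces.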
- Separate θ = (θ_u, θ_f) into components that affect the utility functions, u_j(x; θ_u), and the transition densities, f(x'|x, a; θ_f)
- Note the log-likelihood contribution of individual i:

      log(l_i(Q_i; θ)) = Σ_{t=1}^{T_i} log(Pr{a_it | x_it; θ_u, θ_f}) + Σ_{t=2}^{T_i} log(f(x_it | x_{i,t−1}, a_{i,t−1}; θ_f))
31 / 83
Unobserved State Variables
Following Erdem and Keane (1996)

- Product (or service) needs to be consumed or used to fully ascertain its utility
- Examples:
  - Product with unknown flavor or texture
  - Pharmaceutical drug with unknown match value, e.g. effectiveness or side effects
- Learning induces inertia = state dependence in a purely statistical sense
- If true, this has implications for pricing and other marketing actions: Hitsch (2006)
33 / 83
- Learning about an unknown parameter vector ϑ ∈ R^N
- Information acquisition: Sampling (choosing) j ∈ A yields a signal ψ_j ∼ f_j(ψ|ϑ)
- Knowledge: Prior μ_t(ϑ)
34 / 83
- Notation: μ_t ≡ μ_t(ϑ)
- State vector: x_t = (μ_t, z_t)
  - Ignore for now that μ_t is infinite-dimensional in general
- Expected current utility from choosing j:

      u_j(x_t) = ∫∫ u_j(z_t, ϑ, ψ) f_j(ψ|ϑ) μ_t(ϑ) dψ dϑ
35 / 83
- We assume the decision maker is able to anticipate how her knowledge evolves, conditional on the potential information that she may receive in this period
- This allows us to define a corresponding Markov transition probability f(μ_{t+1}|μ_t, a_t)
36 / 83
The normal linear regression model with conjugate priors

- Model: y = Xϑ + ν, ν ∼ N(0, σ_ν² I), with prior ϑ ∼ N(μ_0, Σ_0) and σ_ν² known
- Posterior: ϑ|y ∼ N(μ_n, Σ_n), with

      Σ_n = (Σ_0^(−1) + σ_ν^(−2) Xᵀ X)^(−1)
      μ_n = (Σ_0^(−1) + σ_ν^(−2) Xᵀ X)^(−1) (Σ_0^(−1) μ_0 + σ_ν^(−2) Xᵀ y)

- Can be generalized with an inverse-Wishart prior on Σ, but rarely (never?) used in the extant consumer learning literature
37 / 83
- Consumers make purchase (= consumption) decisions over time, t = 0, 1, ...
- ϑ_j is the mean attribute level (quality) of product j
- Experienced attribute levels are noisy: ψ_jt = ϑ_j + ν_jt, where ν_jt ∼ N(0, σ_ν²)
- Utility: u_j(z_t, ψ_jt) = ψ_jt − r ψ_jt² − P_jt (quadratic in the experienced attribute level, net of the price P_jt)
38 / 83
Information sources
- Assume for notational simplicity that there is only one product with an unknown attribute level, and hence we can drop the j index
- Learning from consumption: ψ_t = ϑ + ν_t, ν_t ∼ N(0, σ_ν²)
- Let t_1, t_2, ... denote the time periods when a consumer receives a consumption signal
- Let H_t = {t_k : t_k < t} be the time periods prior to t when a consumption signal was received, and N_t = |H_t| be the corresponding number of signals
- Mean consumption signal prior to t:

      ψ̄_t = (1/N_t) Σ_{τ∈H_t} ψ_τ
39 / 83
Prior
- Prior in the first decision period: μ_0(ϑ) = N(μ_0, σ_0²)
- Refer back to the slide on the normal linear regression model with conjugate priors, and define:
  - ψ = Xϑ + ν, with X = (1, ..., 1)ᵀ (vector of length N_t)
  - Elements in ψ correspond to the consumption and advertising signals in H_t (order does not matter)
40 / 83
Posterior
- Results on the normal linear regression model show that the prior μ_t = posterior given the signals ψ_τ for τ < t is also normal:

      μ_t(ϑ) = N(μ_t, σ_t²)
      σ_t² = (1/σ_0² + N_t/σ_ν²)^(−1)
      μ_t = (1/σ_0² + N_t/σ_ν²)^(−1) (μ_0/σ_0² + N_t ψ̄_t/σ_ν²)

- Remember: The inverse of the covariance matrix is also called the precision matrix; precisions add up in the normal linear regression model with conjugate priors
- This shows that we can equate the prior in period t with the mean and variance of a normal distribution: μ_t ≐ (μ_t, σ_t²)
41 / 83
Transition of prior
- Conditional on the current prior μ_t ≐ (μ_t, σ_t²), and only in periods t when the consumer receives a consumption signal ψ_t:

      σ_{t+1}² = (1/σ_t² + 1/σ_ν²)^(−1)
      μ_{t+1} = (1/σ_t² + 1/σ_ν²)^(−1) (μ_t/σ_t² + ψ_t/σ_ν²)

- σ_t² evolves deterministically
- μ_{t+1} is normally distributed with mean E(μ_{t+1}|μ_t, a_t) = μ_t
- Note that var(μ_{t+1}|μ_t, a_t) ≠ σ_{t+1}²
42 / 83
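These recursions are easy to verify numerically. A small simulation under assumed values (true quality ϑ = 1, σ_ν = 1, prior N(0, 4)): the posterior variance falls deterministically to (1/σ_0² + N_t/σ_ν²)^(−1), and the posterior mean converges to ϑ:

```python
import numpy as np

def update(mu, sig2, psi, sig2_nu):
    """One-step normal Bayesian update: precisions add, and the new mean
    is the precision-weighted average of prior mean and signal."""
    sig2_new = 1.0 / (1.0 / sig2 + 1.0 / sig2_nu)
    mu_new = sig2_new * (mu / sig2 + psi / sig2_nu)
    return mu_new, sig2_new

rng = np.random.default_rng(0)
theta_true, sig2_nu = 1.0, 1.0
mu, sig2 = 0.0, 4.0                                       # prior N(0, 4)
N = 200
for _ in range(N):
    psi = theta_true + rng.normal(0.0, np.sqrt(sig2_nu))  # psi_t = theta + nu_t
    mu, sig2 = update(mu, sig2, psi, sig2_nu)
```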
Model solution
- Reintroduce the j subscript
- Priors are independent, and μ_t = (μ_1t, ..., μ_Jt)
- Recall that ψ_jt = ϑ_j + ν_jt
- Expected utility:

      u_j(x_t) = E(u_j(z_t, ψ_jt)|μ_t, z_t) = μ_jt − r μ_jt² − r(σ_jt² + σ_ν²) − P_jt

- Together with the transition probabilities f(μ_{j,t+1}|μ_jt, a_t), we have a well-defined dynamic discrete choice model with unique choice-specific value functions characterizing optimal decisions
43 / 83
Estimation
- Conditional on x_t = (z_t, μ_t) we can calculate the CCPs: Pr{a_t = j|z_t, μ_t} = σ_j(z_t, μ_t)
- Data:
  - Q_t = a_t, Q^t = (Q_0, ..., Q_t), and Q = (Q_0, ..., Q_T)
  - z^t = (z_0, ..., z_t) and z = (z_0, ..., z_T)
- The process for z_t is estimated from data in a preliminary estimation step
- The belief states μ_t are unobserved: we need to integrate out the unobserved states from the likelihood function
44 / 83
- Define ν_t = (ν_1t, ..., ν_Jt) and ν = (ν_0, ..., ν_{T−1})
- Conditional on ϑ_j (an estimated parameter) and ν, we know the signals ψ_jt = ϑ_j + ν_jt
- By assumption, μ_j0(ϑ_j) = N(μ_j0, σ_0²)
- Hence, conditional on ϑ and ν, the whole sequence of beliefs can be reconstructed:

      σ_{j,t+1}² = (1/σ_jt² + 1/σ_ν²)^(−1)
      μ_{j,t+1} = (1/σ_jt² + 1/σ_ν²)^(−1) (μ_jt/σ_jt² + ψ_jt/σ_ν²)
45 / 83
Likelihood function
- We know from the discussion above that Pr{a_t = j|z_t, μ_t; θ} = Pr{a_t = j|Q^{t−1}, z^t, ν; θ}
- Integrating out ν:

      l(θ|Q, z) ≡ f(Q|z; θ) = ∫ [∏_{t=0}^{T} Pr{Q_t|Q^{t−1}, z^t, ν; θ}] f(ν; θ) dν

- Likelihood components are conditionally independent across households given ν; the joint likelihood is l(θ|Q, z) = ∏_i l_i(θ|Q_i, z_i)
46 / 83
General approach
1. Find a vector of random variables ν with density p(ν; θ) that allows us to reconstruct the sequence of unobserved states
2. Formulate the likelihood conditional on ν
3. Integrate over ν to calculate the likelihood conditional on the data and θ only

Note:

- In practice, the integral over ν is typically calculated by simulation
47 / 83
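A minimal illustration of steps 1-3 in a deliberately simple setting (one choice observation, the unobservable ν ∼ N(0, 1), and a hypothetical logit choice probability conditional on ν): draw ν^(r), evaluate the conditional likelihood, and average:

```python
import numpy as np

rng = np.random.default_rng(1)

def lik_conditional(a, nu, theta):
    """Hypothetical conditional likelihood: a = 1 is chosen with
    logit probability of theta + nu, given the unobserved draw nu."""
    p = 1.0 / (1.0 + np.exp(-(theta + nu)))
    return p if a == 1 else 1.0 - p

def lik_simulated(a, theta, R=10000):
    # Step 1: draw nu^(r) from its density (here standard normal);
    # Step 2: conditional likelihood; Step 3: average to integrate nu out.
    nus = rng.normal(0.0, 1.0, size=R)
    return float(np.mean([lik_conditional(a, nu, theta) for nu in nus]))
```

At θ = 0 the integrated choice probability is exactly 0.5 by symmetry, which gives a simple check on the simulator.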
Initial conditions
- Assumption so far: The first sample period is the first period when the consumers start buying the products
- What if the consumers made product choices before the sample period?
  - Split the sample periods into two parts, t = 0, ..., T_0 − 1 and t = T_0, ..., T
  - Make some assumption about the prior in period t = 0
  - Simulate the evolution of the priors conditional on the observed product choices and ν for periods t < T_0
  - Conditional on the simulated draw of μ_{T_0}(·), formulate the likelihood using data from periods t = T_0, ..., T
- The underlying assumption is that T_0 is large enough such that the effect of the initial condition in t = 0 vanishes
48 / 83
Unobserved heterogeneity
- The most common methods to incorporate unobserved heterogeneity in dynamic discrete choice models are based on a finite mixture or latent class approach
- Assume there is a finite number of M consumer/household types, each characterized by a parameter θ_m
- Let λ_m be the fraction of consumers of type m in the population
- If there is no correlation between a consumer's type and the initial state, we can define the likelihood contribution for i as

      l_i(Q_i; θ, λ) = Σ_{m=1}^{M} l_i(Q_i; θ_m) λ_m

- Here, λ = (λ_1, ..., λ_{M−1})
49 / 83
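Computationally, the latent class likelihood is just a weighted average of type-specific likelihoods. A sketch with hypothetical numbers (3 individuals, M = 2 types):

```python
import numpy as np

def mixture_loglik(lik_by_type, shares):
    """Log-likelihood of a finite-mixture model:
    l_i(Q_i; theta, lambda) = sum_m l_i(Q_i; theta_m) * lambda_m,
    then summed in logs across individuals.
    lik_by_type: (N, M) array of l_i(Q_i; theta_m); shares: M type shares."""
    lik = lik_by_type @ np.asarray(shares)
    return float(np.sum(np.log(lik)))

L = np.array([[0.2, 0.6],     # hypothetical type-specific likelihoods
              [0.5, 0.1],
              [0.3, 0.3]])
ll = mixture_loglik(L, [0.4, 0.6])
```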
Unobserved heterogeneity
- Matters are more complicated if individuals are systematically in specific states depending on their type
  - For example, preference for a brand may be correlated with product experience or stockpiling of that brand
- In that case the type shares must be allowed to depend on the initial state: λ_m(x_i1)
- An easy solution (Heckman 1981) is to form an auxiliary model for this probability, i.e. make λ_m(x_i1; φ) a parametrically specified function with φ to be estimated
- Example:

      λ_m(x_i1; φ) = exp(x_i1 φ_m) / (1 + Σ_{n=1}^{M−1} exp(x_i1 φ_n)),   for m = 1, ..., M − 1
50 / 83
Continuous types
- We now consider a more general form of consumer heterogeneity, which allows for a continuous distribution of types
- Consumer i's parameter vector γ_i is a function of a common component θ and some idiosyncratic component η_i, which is drawn from a distribution with density φ(η)
  - φ is known, i.e., does not depend on any parameters that are estimated
51 / 83
Continuous types
- Given (η_i, θ), we can compute the choice-specific value functions, calculate the CCPs Pr{a_it|x_it; η_i, θ}, and then obtain the likelihood contribution for individual i:

      l_i(Q_i; θ) = ∫ l_i(Q_i; η, θ) φ(η) dη

- If the integral is high-dimensional, we need to calculate it by simulation, with draws η^(r) ∼ φ(η):

      l_i(Q_i; θ) ≈ (1/R) Σ_{r=1}^{R} l_i(Q_i; η^(r), θ)

- The problem with this approach: instead of just re-solving for the value functions once when we change θ, we need to re-solve for the value function R times
52 / 83
- Ackerberg (2009) proposes a method based on a change of variables and importance sampling to overcome this computational challenge (see Hartmann 2006 for an application)
- Assume γ_i = γ(η_i, θ) (could additionally allow the parameters to be a function of household characteristics)
- γ_i fully summarizes the behavior (choice-specific value functions) of household i
- Let p(γ_i|θ) be the density of γ_i
53 / 83
- Then

      l_i(Q_i; θ) = ∫ l_i(Q_i; η, θ) φ(η) dη
                  = ∫ l_i(Q_i; γ) p(γ|θ) dγ
                  = ∫ l_i(Q_i; γ) [p(γ|θ)/g(γ)] g(γ) dγ

- In the second line we use a change of variables
- In the third line we use importance sampling
54 / 83
- Simulate the likelihood using draws γ^(r) from g:

      l_i(Q_i; θ) ≈ (1/R) Σ_{r=1}^{R} l_i(Q_i; γ^(r)) p(γ^(r)|θ)/g(γ^(r))

- Note that γ^(r) is drawn from g, a distribution that does not depend on the parameter vector θ
- Hence, as we change θ, only the weights p(γ^(r)|θ)/g(γ^(r)) change, but not l_i!
- Intuition behind this approach: Change only the weights on different possible household types, not directly the households, as we search for an optimal θ
- Based on the simulated likelihood function, we can define an SML (simulated maximum likelihood) estimator for θ
55 / 83
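The key computational point: the draws γ^(r) come from g and are held fixed, so l_i(Q_i; γ^(r)) is computed once, and only the weights p(γ^(r)|θ)/g(γ^(r)) are recomputed as θ changes. A self-contained check with stand-in functions (l_i(γ) = γ², p(γ|θ) = N(θ, 1), g = N(0, 2²), all hypothetical): the weighted average should approximate ∫ γ² p(γ|θ) dγ = θ² + 1:

```python
import numpy as np

rng = np.random.default_rng(2)

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# Draw gamma^(r) ONCE from an importance density g that does not depend on theta
gammas = rng.normal(0.0, 2.0, size=200000)
li = gammas ** 2                     # stand-in for l_i(Q_i; gamma), computed once

def simulated_integral(theta):
    # As theta changes, only the weights p(gamma|theta)/g(gamma) are recomputed
    w = normal_pdf(gammas, theta, 1.0) / normal_pdf(gammas, 0.0, 2.0)
    return float(np.mean(w * li))
```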
- Example: A firm launches a product, but is uncertain about its profitability
- The firm's profit from the product, π ∈ {π_L, π_H}, could be either negative, π_L < 0, or positive, π_H > 0
- ρ_t: prior probability that profits are negative, ρ_t = Pr{π = π_L}
- Firm decisions: a_t ∈ {0, 1}
  - a_t = 0 denotes that the firm scraps the product, and receives the payoff 0
  - a_t = 1 denotes that the firm stays in the market, and receives the payoff π
- Assumption: If the firm stays in the market it observes profits and immediately learns the true value of π
56 / 83
- There are no latent payoff terms ε_jt in this model
- Suppose that the firm has information that the product is not profitable, that is,

      E(π|ρ_t) = ρ_t π_L + (1 − ρ_t) π_H < 0
57 / 83
- If the firm only cares about current profits, β = 0, then it should scrap the product: u_1(ρ_t) < 0 = u_0(ρ_t)
- But what if β > 0?
- Let's approach this by considering the impact of the current decision on future information, and the optimal decision that the firm can take based on future information
58 / 83
- If the firm stays in the market and sells the product, it learns the true level of profits
  - Hence, in the next period, t + 1, either ρ_{t+1} = 0 or ρ_{t+1} = 1
  - No more uncertainty about the profit level
- For the case of certainty we can easily calculate the value function:

      v(ρ = 1) = 0
      v(ρ = 0) = π_H/(1 − β)
59 / 83
- Staying yields

      v_1(ρ_t) = ρ_t π_L + (1 − ρ_t) π_H + β(1 − ρ_t) π_H/(1 − β)

  which can satisfy v_1(ρ_t) > 0 = v_0(ρ_t) even though expected current profits are negative

Example

- π_L = −1 and π_H = 1
- Static decision making: the product will be scrapped iff ρ_t ≥ 0.5
- Suppose the firm is forward-looking, β = 0.9. Then the product will be scrapped iff ρ_t ≥ 10/11 ≈ 0.909
60 / 83
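The forward-looking threshold can be verified directly. If the firm stays, it earns the expected current profit and, with probability 1 − ρ_t, learns that the product is good and collects π_H/(1 − β) from the next period on (with probability ρ_t it scraps, value 0). A sketch using the example's numbers (writing p for the prior probability that profits are negative):

```python
# Verify the scrapping threshold in the example: pi_L = -1, pi_H = 1, beta = 0.9
beta, pi_L, pi_H = 0.9, -1.0, 1.0

def v_stay(p):
    expected_current = p * pi_L + (1 - p) * pi_H        # expected profit today
    option_value = beta * (1 - p) * pi_H / (1 - beta)   # learn truth, keep product if good
    return expected_current + option_value

# Static rule: scrap iff expected current profit <= 0, i.e. p >= 0.5
# Dynamic rule: scrap iff v_stay(p) <= 0, i.e. p >= 10/11
```

With p = 0.9 the expected current profit is −0.8, yet staying is optimal: the option value of learning, 0.9 × 0.1 × 10 = 0.9, outweighs the expected loss.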
The MPEC Approach to ML Estimation
Overview
- Su and Judd (2012) propose an alternative approach to obtain the ML estimator for a dynamic decision process
  - Their method also works, and has additional advantages, for games
- They note that the nested fixed point approach is really only a special way of finding the solution of a more general constrained optimization problem
- There will often be better algorithms to solve an MPEC problem, which allows for faster and/or more robust computation of the ML estimate compared to the NFP approach
62 / 83
Implementation details
- Let's be precise about the computational steps we take to calculate the likelihood function
- We calculate the expected value function, w(x; θ), in order to derive the choice probabilities, which allow us to match model predictions and observations
- On our computer, we will use some interpolation or approximation method to represent w
  - The representation will depend on a set of parameters ψ: ψ represents the values of w at specific state points (interpolation), or coefficients on basis functions (approximation)
  - On our computer, ψ completely defines w: w ↔ ψ
63 / 83
Implementation details
- The expected value function satisfies w = Γ(w; θ)
- We can alternatively express this relationship in terms of the approximation parameters: ψ = Γ(ψ; θ)
- Value function iteration on our computer proceeds by computing the sequence ψ^(n+1) = Γ(ψ^(n); θ), given some starting value ψ^(0)
64 / 83
Implementation details
I
Using the expected value function, w , calculate the choice-specic values, Z vj (x; ) = uj (x; ) + w(x0 ; )f (x0 |x, j ; ) dx0 Then calculate the CCPs
I I
The equation above presumes that we use w = (w; ) to calculate the choice-specic values But we could alternatively use some arbitrary guess for w
I I
Let be the parameter vector that summarizes some arbitrary w Let vj (x; , ) be the corresponding choice-specic value function
65 / 83
MPEC approach
- We can use v_j(x; θ, ψ) to calculate the CCPs and then the augmented likelihood function, l(Q; θ, ψ)
- This expression clearly shows what the likelihood function depends on:
  - The parameter values θ
  - The expected value function w (summarized by ψ) describing how the decision maker behaves
- Our assumption of rational behavior imposes that the decision maker does not follow some arbitrary decision rule, but rather the rule corresponding to ψ = Γ(ψ; θ), denoted by ψ = Ψ(θ)
66 / 83
MPEC approach
- We can thus express the likelihood estimation problem in its general form:

      max_{(θ, ψ)} log(l(Q; θ, ψ))
      s.t. ψ − Γ(ψ; θ) = 0

- We solve for the parameter vector (θ, ψ)
- The constraint is an equilibrium constraint, hence the term MPEC (mathematical programming with equilibrium constraints)
- The nested fixed point algorithm is a special way of formulating this problem:

      max_θ log(l(Q; θ)) ≡ log(l(Q; θ, Ψ(θ)))
67 / 83
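A sketch of the MPEC formulation for a toy two-state model (the utility specification u_1 = θ − x, the transition matrices, and the data are all hypothetical): the optimizer searches jointly over (θ, ψ) subject to the Bellman fixed-point constraint, with no inner fixed-point loop.

```python
import numpy as np
from scipy.optimize import minimize

beta = 0.95
X = np.array([0.0, 1.0])
F = np.array([[[0.9, 0.1], [0.9, 0.1]],   # hypothetical transitions F[j, x, x']
              [[0.2, 0.8], [0.2, 0.8]]])
states = np.array([0, 1, 0, 1, 1, 0])     # hypothetical data
actions = np.array([1, 0, 1, 0, 1, 1])

def values(theta, w):
    u = np.vstack([np.zeros(2), theta - X])   # u_0 = 0, u_1 = theta - x
    return u + beta * F @ w                   # choice-specific values v[j, x]

def neg_aug_loglik(z):
    """Augmented likelihood l(Q; theta, psi): psi = z[1:] is a free guess of w."""
    v = values(z[0], z[1:])
    sigma = np.exp(v - v.max(axis=0))
    sigma /= sigma.sum(axis=0)
    return -np.sum(np.log(sigma[actions, states]))

def bellman_residual(z):
    """Equilibrium constraint psi - Gamma(psi; theta) = 0."""
    v = values(z[0], z[1:])
    return z[1:] - np.log(np.exp(v).sum(axis=0))

res = minimize(neg_aug_loglik, x0=np.zeros(3), method="SLSQP",
               constraints={"type": "eq", "fun": bellman_residual},
               options={"maxiter": 500})
```

At the solution the constraint holds, so the reported ψ is the expected value function implied by the estimated θ; the solver, not a nested loop, enforces the fixed point.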
Discussion
- We avoid having to find an exact solution of the value function, ψ = Γ(ψ; θ), for each θ: a speed advantage
- The MPEC formulation allows for more robust convergence to the solution, because derivatives (of the objective and constraint) are easier to compute than in the NFP approach
68 / 83
Discussion
- Isn't a large state space with many elements (say 100,000) an obvious obstacle to using the MPEC approach?
  - Good solvers, such as SNOPT or KNITRO, can handle such problems
  - This will require that we are able to compute the Jacobian of the constraint, ∂Γ(ψ; θ)/∂ψ, which is an L × L matrix, where L is the number of elements in ψ
  - It will also require that this Jacobian is sparse
69 / 83
Discussion
- Analytic derivatives can be cumbersome and error-prone
- Automatic differentiation (AD), available in C, C++, FORTRAN, and MATLAB, provides derivatives with virtually no extra programming effort
  - Works fine for small-to-medium scale problems
  - May be difficult to implement (computer speed/memory requirements) given currently available AD software for large-scale problems
70 / 83
Example: Estimation of the Durable Goods Adoption Problem Using NFP and MPEC Approaches
Overview
- Data are generated from the durable goods adoption model with two products discussed in Part I of the lecture
- We solve for w using bilinear interpolation
  - Exercise: Re-write the code using Chebyshev approximation, which is much more efficient in this example
72 / 83
Code documentation
Main.m
- Defines model parameters and price process parameters
- Sets values for the interpolator and initializes Gauss-Hermite quadrature information
- If create_data=1, solves the decision process given the parameters and simulates a new data set: calls simulate_price_process to simulate prices and simulate_adoptions to simulate the corresponding adoption decisions
- Plots aggregate adoption data based on the output of calculate_sales
- Uses the script display_DP_solution to show choice probabilities and relative choice-specific value functions, v_j(x) − v_0(x)
73 / 83
Code documentation
- Based on iteration on the integrated value function to solve for the expected value function
- Bellman_equation_rhs updates the current guess of the expected value function
- Bellman_equation_rhs uses interpolate_2D when taking the expectation of the future value
75 / 83
Code documentation
NFP algorithm

- Calls log_likelihood, which solves for the expected value function to calculate the CCPs

MPEC algorithm

- Calls log_likelihood_augmented with w supplied as parameters
- Bellman_equation_constraint implements the equilibrium constraint, ψ − Γ(ψ; θ) = 0
76 / 83
Other Topics
Omitted topics
1. Two-step estimators: Attempt to alleviate the computational burden inherent in NFP and MPEC estimation approaches (e.g. Pesendorfer and Schmidt-Dengler 2008) 2. Bayesian estimation using a new algorithm combining MCMC and value function iteration (Imai, Jain, and Ching 2009; Norets 2009)
78 / 83
References
- Ackerberg, D. A. (2009): "A New Use of Importance Sampling to Reduce Computational Burden in Simulation Estimation," Quantitative Marketing and Economics, 7(4), 343-376
- Aguirregabiria, V., and Mira, P. (2010): "Dynamic Discrete Choice Structural Models: A Survey," Journal of Econometrics, 156, 38-67
- Arcidiacono, P., and P. E. Ellickson (2011): "Practical Methods for Estimation of Dynamic Discrete Choice Models," Annual Review of Economics, 3, 363-394
- Dubé, J.-P., Hitsch, G. J., and Jindal, P. (2013): "The Joint Identification of Utility and Discount Functions from Stated Choice Data: An Application to Durable Goods Adoption," manuscript
- Fang, H., and Y. Wang (2013): "Estimating Dynamic Discrete Choice Models with Hyperbolic Discounting, with an Application to Mammography Decisions," manuscript
- Heckman, J. J. (1981): "The Incidental Parameters Problem and the Problem of Initial Conditions in Estimating a Discrete Time-Discrete Data Stochastic Process," in C. Manski and D. L. McFadden (eds.): Structural Analysis of Discrete Data with Econometric Applications. MIT Press
80 / 83
- Hotz, V. J., and Miller, R. A. (1993): "Conditional Choice Probabilities and the Estimation of Dynamic Models," Review of Economic Studies, 60, 497-529
- Magnac, T., and Thesmar, D. (2002): "Identifying Dynamic Discrete Decision Processes," Econometrica, 70(2), 801-816
- Reiss, P. C., and F. A. Wolak (2007): "Structural Econometric Modeling: Rationales and Examples from Industrial Organization," in Handbook of Econometrics, Vol. 6A, ed. by J. J. Heckman and E. E. Leamer. Elsevier B. V.
- Rust, J. (1987): "Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher," Econometrica, 55(5), 999-1033
- Rust, J. (1994): "Structural Estimation of Markov Decision Processes," in Handbook of Econometrics, Vol. IV, ed. by R. F. Engle and D. L. McFadden. Amsterdam and New York: North-Holland
- Su, C.-L., and K. L. Judd (2012): "Constrained Optimization Approaches to Estimation of Structural Models," Econometrica, 80(5), 2213-2230
- Yao, S., C. F. Mela, J. Chiang, and Y. Chen (2012): "Determining Consumers' Discount Rates with Field Studies," Journal of Marketing Research, 49(6), 822-841
81 / 83
- Aguirregabiria, V., and Mira, P. (2002): "Swapping the Nested Fixed Point Algorithm: A Class of Estimators for Discrete Markov Decision Models," Econometrica, 70(4), 1519-1543
- Aguirregabiria, V., and Mira, P. (2007): "Sequential Estimation of Dynamic Discrete Games," Econometrica, 75(1), 1-53
- Arcidiacono, P., and Miller, R. (2008): "CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity," manuscript
- Bajari, P., L. Benkard, and J. Levin (2007): "Estimating Dynamic Models of Imperfect Competition," Econometrica, 75(5), 1331-1370
- Imai, S., Jain, N., and Ching, A. (2009): "Bayesian Estimation of Dynamic Discrete Choice Models," Econometrica, 77(6), 1865-1899
- Norets, A. (2009): "Inference in Dynamic Discrete Choice Models With Serially Correlated Unobserved State Variables," Econometrica, 77(5), 1665-1682
82 / 83
Other Applications
- Erdem, T., and M. P. Keane (1996): "Decision-Making Under Uncertainty: Capturing Dynamic Brand Choice Processes in Turbulent Consumer Goods Markets," Marketing Science, 15(1), 1-20
- Hartmann, W. R. (2006): "Intertemporal Effects of Consumption and Their Implications for Demand Elasticity Estimates," Quantitative Marketing and Economics, 4, 325-349
- Hartmann, W. R., and Viard, V. B. (2008): "Do Frequency Reward Programs Create Switching Costs? A Dynamic Structural Analysis of Demand in a Reward Program," Quantitative Marketing and Economics, 6(2), 109-137
- Hitsch, G. J. (2006): "An Empirical Model of Optimal Dynamic Product Launch and Exit Under Demand Uncertainty," Marketing Science, 25(1), 25-50
- Misra, S., and Nair, H. (2011): "A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation," Quantitative Marketing and Economics, 9, 211-257
83 / 83