
Making Decisions under Model Misspecification

Posted by Fabio Angelo Maccheroni
Publication date: 2020
Research field: Economics
Paper language: English





We use decision theory to confront uncertainty that is sufficiently broad to incorporate models as approximations. We presume the existence of a featured collection of what we call structured models that have explicit substantive motivations. The decision maker confronts uncertainty through the lens of these models, but also views these models as simplifications, and hence, as misspecified. We extend max-min analysis under model ambiguity to incorporate the uncertainty induced by acknowledging that the models used in decision-making are simplified approximations. Formally, we provide an axiomatic rationale for a decision criterion that incorporates model misspecification concerns.
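The criterion axiomatized in this line of work is of the variational type: each prior is penalized by its divergence from the set of structured models. A minimal sketch of such a criterion, in our own notation and assuming a relative-entropy penalty (the paper's exact statement may differ):

    V(f) \;=\; \min_{p \in \Delta(S)} \Big\{ \int_S u(f)\, \mathrm{d}p \;+\; \lambda \min_{q \in Q} R(p \,\|\, q) \Big\}

Here Q is the set of structured models, R(p||q) is the relative entropy of p with respect to q, and lambda > 0 indexes the concern for misspecification: as lambda grows the criterion approaches max-min expected utility over Q alone, while smaller lambda lets unstructured alternatives near Q matter.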




Read also

We propose a new approach for solving a class of discrete decision-making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single action or a series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge of the possible consequences of his decisions; this knowledge is generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose a different approach, based on the geometric intuition of distance. More precisely, we define a goal-independent quasimetric structure on the state space, taking into account both the cost function and the transition probabilities. We then compare precision and computation time with classical approaches.
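One natural way to make this construction concrete (our illustration, not necessarily the authors' exact definition) is to take the quasimetric between two states to be the minimal expected cumulative cost of travelling from one to the other, computed by goal-conditioned value iteration over the cost function and transition probabilities:

    import numpy as np

    def quasimetric(P, c, n_iters=500):
        """Minimal expected cost-to-go between all state pairs.

        P: (A, S, S) array, P[a, s, t] = prob. of moving s -> t under action a
        c: (S, A) array of strictly positive one-step costs
        Returns D with D[s, g] ~ minimal expected cost from s to g;
        D is asymmetric in general, hence a quasimetric rather than a metric.
        """
        A, S, _ = P.shape
        D = np.zeros((S, S))
        for g in range(S):                 # one value iteration per goal state
            V = np.zeros(S)
            for _ in range(n_iters):
                Q = c + np.einsum('ast,t->sa', P, V)   # Q[s, a]
                V = Q.min(axis=1)
                V[g] = 0.0                 # the goal is absorbing at zero cost
            D[:, g] = V
        return D

Because D folds both costs and transition probabilities into a single goal-independent structure, the agent can afterwards act greedily with respect to D(., goal) for any goal, which is the kind of computational saving the abstract alludes to.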
Starting from the Avellaneda-Stoikov framework, we consider a market maker who wants to optimally set bid/ask quotes over a finite time horizon, to maximize her expected utility. The intensities of the orders she receives depend not only on the spreads she quotes, but also on unobservable factors modelled by a hidden Markov chain. We tackle this stochastic control problem under partial information with a model that unifies and generalizes many existing ones under full information, combining several risk metrics and constraints, and using general decreasing intensity functionals. We use stochastic filtering, control, and piecewise-deterministic Markov process theory to reduce the dimensionality of the problem and characterize the reduced value function as the unique continuous viscosity solution of its dynamic programming equation. We then solve the analogous full-information problem and compare the results numerically through a concrete example. We show that the optimal full-information spreads are biased when the exact market regime is unknown, and the market maker needs to adjust for additional regime uncertainty in terms of P&L sensitivity and observed order flow volatility. This effect becomes stronger the longer the waiting time between orders.
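Since the regime is unobservable, the market maker replaces it with its filtered probability. A minimal discrete-time sketch of that filtering step, assuming Markov-modulated Poisson order arrivals (the paper works in continuous time with piecewise-deterministic Markov processes; all names here are ours):

    import numpy as np

    def filter_regime(pi, G, lam, dN, dt):
        """One Bayes update of the hidden-regime probabilities (Euler step).

        pi : (K,) current probabilities of the K hidden regimes
        G  : (K, K) generator matrix of the hidden Markov chain
        lam: (K,) order-arrival intensity in each regime at the current quotes
        dN : number of orders observed during the step
        dt : step length (small, so the Euler prediction stays valid)
        """
        pi = pi + dt * (G.T @ pi)                    # predict: chain dynamics
        lik = (lam * dt) ** dN * np.exp(-lam * dt)   # Poisson likelihood per regime
        post = pi * lik
        return post / post.sum()

The filtered probabilities then stand in for the unknown regime as a state variable, consistent with the dimensionality reduction the abstract describes.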
Attributes provide critical information about the alternatives that a decision-maker is considering. When their magnitudes are uncertain, the decision-maker may be unsure about which alternative is truly the best, so measuring the attributes may help the decision-maker make a better decision. This paper considers settings in which each measurement yields one sample of one attribute for one alternative. When given a fixed number of samples to collect, the decision-maker must determine which samples to obtain, make the measurements, update prior beliefs about the attribute magnitudes, and then select an alternative. This paper presents the sample allocation problem for multiple attribute selection decisions and proposes two sequential, lookahead procedures for the case in which discrete distributions are used to model the uncertain attribute magnitudes. The two procedures are similar but reflect different quality measures (and loss functions), which motivate different decision rules: (1) select the alternative with the greatest expected utility and (2) select the alternative that is most likely to be the truly best alternative. We conducted a simulation study to evaluate the performance of the sequential procedures and hybrid procedures that first allocate some samples using a uniform allocation procedure and then use the sequential, lookahead procedure. The results indicate that the hybrid procedures are effective; allocating many (but not all) of the initial samples with the uniform allocation procedure not only reduces overall computational effort but also selects alternatives that have lower average opportunity cost and are more often truly best.
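For discrete beliefs, the two decision rules are easy to state in code. A sketch, simplified to a single attribute per alternative (the interfaces and names are ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def select(support, probs, utility, n_mc=10_000):
        """Apply the two selection rules to discrete posterior beliefs.

        support: (M, V) possible attribute values for each of M alternatives
        probs  : (M, V) posterior probabilities over those values
        utility: vectorized callable mapping attribute value -> utility
        """
        # Rule 1: pick the alternative with the greatest expected utility.
        rule1 = int(np.argmax((probs * utility(support)).sum(axis=1)))

        # Rule 2: pick the alternative most likely to be truly best,
        # estimated by Monte Carlo over the joint posterior.
        M, _ = support.shape
        draws = np.stack([rng.choice(support[m], size=n_mc, p=probs[m])
                          for m in range(M)])          # (M, n_mc)
        wins = np.bincount(utility(draws).argmax(axis=0), minlength=M)
        rule2 = int(np.argmax(wins))
        return rule1, rule2

The sequential lookahead procedures described above would wrap rules like these, scoring each candidate sample by the expected change in the rule's quality measure after a hypothetical Bayesian update and measuring the best-scoring sample next.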
We propose a framework for estimation and inference when the model may be misspecified. We rely on a local asymptotic approach where the degree of misspecification is indexed by the sample size. We construct estimators whose mean squared error is minimax in a neighborhood of the reference model, based on one-step adjustments. In addition, we provide confidence intervals that contain the true parameter under local misspecification. As a tool to interpret the degree of misspecification, we map it to the local power of a specification test of the reference model. Our approach allows for systematic sensitivity analysis when the parameter of interest may be partially or irregularly identified. As illustrations, we study three applications: an empirical analysis of the impact of conditional cash transfers in Mexico where misspecification stems from the presence of stigma effects of the program, a cross-sectional binary choice model where the error distribution is misspecified, and a dynamic panel data binary choice model where the number of time periods is small and the distribution of individual effects is misspecified.
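A sketch of the local-misspecification device in our own notation (the paper's exact formulation may differ): the true distribution lies in a neighborhood of the reference model {f_theta} that shrinks with the sample size n,

    \pi_0 \in \Big\{ \pi \,:\, d(\pi, f_\theta) \le \tfrac{c}{\sqrt{n}} \ \text{for some } \theta \Big\},
    \qquad
    \hat{\theta} \;=\; \hat{\theta}_{\mathrm{ref}} \;+\; \frac{1}{n} \sum_{i=1}^{n} \psi(y_i),

where theta-hat_ref is the estimator under the reference model and the one-step adjustment psi is chosen to minimize the worst-case mean squared error over the neighborhood. The 1/sqrt(n) rate is what makes bias and variance of the same order, so the minimax trade-off is nondegenerate.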
The log-concave projection is an operator that maps a d-dimensional distribution P to an approximating log-concave density. Prior work by Dümbgen et al. (2011) establishes that, with suitable metrics on the underlying spaces, this projection is continuous, but not uniformly continuous. In this work we prove a local uniform continuity result for the log-concave projection; in particular, we establish that this map is locally Hölder-(1/4) continuous. A matching lower bound verifies that this exponent cannot be improved. We also examine the implications of this continuity result for the empirical setting: given a sample drawn from a distribution P, we bound the squared Hellinger distance between the log-concave projection of the empirical distribution of the sample and the log-concave projection of P. In particular, this yields interesting statistical results for the misspecified setting, where P is not itself log-concave.
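In symbols, and hedging on the exact metrics (the natural reading is a Wasserstein distance d_W on the input side and the Hellinger distance d_H on the output side, following the Dümbgen et al. framework):

    d_{\mathrm{H}}\big(\mathrm{proj}(P), \mathrm{proj}(Q)\big) \;\le\; C_P \, d_{\mathrm{W}}(P, Q)^{1/4}

for all Q in a small enough neighborhood of P, with the exponent 1/4 unimprovable. Applying this with Q the empirical distribution of an i.i.d. sample from P, together with the convergence of d_W between the empirical distribution and P, yields the squared-Hellinger bound described above.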