
Finite-Sample Average Bid Auction

Posted by Haitian Xie
Publication date: 2020
Research field: Economics
Paper language: English
Author: Haitian Xie





The paper studies the problem of auction design in a setting where the auctioneer has access to knowledge of the valuation distribution only through statistical samples. A new framework is established that combines statistical decision theory with mechanism design. Two optimality criteria, maxmin and equivariance, are studied along with their implications for the form of auctions. The simplest equivariant auction is the average bid auction, which sets each bidder's reservation price proportional to the average of the other bids and the historical samples. This form of auction can be motivated by the Gamma distribution, and it sheds new light on the estimation of the optimal price, an irregular parameter. Theoretical results show that it is often possible to approximate the optimal price using the population mean, a regular parameter. An adaptive average bid estimator is developed based on this idea; it has the same asymptotic properties as the empirical Myerson estimator, and it performs significantly better in terms of value at risk and expected shortfall when the sample size is small.
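As background, the reservation-price rule described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's estimator: the proportionality constant `alpha` and the function name are hypothetical, since the paper derives the exact constant from the decision-theoretic model.

```python
import numpy as np

def average_bid_reserve(bids, samples, alpha=1.0):
    """Set bidder i's reservation price proportional to the average of
    the other n-1 bids and the m historical samples.

    `alpha` is a hypothetical proportionality constant; the paper
    derives the actual constant from the model (e.g. via the Gamma
    distribution motivation).
    """
    bids = np.asarray(bids, dtype=float)
    samples = np.asarray(samples, dtype=float)
    n, m = len(bids), len(samples)
    total = bids.sum() + samples.sum()
    # Leave-one-out average for each bidder: exclude that bidder's own bid,
    # average over the remaining n-1 bids plus the m historical samples.
    return alpha * (total - bids) / (n - 1 + m)
```

For example, with bids `[1, 1]` and one historical sample `1`, every bidder's reserve is `1`: the leave-one-out average of the remaining bid and the sample.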


Read also

We provide a finite-sample inference method for the structural parameters of a semiparametric binary response model under a conditional median restriction originally studied by Manski (1975, 1985). Our inference method is valid for any sample size and irrespective of whether the structural parameters are point identified or partially identified, for example due to the lack of a continuously distributed covariate with large support. Our inference approach exploits distributional properties of observable outcomes conditional on the observed sequence of exogenous variables. Moment inequalities conditional on this size-n sequence of exogenous covariates are constructed, and the test statistic is a monotone function of violations of sample moment inequalities. The critical value used for inference is provided by the appropriate quantile of a known function of n independent Rademacher random variables. We investigate power properties of the underlying test and provide simulation studies to support the theoretical findings.
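A critical value of the kind described above, the quantile of a known function of n independent Rademacher variables, can be approximated by direct Monte Carlo. The sketch below is generic: the statistic passed in is a placeholder, since the paper specifies the exact function of the Rademacher draws.

```python
import numpy as np

def rademacher_critical_value(stat_fn, n, alpha=0.05, draws=10000, seed=0):
    """Monte Carlo approximation of the (1 - alpha) quantile of
    stat_fn evaluated on n i.i.d. Rademacher (+/-1) variables.

    `stat_fn` is a hypothetical placeholder for the known function of
    the Rademacher draws specified in the paper.
    """
    rng = np.random.default_rng(seed)
    eps = rng.choice([-1.0, 1.0], size=(draws, n))  # Rademacher draws
    stats = np.apply_along_axis(stat_fn, 1, eps)    # one statistic per draw
    return np.quantile(stats, 1 - alpha)
```

Because the Rademacher distribution is fully known, this quantile requires no estimation of nuisance parameters, which is what makes the test exact in finite samples.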
Economists are often interested in estimating averages with respect to distributions of unobservables, such as moments of individual fixed effects, or average partial effects in discrete choice models. For such quantities, we propose and study posterior average effects (PAE), where the average is computed conditional on the sample, in the spirit of empirical Bayes and shrinkage methods. While the usefulness of shrinkage for prediction is well understood, a justification of posterior conditioning to estimate population averages is currently lacking. We show that PAE have minimum worst-case specification error under various forms of misspecification of the parametric distribution of unobservables. In addition, we introduce a measure of informativeness of the posterior conditioning, which quantifies the worst-case specification error of PAE relative to parametric model-based estimators. As illustrations, we report PAE estimates of distributions of neighborhood effects in the US, and of permanent and transitory components in a model of income dynamics.
The empirical analysis of discrete complete-information games has relied on behavioral restrictions in the form of solution concepts, such as Nash equilibrium. Choosing the right solution concept is crucial not just for identification of payoff parameters, but also for the validity and informativeness of counterfactual exercises and policy implications. We say that a solution concept is discernible if it is possible to determine whether it generated the observed data on the players' behavior and covariates. We propose a set of conditions that make it possible to discern solution concepts. In particular, our conditions are sufficient to tell whether the players' choices emerged from Nash equilibria. We can also discern between rationalizable behavior, maxmin behavior, and collusive behavior. Finally, we identify the correlation structure of unobserved shocks in our model using a novel approach.
We study the rise in the acceptability of fiat money in a Kiyotaki-Wright economy by developing a method that can determine dynamic Nash equilibria for a class of search models with genuinely heterogeneous agents. We also address open issues regarding the stability properties of pure-strategy equilibria and the presence of multiple equilibria. Experiments illustrate the liquidity conditions that favor the transition from partial to full acceptance of fiat money, and the effects of inflationary shocks on production, liquidity, and trade.
This paper introduces the targeted sampling model in optimal auction design. In this model, the seller may specify a quantile interval and sample from a buyer's prior restricted to the interval. This can be interpreted as allowing the seller to, for example, examine the top $40$ percent of bids from previous buyers with the same characteristics. The targeting power is quantified with a parameter $\Delta \in [0, 1]$ which lower bounds how small the quantile intervals can be. When $\Delta = 1$, the model degenerates to Cole and Roughgarden's model of i.i.d. samples; in the idealized case of $\Delta = 0$, it degenerates to the model studied by Chen et al. (2018). For instance, for $n$ buyers with bounded values in $[0, 1]$, $\tilde{O}(\epsilon^{-1})$ targeted samples suffice, while it is known that at least $\tilde{\Omega}(n \epsilon^{-2})$ i.i.d. samples are needed. In other words, targeted sampling with sufficient targeting power allows us to remove the linear dependence on $n$, and to improve the quadratic dependence on $\epsilon^{-1}$ to linear. In this work, we introduce new technical ingredients and show that the number of targeted samples sufficient for learning an $\epsilon$-optimal auction is substantially smaller than the sample complexity of i.i.d. samples for the full spectrum of $\Delta \in [0, 1)$. Even with only mild targeting power, i.e., whenever $\Delta = o(1)$, our targeted sample complexity upper bounds are strictly smaller than the optimal sample complexity of i.i.d. samples.
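The sampling oracle described above can be mimicked on an empirical distribution: restrict to a quantile interval of width at least $\Delta$ and draw from the restricted support. The helper below is a toy illustration with a hypothetical interface, not the paper's oracle.

```python
import numpy as np

def targeted_sample(values, q_lo, q_hi, size, rng=None):
    """Draw `size` samples from the empirical distribution of `values`
    restricted to the quantile interval [q_lo, q_hi].

    Toy version of the targeted sampling oracle: in the model, the
    interval width q_hi - q_lo must be at least the targeting
    parameter Delta. The interface here is illustrative.
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = np.quantile(values, [q_lo, q_hi])
    pool = values[(values >= lo) & (values <= hi)]  # restricted support
    return rng.choice(pool, size=size)
```

For example, `targeted_sample(values, 0.6, 1.0, 50)` corresponds to examining the top $40$ percent of past bids, a quantile interval of width $0.4 \ge \Delta$.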