
Optimal Auction Design with Quantized Bids

Added by Nianxia Cao
Publication date: 2015
Research language: English





This letter considers the design of an auction mechanism for selling a seller's object when the buyers quantize their private value estimates of the object before communicating them to the seller. The designed auction mechanism maximizes the seller's utility (i.e., the auction is optimal), prevents buyers from communicating falsified quantized bids (i.e., the auction is incentive-compatible), and ensures that buyers will participate in the auction (i.e., the auction is individually rational). The letter also investigates the design of the optimal quantization thresholds that buyers use to quantize their private value estimates. Numerical results provide insights into the influence of the quantization thresholds on the auction mechanism.
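To make the setup concrete, the sketch below quantizes private values with a fixed threshold vector and then runs a simple second-price-style rule on the quantized grid. It is a toy illustration under an assumed uniform prior, not the optimal mechanism derived in the letter; the function names, payment rule, and example thresholds are all assumptions.

```python
import numpy as np

def quantize(value, thresholds):
    """Map a private value to the index of its quantization cell.

    `thresholds` must be sorted and cover the value range, e.g.
    [0.0, 0.25, 0.5, 0.75, 1.0]; a value v with thresholds[k] <= v < thresholds[k+1]
    is reported as level k.
    """
    k = int(np.searchsorted(thresholds, value, side="right")) - 1
    return min(max(k, 0), len(thresholds) - 2)

def quantized_auction(values, thresholds, reserve_level=0):
    """Toy single-object auction run on quantized bids.

    Allocation: the object goes to a buyer with the highest quantized level,
    provided that level is at least `reserve_level`.
    Payment: the winner pays the lower threshold of the second-highest level
    (a second-price-style rule on the quantized grid -- an assumed rule,
    not the optimal payment derived in the letter).
    """
    levels = np.array([quantize(v, thresholds) for v in values])
    top = int(levels.max())
    if top < reserve_level:
        return None, 0.0                                # object is not sold
    winners = np.flatnonzero(levels == top)
    winner = int(winners[0])                            # break ties by lowest index
    others = np.delete(levels, winner)
    runner_up = max(int(others.max()), reserve_level) if others.size else reserve_level
    payment = float(thresholds[runner_up])              # pay the cell's lower edge
    return winner, payment

# Example: three buyers with Uniform[0, 1] values and a uniform 4-level quantizer.
rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=3)
thresholds = [0.0, 0.25, 0.5, 0.75, 1.0]
print(quantized_auction(values, thresholds))
```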



Related research

The design of optimal auctions is a problem of interest in economics, game theory and computer science. Despite decades of effort, strategyproof, revenue-maximizing auction designs are still not known outside of restricted settings. However, recent methods using deep learning have shown some success in approximating optimal auctions, recovering several known solutions and outperforming strong baselines when optimal auctions are not known. In addition to maximizing revenue, auction mechanisms may also seek to encourage socially desirable constraints such as allocation fairness or diversity. However, these philosophical notions have neither standardized nor widely accepted formal definitions. In this paper, we propose PreferenceNet, an extension of existing neural-network-based auction mechanisms to encode constraints using (potentially human-provided) exemplars of desirable allocations. In addition, we introduce a new metric to evaluate an auction allocation's adherence to such socially desirable constraints and demonstrate that our proposed method is competitive with current state-of-the-art neural-network-based auction designs. We validate our approach through human subject research and show that we are able to effectively capture real human preferences. Our code is available at https://github.com/neeharperi/PreferenceNet
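As a rough sketch of the exemplar-based constraint idea, the snippet below shows how a small preference scorer could be added as a soft penalty on top of a neural auction's revenue/regret objective. The class name, loss form, shapes, and weighting terms are illustrative assumptions, not PreferenceNet's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class PreferenceScorer(nn.Module):
    """Hypothetical scorer trained on exemplar allocations labelled
    desirable vs. undesirable; higher score = more desirable."""
    def __init__(self, n_bidders, n_items, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bidders * n_items, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, alloc):                 # alloc: (batch, bidders, items)
        return self.net(alloc.flatten(1))

def auction_loss(revenue, regret, alloc, scorer, rho=1.0, lam=0.1):
    """Revenue objective with a regret penalty and an exemplar-based
    preference penalty (a sketch, not the paper's exact loss)."""
    pref_penalty = -scorer(alloc).mean()      # push allocations toward exemplars
    return -revenue.mean() + rho * regret.mean() + lam * pref_penalty
```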
This paper introduces the targeted sampling model in optimal auction design. In this model, the seller may specify a quantile interval and sample from a buyer's prior restricted to that interval. This can be interpreted as allowing the seller to, for example, examine the top $40$ percent of bids from previous buyers with the same characteristics. The targeting power is quantified with a parameter $\Delta \in [0, 1]$ which lower bounds how small the quantile intervals can be. When $\Delta = 1$, the model degenerates to Cole and Roughgarden's model of i.i.d. samples; in the idealized case of $\Delta = 0$, it degenerates to the model studied by Chen et al. (2018). For instance, for $n$ buyers with bounded values in $[0, 1]$, $\tilde{O}(\epsilon^{-1})$ targeted samples suffice, while it is known that at least $\tilde{\Omega}(n \epsilon^{-2})$ i.i.d. samples are needed. In other words, targeted sampling with sufficient targeting power allows us to remove the linear dependence on $n$ and to improve the quadratic dependence on $\epsilon^{-1}$ to linear. In this work, we introduce new technical ingredients and show that the number of targeted samples sufficient for learning an $\epsilon$-optimal auction is substantially smaller than the sample complexity of i.i.d. samples for the full spectrum of $\Delta \in [0, 1)$. Even with only mild targeting power, i.e., whenever $\Delta = o(1)$, our targeted sample complexity upper bounds are strictly smaller than the optimal sample complexity of i.i.d. samples.
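The sampling primitive this model relies on can be illustrated with inverse-CDF sampling restricted to a quantile interval. The sketch below is a toy under stated assumptions (the function names and the uniform-prior example are not from the paper); it only shows how a draw from the prior restricted to a quantile interval could be made, subject to the interval being no narrower than the targeting parameter.

```python
import numpy as np

def targeted_sample(quantile_fn, q_lo, q_hi, delta, size=1, rng=None):
    """Draw samples from a prior restricted to the quantile interval [q_lo, q_hi].

    `quantile_fn` is the inverse CDF of the buyer's prior; the interval width
    must be at least `delta`, mirroring the targeting-power constraint
    (names are illustrative, not the paper's notation).
    """
    if q_hi - q_lo < delta:
        raise ValueError("quantile interval narrower than the targeting power allows")
    rng = rng or np.random.default_rng()
    u = rng.uniform(q_lo, q_hi, size=size)    # uniform over the chosen quantiles
    return quantile_fn(u)                     # map quantiles back to values

# Example: examine the top 40% of a Uniform[0, 1] prior (its inverse CDF is the identity).
print(targeted_sample(lambda u: u, q_lo=0.6, q_hi=1.0, delta=0.25, size=5))
```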
The design of revenue-maximizing auctions with strong incentive guarantees is a core concern of economic theory. Computational auctions enable online advertising, sourcing, spectrum allocation, and myriad financial markets. Analytic progress in this space is notoriously difficult; since Myerson's 1981 work characterizing single-item optimal auctions, there has been limited progress outside of restricted settings. A recent paper by Dutting et al. circumvents analytic difficulties by applying deep learning techniques to, instead, approximate optimal auctions. In parallel, new research from Ilvento et al. and other groups has developed notions of fairness in the context of auction design. Inspired by these advances, in this paper, we extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. Theoretical approaches to the problem have hit some limits in the past decades and analytical solutions are known for only a few simple settings. Computational approaches to the problem through the use of LPs have their own set of limitations. Building on the success of deep learning, a new approach was recently proposed by Duetting et al. (2019) in which the auction is modeled by a feed-forward neural network and the design problem is framed as a learning problem. The neural architectures used in that work are general purpose and do not take advantage of any of the symmetries the problem could present, such as permutation equivariance. In this work, we consider auction design problems that have permutation-equivariant symmetry and construct a neural architecture that is capable of perfectly recovering the permutation-equivariant optimal mechanism, which we show is not possible with the previous architecture. We demonstrate that permutation-equivariant architectures are not only capable of recovering previous results, they also have better generalization properties.
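For intuition about the symmetry in question, the sketch below shows a Deep-Sets-style layer that commutes with permutations of the bidder dimension. It is a generic illustration under assumed tensor shapes, not the architecture constructed in the paper.

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """Permutation-equivariant layer over the bidder dimension
    (a Deep-Sets-style sketch, not the cited work's exact architecture)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.local = nn.Linear(d_in, d_out)   # acts on each bidder separately
        self.pool = nn.Linear(d_in, d_out)    # acts on the bidder-wise mean

    def forward(self, x):                     # x: (batch, n_bidders, d_in)
        return self.local(x) + self.pool(x.mean(dim=1, keepdim=True))

# Permuting the bidders permutes the output rows in the same way.
x = torch.randn(2, 4, 3)
layer = EquivariantLayer(3, 5)
perm = torch.randperm(4)
assert torch.allclose(layer(x)[:, perm], layer(x[:, perm]), atol=1e-6)
```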
We study the limits of an information intermediary in Bayesian auctions. Formally, we consider the standard single-item auction, with a revenue-maximizing seller and $n$ buyers with independent private values; in addition, we now have an intermediary who knows the buyers' true values and can map these to a public signal so as to try to increase buyer surplus. This model was proposed by Bergemann et al., who present a signaling scheme for the single-buyer setting that raises the optimal consumer surplus by guaranteeing the item is always sold while ensuring the seller gets the same revenue as without signaling. Our work aims to understand how this result ports to the setting with multiple buyers. Our first result is an impossibility: we show that such a signaling scheme need not exist even for $n=2$ buyers with $2$-point valuation distributions. Indeed, no signaling scheme can always allocate the item to the highest-valued buyer while preserving any non-trivial fraction of the original consumer surplus; further, no signaling scheme can achieve consumer surplus better than a factor of $\frac{1}{2}$ compared to the maximum achievable. These results are existential (and not computational) impossibilities, and thus provide a sharp separation between the single- and multi-buyer settings. On the positive side, for discrete valuation distributions, we develop signaling schemes with good approximation guarantees for the consumer surplus compared to the maximum achievable, in settings where either the number of agents or the support size of valuations is small. Formally, for i.i.d. buyers, we present an $O(\min(\log n, K))$-approximation where $K$ is the support size of the valuations. Moreover, for general distributions, we present an $O(\min(n \log n, K^2))$-approximation. Our signaling schemes are conceptually simple and computable in polynomial (in $n$ and $K$) time.
