
The Pareto Frontier of Inefficiency in Mechanism Design

Posted by: Yiannis Giannakopoulos
Publication date: 2018
Research field: Information engineering
Paper language: English





We study the trade-off between the Price of Anarchy (PoA) and the Price of Stability (PoS) in mechanism design, in the prototypical problem of unrelated machine scheduling. We give bounds on the space of feasible mechanisms with respect to the above metrics, and observe that two fundamental mechanisms, namely the First-Price (FP) and the Second-Price (SP), lie on the two opposite extrema of this boundary. Furthermore, for the natural class of anonymous task-independent mechanisms, we completely characterize the PoA/PoS Pareto frontier; we design a class of optimal mechanisms $\mathcal{SP}_\alpha$ that lie exactly on this frontier. In particular, these mechanisms range smoothly across the frontier, with respect to the parameter $\alpha \geq 1$, between the First-Price ($\mathcal{SP}_1$) and Second-Price ($\mathcal{SP}_\infty$) mechanisms. En route to these results, we also provide a definitive answer to an important question related to the scheduling problem, namely whether non-truthful mechanisms can provide better makespan guarantees in equilibrium, compared to truthful ones. We answer this question in the negative, by proving that the Price of Anarchy of all scheduling mechanisms is at least $n$, where $n$ is the number of machines.
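To give a feel for how a single parameter can interpolate between the two extreme mechanisms, here is a minimal single-task sketch. The payment rule used, $\min(\alpha \cdot \text{lowest bid}, \text{second-lowest bid})$, is our own illustrative assumption of one natural interpolation (it recovers first-price at $\alpha = 1$ and second-price as $\alpha \to \infty$); it is not necessarily the paper's exact definition of $\mathcal{SP}_\alpha$.

```python
import random

def sp_alpha_payment(bids, alpha):
    """Allocate one task to the lowest bidder and charge an
    interpolated payment: min(alpha * lowest, second-lowest).
    alpha = 1 gives First-Price; alpha -> infinity gives
    Second-Price. (Illustrative rule assumed for this sketch.)"""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winner, runner_up = order[0], order[1]
    payment = min(alpha * bids[winner], bids[runner_up])
    return winner, payment

# The two extremes of the spectrum on a random bid vector.
bids = [round(random.uniform(1, 10), 2) for _ in range(4)]
for alpha in (1.0, 2.0, float("inf")):
    w, p = sp_alpha_payment(bids, alpha)
    print(f"alpha={alpha}: machine {w} wins, is paid {p:.2f}")
```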


Read also

For i.i.d. $d$-dimensional observations $X^{(1)}, X^{(2)}, \ldots$ with independent Exponential$(1)$ coordinates, consider the boundary (relative to the closed positive orthant), or frontier, $F_n$ of the closed Pareto record-setting (RS) region \[ \mbox{RS}_n := \{0 \leq x \in \mathbb{R}^d : x \not\prec X^{(i)} \mbox{ for all } 1 \leq i \leq n\} \] at time $n$, where $0 \leq x$ means that $0 \leq x_j$ for $1 \leq j \leq d$ and $x \prec y$ means that $x_j < y_j$ for $1 \leq j \leq d$. With $x_+ := \sum_{j = 1}^d x_j$, let \[ F_n^- := \min\{x_+ : x \in F_n\} \quad \mbox{and} \quad F_n^+ := \max\{x_+ : x \in F_n\}, \] and define the width of $F_n$ as \[ W_n := F_n^+ - F_n^-. \] We describe typical and almost sure behavior of the processes $F^+$, $F^-$, and $W$. In particular, we show that $F^+_n \sim \ln n \sim F^-_n$ almost surely and that $W_n / \ln \ln n$ converges in probability to $d - 1$; and for $d \geq 2$ we show that, almost surely, the set of limit points of the sequence $W_n / \ln \ln n$ is the interval $[d - 1, d]$. We also obtain modifications of our results that are important in connection with efficient simulation of Pareto records. Let $T_m$ denote the time that the $m$th record is set. We show that $F^+_{T_m} \sim (d!\, m)^{1/d} \sim F^-_{T_m}$ almost surely and that $W_{T_m} / \ln m$ converges in probability to $1 - d^{-1}$; and for $d \geq 2$ we show that, almost surely, the sequence $W_{T_m} / \ln m$ has $\liminf$ equal to $1 - d^{-1}$ and $\limsup$ equal to $1$.
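These quantities are straightforward to explore by simulation. The sketch below is ours, not the paper's code: it draws $n$ points with i.i.d. Exponential$(1)$ coordinates and computes the Pareto records (points not strictly dominated in every coordinate). Note the simplification: we use the record points as a proxy for the frontier, whereas the paper's $F_n^-$ is defined over the whole staircase boundary, so the printed spread is only a rough stand-in for $W_n$.

```python
import numpy as np
rng = np.random.default_rng(0)

def pareto_records(points):
    """Return the Pareto maxima of a point cloud: points not strictly
    dominated in every coordinate by any other point."""
    keep = []
    for p in points:
        if not np.any(np.all(points > p, axis=1)):
            keep.append(p)
    return np.array(keep)

d, n = 3, 10_000
X = rng.exponential(1.0, size=(n, d))
sums = pareto_records(X).sum(axis=1)
# Max record sum approximates F_n^+ ~ ln n; the min record sum is only
# a crude stand-in for F_n^- (the true frontier dips below the records).
print("max record sum:", round(sums.max(), 3), " vs ln n =", round(np.log(n), 3))
print("record-sum spread / lnln n:", round((sums.max() - sums.min()) / np.log(np.log(n)), 3))
```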
Nearly fifteen years ago, Google unveiled the generalized second price (GSP) auction. By all theoretical accounts, including their own [Varian 14], this was the wrong auction --- the Vickrey-Clarke-Groves (VCG) auction would have been the proper choice --- yet GSP has succeeded spectacularly. We give a deep justification for GSP's success: advertisers' preferences map to a model we call value maximization; they do not maximize profit as the standard theory assumes. For value maximizers, GSP is the truthful auction [Aggarwal 09]. Moreover, this implies an axiomatization of GSP --- it is an auction whose prices are truthful for value maximizers --- that can be applied much more broadly than the simple model for which GSP was originally designed. In particular, applying it to arbitrary single-parameter domains recovers the folklore definition of GSP. Through the lens of value maximization, GSP metamorphoses into a powerful auction, sound in its principles and elegant in its simplicity.
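For readers unfamiliar with the mechanics, here is a minimal sketch of the textbook GSP pricing rule in the simple model without quality scores: slots go to bidders in bid order, and each winner pays the next-highest bid per click. This is the standard folklore rule, not code from the paper.

```python
def gsp(bids, num_slots):
    """Generalized second price: rank bidders by bid; the bidder in
    slot i pays the (i+1)-th highest bid per click (simple model,
    no quality scores or click-through weights)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    outcome = []
    for slot in range(min(num_slots, len(bids))):
        winner = order[slot]
        price = bids[order[slot + 1]] if slot + 1 < len(bids) else 0.0
        outcome.append((slot, winner, price))
    return outcome

# Three slots, four bidders: each winner pays the bid just below theirs.
print(gsp([4.0, 7.0, 1.0, 3.0], num_slots=3))
```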
Game theory is often used as a tool to analyze decentralized systems and their properties, in particular blockchains. In this note, we take the opposite view. We argue that blockchains can and should be used to implement economic mechanisms, because they can help to overcome problems that occur when trust in the mechanism designer cannot be assumed. Mechanism design deals with the allocation of resources to agents, often by extracting private information from them. Some mechanisms are immune to early information disclosure, while others may depend heavily on it. Some mechanisms have to randomize to achieve fairness and efficiency. Both issues, information disclosure and randomness, require trust in the mechanism designer. If there is no trust, mechanisms can be manipulated. We claim that mechanisms that use randomness or sequential information disclosure are much harder, if not impossible, to audit. Therefore, centralized implementation is often not a good solution. We consider some of the most frequently used mechanisms in practice and identify circumstances under which manipulation is possible. We propose a decentralized implementation of such mechanisms that can be, in practical terms, realized by blockchain technology. Moreover, we argue in which environments a decentralized implementation of a mechanism brings a significant advantage.
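As one concrete instance of the trust problem described above: producing shared randomness without a trusted party is commonly handled on-chain with a commit-reveal scheme. The sketch below shows that standard technique in isolation; it is our illustration, and the note's proposed implementations may differ.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit phase: publish H(nonce || value); keep the nonce secret."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(digest: bytes, nonce: bytes, value: bytes) -> bool:
    """Reveal phase: anyone can check the opening against the commitment."""
    return hashlib.sha256(nonce + value).digest() == digest

# Two parties jointly produce a random bit neither can bias alone:
# both commit first, then reveal; the outcome is the XOR of the reveals.
c1, n1 = commit(b"\x01")
c2, n2 = commit(b"\x00")
assert verify(c1, n1, b"\x01") and verify(c2, n2, b"\x00")
print("shared random bit:", b"\x01"[0] ^ b"\x00"[0])
```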
We study Bayesian automated mechanism design in unstructured dynamic environments, where a principal repeatedly interacts with an agent and takes actions based on the strategic agent's report of the current state of the world. Both the principal and the agent can have arbitrary and potentially different valuations for the actions taken, possibly also depending on the actual state of the world. Moreover, at any time, the state of the world may evolve arbitrarily depending on the action taken by the principal. The goal is to compute an optimal mechanism which maximizes the principal's utility in the face of the self-interested strategic agent. We give an efficient algorithm for computing optimal mechanisms, with or without payments, under different individual-rationality constraints, when the time horizon is constant. Our algorithm is based on a sophisticated linear program formulation, which can be customized in various ways to accommodate richer constraints. For environments with large time horizons, we show that the principal's optimal utility is hard to approximate within a certain constant factor, complementing our algorithmic result. We further consider a special case of the problem where the agent is myopic, and give a refined efficient algorithm whose time complexity scales linearly in the time horizon. Moreover, we show that memoryless mechanisms do not provide a good solution for our problem, in terms of both optimality and computational tractability. These results paint a relatively complete picture for automated dynamic mechanism design in unstructured environments. Finally, we present experimental results where our algorithms are applied to synthetic dynamic environments with different characteristics, which not only serve as a proof of concept for our algorithms, but also exhibit intriguing phenomena in dynamic mechanism design.
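To convey the flavor of an LP-based automated mechanism design computation, here is a drastically simplified horizon-1 analogue of our own devising (not the paper's formulation): two private states, two actions, no payments, and truthful-reporting incentive constraints. The variables $x[s, a]$ are the probabilities of taking action $a$ given report $s$.

```python
import numpy as np
from scipy.optimize import linprog

# Toy one-shot analogue (our simplification): U_p / U_a give
# principal / agent utility for (true state, action).
U_p = np.array([[1.0, 0.0], [0.0, 1.0]])
U_a = np.array([[0.0, 1.0], [1.0, 0.0]])
prior = np.array([0.5, 0.5])
S, A = U_p.shape
idx = lambda s, a: s * A + a

# Maximize expected principal utility (linprog minimizes, so negate).
c = np.zeros(S * A)
for s in range(S):
    for a in range(A):
        c[idx(s, a)] = -prior[s] * U_p[s, a]

# Incentive compatibility: truthful reporting beats every misreport.
A_ub, b_ub = [], []
for s in range(S):
    for s2 in range(S):
        if s2 == s:
            continue
        row = np.zeros(S * A)
        for a in range(A):
            row[idx(s, a)] -= U_a[s, a]   # utility when reporting truthfully
            row[idx(s2, a)] += U_a[s, a]  # utility when misreporting s as s2
        A_ub.append(row)
        b_ub.append(0.0)

# Each report maps to a probability distribution over actions.
A_eq = np.zeros((S, S * A))
for s in range(S):
    A_eq[s, s * A:(s + 1) * A] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.ones(S), bounds=(0, 1))
print("optimal principal utility:", -res.fun)
```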
The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [9] states that the Competition Complexity of VCG, in the case of $n$ i.i.d. buyers and a single item, is 1, i.e., it is better to recruit one extra buyer and run a second price auction than to learn exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution. In this paper we study the Competition Complexity of dynamic auctions. Consider the following setting: a monopolist is auctioning off $m$ items in $m$ consecutive stages to $n$ interested buyers. A buyer realizes her value for item $k$ in the beginning of stage $k$. We prove that the Competition Complexity of dynamic auctions is at most $3n$, and at least linear in $n$, even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. We also prove results on the number of additional buyers necessary for VCG at every stage to be an $\alpha$-approximation of the optimal revenue; we term this number the $\alpha$-approximate Competition Complexity. As a corollary we provide the first results on prior-independent dynamic auctions. This is, to the best of our knowledge, the first non-trivial positive guarantee for simple ex-post IR dynamic auctions for correlated stages. A key step towards proving bounds on the Competition Complexity is obtaining a good benchmark/upper bound on the optimal revenue. To this end, we extend the recent duality framework of Cai et al. [12] to dynamic settings. As an aside to our approach we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
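The static Bulow-Klemperer result quoted above is easy to verify numerically. The sketch below, ours and for intuition only, compares the expected revenue of a second-price auction with $n + 1$ uniform$[0,1]$ buyers against Myerson's optimal auction for $n$ such buyers, which for this distribution is a second-price auction with reserve $1/2$.

```python
import numpy as np
rng = np.random.default_rng(1)

def second_price_revenue(vals, reserve=0.0):
    """Expected revenue of a second-price auction with a reserve: the
    highest bidder wins if above the reserve, paying
    max(second-highest value, reserve)."""
    top = np.sort(vals, axis=1)[:, -2:]          # [second-highest, highest]
    sold = top[:, 1] >= reserve
    return np.where(sold, np.maximum(top[:, 0], reserve), 0.0).mean()

n, trials = 3, 200_000
opt = second_price_revenue(rng.uniform(size=(trials, n)), reserve=0.5)
bk = second_price_revenue(rng.uniform(size=(trials, n + 1)))
print(f"Myerson-optimal with {n} buyers:   {opt:.3f}")
print(f"Second-price with {n + 1} buyers:    {bk:.3f}  (Bulow-Klemperer: >= optimal)")
```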