
Benchmark Design and Prior-independent Optimization

Published by Yingkai Li
Publication date: 2020
Research field: Information Engineering
Paper language: English





This paper compares two leading approaches for robust optimization in the models of online algorithms and mechanism design. Competitive analysis compares the performance of an online algorithm to an offline benchmark in worst case over inputs, and prior-independent mechanism design compares the expected performance of a mechanism on an unknown distribution (of inputs, i.e., agent values) to the optimal mechanism for that distribution, in worst case over distributions. For competitive analysis, a critical concern is the choice of benchmark. This paper gives a method for selecting a good benchmark. We show that the optimal algorithm/mechanism for the optimal benchmark is equal to the prior-independent optimal algorithm/mechanism. We solve a central open question in prior-independent mechanism design: we identify the prior-independent revenue-optimal mechanism for selling a single item to two agents with i.i.d. and regularly distributed values. Via this solution and the above equivalence of prior-independent mechanism design and competitive analysis (a.k.a. prior-free mechanism design), we show that the standard method for lower bounds of prior-free mechanisms is not generally tight for the benchmark design program.
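To make the prior-independent objective concrete, here is a minimal Monte Carlo sketch (an illustration added here, not taken from the paper) for the setting the paper resolves: a single item and two i.i.d. bidders. It uses U[0,1] as an example prior and the plain second-price auction as an example distribution-agnostic mechanism; the Bayesian optimal mechanism for U[0,1] is the second-price auction with Myerson reserve 1/2. The paper's actual prior-independent optimal mechanism is more subtle than this baseline.

```python
import random

# Monte Carlo sketch: the prior-independent objective for one item and two
# i.i.d. U[0,1] bidders. Candidate mechanism: plain second-price auction (SPA).
# Benchmark: the Bayesian optimal mechanism for U[0,1], an SPA with reserve 1/2.

def spa_revenue(v1, v2, reserve=0.0):
    """Revenue of a second-price auction with a reserve price."""
    hi, lo = max(v1, v2), min(v1, v2)
    if hi < reserve:
        return 0.0           # no sale
    return max(lo, reserve)  # winner pays max(second bid, reserve)

random.seed(0)
n = 10**6
spa = opt = 0.0
for _ in range(n):
    v1, v2 = random.random(), random.random()
    spa += spa_revenue(v1, v2)               # distribution-agnostic candidate
    opt += spa_revenue(v1, v2, reserve=0.5)  # Bayesian optimal for U[0,1]

print(f"SPA revenue ~ {spa / n:.4f} (exact: 1/3)")
print(f"OPT revenue ~ {opt / n:.4f} (exact: 5/12)")
print(f"ratio on U[0,1] ~ {spa / opt:.4f} (exact: 0.8)")
```

The prior-independent approximation of a mechanism is its worst such ratio over all regular distributions, not just U[0,1]; the paper identifies the mechanism whose worst-case ratio is optimal.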


Read also

The prior-independent framework for algorithm design considers how well an algorithm that does not know the distribution of its inputs approximates the expected performance of the optimal algorithm for this distribution. This paper gives a method that is agnostic to problem setting for proving lower bounds on the prior-independent approximation factor of any algorithm. The method constructs a correlated distribution over inputs that can be generated both as a distribution over i.i.d. good-for-algorithms distributions and as a distribution over i.i.d. bad-for-algorithms distributions. Prior-independent algorithms are upper-bounded by the optimal algorithm for the latter distribution even when the true distribution is the former. Thus, the ratio of the expected performances of the Bayesian optimal algorithms for these two decompositions is a lower bound on the prior-independent approximation ratio. The techniques of the paper connect prior-independent algorithm design, Yao's Minimax Principle, and information design. We apply this framework to give new lower bounds on several canonical prior-independent mechanism design problems.
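A toy instance of this recipe, invented here for illustration (the paper's constructions are more involved): a single bidder's value is 1 or 2 with equal probability. This correlated input decomposes both as a mixture of two point masses (good: a seller who knows the distribution extracts the full value) and as one uncertain distribution (bad: the best posted price earns only 1). The ratio of expected Bayesian-optimal revenues under the two decompositions then lower-bounds the prior-independent approximation factor.

```python
# Toy illustration (not from the paper) of the lower-bound recipe: one
# distribution over a single bidder's value that decomposes both as a mixture
# of "good" i.i.d. distributions (point masses) and as a "bad" one.

def opt_revenue(dist):
    """Bayesian optimal posted-price revenue for a discrete value distribution
    {value: probability}: max over prices p of p * Pr[value >= p]."""
    return max(p * sum(q for v, q in dist.items() if v >= p) for p in dist)

def mixture_marginal(components):
    """Marginal over values induced by a mixture [(weight, dist), ...]."""
    marginal = {}
    for w, dist in components:
        for v, q in dist.items():
            marginal[v] = marginal.get(v, 0.0) + w * q
    return marginal

good = [(0.5, {1: 1.0}), (0.5, {2: 1.0})]  # point masses: OPT extracts full value
bad  = [(1.0, {1: 0.5, 2: 0.5})]           # one uncertain distribution

assert mixture_marginal(good) == mixture_marginal(bad)  # same observable input

opt_good = sum(w * opt_revenue(d) for w, d in good)  # = 1.5
opt_bad  = sum(w * opt_revenue(d) for w, d in bad)   # = 1.0

# No prior-independent pricing can beat this ratio on this family of inputs.
print(f"lower bound on prior-independent ratio: {opt_good / opt_bad:.3f}")  # 1.5
```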
We study the problem of repeatedly auctioning off an item to one of $k$ bidders where: a) bidders have a per-round individual rationality constraint, b) bidders may leave the mechanism at any point, and c) the bidders' valuations are adversarially chosen (the prior-free setting). Without these constraints, the auctioneer can run a second-price auction to "sell the business" and receive the second-highest total value for the entire stream of items. We show that under these constraints, the auctioneer can attain a constant fraction of the sell-the-business benchmark, but no more than $2/e$ of this benchmark. In the course of doing so, we design mechanisms for a single-bidder problem of independent interest: how should you repeatedly sell an item to a (per-round IR) buyer with adversarial valuations if you know their total value over all rounds is $V$ but not how their value changes over time? We demonstrate a mechanism that achieves revenue $V/e$ and show that this is tight.
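The sell-the-business benchmark is simple to compute from the value stream: it is the revenue of one up-front second-price auction for the whole stream. A minimal sketch with made-up numbers (this computes only the benchmark, not the paper's constrained mechanism):

```python
import math

# "Sell the business" benchmark: the second-highest total value across all
# rounds, i.e., the revenue of a single up-front second-price auction for the
# entire stream of items. Example values are made up.

def sell_the_business_benchmark(valuations):
    """valuations[i][t] = bidder i's value for the round-t item."""
    totals = sorted((sum(rounds) for rounds in valuations), reverse=True)
    return totals[1]  # second-highest total value

vals = [
    [5, 0, 2, 1],  # bidder 0: total 8
    [1, 6, 0, 3],  # bidder 1: total 10
    [2, 2, 2, 2],  # bidder 2: total 8
]
benchmark = sell_the_business_benchmark(vals)
print(benchmark)                                         # 8
print(f"2/e upper bound: {2 * benchmark / math.e:.3f}")  # cap under per-round IR
```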
The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue than the (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders. We prove that the competition complexity of $n$ bidders with additive valuations over $m$ independent items is at most $n(\ln(1+m/n)+2)$, and also at most $9\sqrt{nm}$. When $n \leq m$, the first bound is optimal up to constant factors, even when the items are i.i.d. and regular. When $n \geq m$, the second bound is optimal for the benchmark introduced in [EFFTW17a] up to constant factors, even when the items are i.i.d. and regular. We further show that, while the Eden et al. benchmark is not necessarily tight in the $n \geq m$ regime, the competition complexity of $n$ bidders with additive valuations over even $2$ i.i.d. regular items is indeed $\omega(1)$. Our main technical contribution is a reduction from analyzing the Eden et al. benchmark to proving stochastic dominance of certain random variables.
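As a quick numeric illustration (added here; the constants come straight from the stated bounds), the two upper bounds trade off across regimes: the logarithmic bound wins when $n \leq m$ and the square-root bound when $n \geq m$.

```python
import math

# Evaluate the two competition-complexity upper bounds from the abstract for
# n additive bidders and m independent items, and take the better of the two.

def competition_complexity_upper_bound(n, m):
    bound_log  = n * (math.log(1 + m / n) + 2)  # tight (up to constants) for n <= m
    bound_sqrt = 9 * math.sqrt(n * m)           # tight for [EFFTW17a] when n >= m
    return min(bound_log, bound_sqrt)

for n, m in [(2, 100), (10, 10), (100, 2)]:
    print(n, m, round(competition_complexity_upper_bound(n, m), 1))
# (2, 100):  log bound ~ 11.9 beats sqrt bound ~ 127.3
# (100, 2):  sqrt bound ~ 127.3 beats log bound ~ 202.0
```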
When developing and analyzing new hyperparameter optimization (HPO) methods, it is vital to empirically evaluate and compare them on well-curated benchmark suites. In this work, we list desirable properties and requirements for such benchmarks and propose a new set of challenging and relevant multifidelity HPO benchmark problems motivated by these requirements. For this, we revisit the concept of surrogate-based benchmarks and empirically compare them to more widely-used tabular benchmarks, showing that the latter may induce bias in the performance estimation and ranking of HPO methods. We present a new surrogate-based benchmark suite for multifidelity HPO methods consisting of 9 benchmark collections that constitute over 700 multifidelity HPO problems in total. All our benchmarks also allow for querying of multiple optimization targets, enabling the benchmarking of multi-objective HPO. We examine and compare our benchmark suite with respect to the defined requirements and show that our benchmarks provide viable additions to existing suites.
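A toy sketch of the tabular-vs-surrogate distinction (all names and data here are invented, not the paper's suite; it assumes numpy and scikit-learn are available): a tabular benchmark can only answer queries at pre-evaluated grid points, which can bias the optimizers being compared toward that grid, while a surrogate benchmark fits a model to the evaluations and answers arbitrary configurations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Made-up offline evaluations of 200 (lr, dropout) configs with a synthetic loss.
rng = np.random.default_rng(0)
grid = rng.uniform(0, 1, size=(200, 2))
loss = (grid[:, 0] - 0.3) ** 2 + 0.5 * grid[:, 1] + rng.normal(0, 0.01, 200)

table = {tuple(np.round(x, 3)): y for x, y in zip(grid, loss)}

def tabular_query(config):
    """Tabular benchmark: only exact (rounded) grid points are answerable."""
    return table.get(tuple(np.round(config, 3)))  # None off the grid

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(grid, loss)

def surrogate_query(config):
    """Surrogate benchmark: any configuration in the box can be queried."""
    return float(surrogate.predict(np.asarray(config).reshape(1, -1))[0])

print(tabular_query([0.25, 0.10]))    # almost surely None: not in the table
print(surrogate_query([0.25, 0.10]))  # model estimate of the loss
```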
Game theory is often used as a tool to analyze decentralized systems and their properties, in particular blockchains. In this note, we take the opposite view. We argue that blockchains can and should be used to implement economic mechanisms because they can help to overcome problems that occur if trust in the mechanism designer cannot be assumed. Mechanism design deals with the allocation of resources to agents, often by extracting private information from them. Some mechanisms are immune to early information disclosure, while others may heavily depend on it. Some mechanisms have to randomize to achieve fairness and efficiency. Both issues, information disclosure and randomness, require trust in the mechanism designer. If there is no trust, mechanisms can be manipulated. We claim that mechanisms that use randomness or sequential information disclosure are much harder, if not impossible, to audit. Therefore, centralized implementation is often not a good solution. We consider some of the most frequently used mechanisms in practice and identify circumstances under which manipulation is possible. We propose a decentralized implementation of such mechanisms that can, in practical terms, be realized by blockchain technology. Moreover, we argue in which environments a decentralized implementation of a mechanism brings a significant advantage.
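One standard building block for the trust problem this note raises is a commit-reveal scheme: participants jointly produce auditable randomness (e.g., for a randomized allocation or tie-breaking rule) without trusting a central designer. The sketch below is a generic illustration of that idea, not the note's specific protocol, and it omits the deposit/penalty logic a real on-chain version needs to deter selective non-reveal.

```python
import hashlib, secrets

# Generic commit-reveal sketch for jointly generated, auditable randomness.
# Caveat: the last agent to reveal could bias the outcome by aborting; real
# deployments add deposits forfeited on non-reveal.

def commit(value: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + value).digest()

# Phase 1: each agent publishes (e.g., on-chain) a hash binding it to a secret.
agents = {name: (secrets.token_bytes(16), secrets.token_bytes(16))
          for name in ("A", "B", "C")}
commitments = {name: commit(v, r) for name, (v, r) in agents.items()}

# Phase 2: after all commitments are posted, agents reveal; anyone can verify.
for name, (v, r) in agents.items():
    assert commit(v, r) == commitments[name], f"{name} cheated"

# The shared coin hashes all revealed secrets: unpredictable as long as at
# least one agent's secret was random, and publicly auditable after the fact.
seed = hashlib.sha256(b"".join(v for v, _ in agents.values())).hexdigest()
print("public random seed:", seed)
```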
