
Lower Bounds for Prior Independent Algorithms

Posted by Aleck Johnsen
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The prior independent framework for algorithm design considers how well an algorithm that does not know the distribution of its inputs approximates the expected performance of the optimal algorithm for this distribution. This paper gives a method, agnostic to problem setting, for proving lower bounds on the prior independent approximation factor of any algorithm. The method constructs a correlated distribution over inputs that can be generated both as a distribution over i.i.d. good-for-algorithms distributions and as a distribution over i.i.d. bad-for-algorithms distributions. Prior independent algorithms are upper-bounded by the optimal algorithm for the latter distribution even when the true distribution is the former. Thus, the ratio of the expected performances of the Bayesian optimal algorithms for these two decompositions is a lower bound on the prior independent approximation ratio. The techniques of the paper connect prior independent algorithm design, Yao's Minimax Principle, and information design. We apply this framework to give new lower bounds on several canonical prior independent mechanism design problems.
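To make the recipe concrete, here is a minimal numerical sketch of the lower-bound argument. The toy instantiation is ours, not the paper's: two i.i.d. buyers with values in {1, 2}, anonymous posted prices as the algorithm class, expected revenue as performance, and two moment-matched priors over Bernoulli components that induce the same correlated joint distribution over the pair of values.

```python
# Toy instance of the lower-bound recipe (our illustration, not the
# paper's construction).  A component distribution is summarized by
# p = Pr[value = 2].  Both priors below have the same E[p] = 0.5 and
# E[p^2] = 0.3125, so they induce the *same* correlated joint over two
# i.i.d. draws, yet they decompose it differently.
prior_good = [(0.125, 0.0), (0.125, 1.0), (0.75, 0.5)]
prior_bad = [(0.5, 0.25), (0.5, 0.75)]

def expected_revenue(price, p):
    """Expected revenue of posting `price` to two i.i.d. buyers with
    Pr[value = 2] = p: price 1 always sells; price 2 sells iff at
    least one buyer has value 2."""
    return 1.0 if price == 1 else 2.0 * (1.0 - (1.0 - p) ** 2)

def bayes_opt(prior):
    """Expected performance of the Bayesian optimal algorithm for a
    mixture of i.i.d. distributions: it may condition on the realized
    component, so the max over prices sits inside the sum."""
    return sum(w * max(expected_revenue(r, p) for r in (1, 2))
               for w, p in prior)

# Ratio of the two Bayesian optimal performances: by the argument
# sketched above, a lower bound on the prior independent approximation.
print(bayes_opt(prior_good) / bayes_opt(prior_bad))  # 1.5 / 1.4375 ≈ 1.043
```

The two priors agree on the joint because it depends only on the first two moments of p, yet their Bayesian optimal revenues differ (1.5 versus 1.4375), so on this toy instance no prior independent mechanism can achieve an approximation factor better than about 1.043.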


Read also

While recent work suggests that quantum computers can speed up the solution of semidefinite programs, little is known about the quantum complexity of more general convex optimization. We present a quantum algorithm that can optimize a convex function over an $n$-dimensional convex body using $\tilde{O}(n)$ queries to oracles that evaluate the objective function and determine membership in the convex body. This represents a quadratic improvement over the best-known classical algorithm. We also study limitations on the power of quantum computers for general convex optimization, showing that it requires $\tilde{\Omega}(\sqrt{n})$ evaluation queries and $\Omega(\sqrt{n})$ membership queries.
We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, these costs are given by an underlying metric on the set of all agents plus alternatives. Social choice rules have access to only the ordinal preferences of agents but not the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand, and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is, therefore, a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 [Anshelevich et al. 2015]. We disprove this conjecture by constructing a sequence of instances which shows that the worst-case distortion of Ranked Pairs is at least 5. Our lower bound on the worst-case distortion of Ranked Pairs matches a previously known upper bound for the Copeland rule, proving that in the worst case, the simpler Copeland rule is at least as good as Ranked Pairs. And as long as we are limited to (weighted or unweighted) tournament rules, we demonstrate that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant-factor fairness ratios (5 and 3 respectively), which is a considerable generalization of similar results for the sum-of-costs and single-largest-cost objectives. In addition to all of the above, we outline several interesting directions for further research in this space.
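As a hedged illustration of the distortion measure (our own toy code; the instance, locations, and rule choice are assumptions, not taken from the paper), the sketch below computes the distortion of the Copeland rule on a small line-metric instance, line metrics being a special case of the general metric model.

```python
import itertools

# Hypothetical line-metric instance: agent and alternative locations
# are arbitrary choices for illustration.
agents = [0.0, 0.0, 1.0]
alternatives = {"A": -0.5, "B": 1.0}

def cost(agent, alt):
    # Latent cardinal cost = distance in the underlying metric.
    return abs(agent - alternatives[alt])

def prefers(agent, x, y):
    # Ordinal preference induced by the (hidden) metric costs.
    return cost(agent, x) < cost(agent, y)

def copeland_winner():
    # Copeland: score a point for each pairwise majority victory.
    names = list(alternatives)
    wins = {x: 0 for x in names}
    for x, y in itertools.combinations(names, 2):
        x_votes = sum(prefers(a, x, y) for a in agents)
        y_votes = sum(prefers(a, y, x) for a in agents)
        if x_votes > y_votes:
            wins[x] += 1
        elif y_votes > x_votes:
            wins[y] += 1
    return max(names, key=lambda x: wins[x])

def social_cost(alt):
    return sum(cost(a, alt) for a in agents)

winner = copeland_winner()                    # "A" wins the 2-1 majority
optimal = min(alternatives, key=social_cost)  # "B" minimizes total cost
print(social_cost(winner) / social_cost(optimal))  # distortion 1.25
```

Note that the rule sees only ordinal information (via `prefers`), while distortion is evaluated against the hidden cardinal costs; that information gap is exactly what the framework quantifies.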
We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$. Previously, no polynomial-time algorithm was known, even for $k=3$. This answers an open question posed by [Kliv17]. Importantly, our algorithm does not require any assumptions about the rank of the weight matrix, and its complexity is independent of its condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with arbitrary real coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.
The problem of scheduling unrelated machines by a truthful mechanism to minimize the makespan was introduced in the seminal Algorithmic Mechanism Design paper by Nisan and Ronen. Nisan and Ronen showed that there is a truthful mechanism that provides an approximation ratio of $\min(m,n)$, where $n$ is the number of machines and $m$ is the number of jobs. They also proved that no truthful mechanism can provide an approximation ratio better than $2$. Since then, the lower bound was improved to $1+\sqrt{2} \approx 2.41$ by Christodoulou, Koutsoupias, and Vidali, and then to $1+\phi \approx 2.618$ by Koutsoupias and Vidali. Very recently, the lower bound was improved to $2.755$ by Giannakopoulos, Hammerl, and Poças. In this paper we further improve the bound to $3-\delta$, for every constant $\delta>0$. Note that a gap between the upper bound and the lower bounds exists even when the number of machines and jobs is very small. In particular, the known $1+\sqrt{2}$ lower bound requires at least $3$ machines and $5$ jobs. In contrast, we show a lower bound of $2.2055$ that uses only $3$ machines and $3$ jobs, and a lower bound of $1+\sqrt{2}$ that uses only $3$ machines and $4$ jobs. For the case of two machines and two jobs we show a lower bound of $2$. Similar bounds for two machines and two jobs were known before, but only via complex proofs that characterized all truthful mechanisms that provide a finite approximation ratio in this setting, whereas our new proof uses a simple and direct approach.
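For context, the truthful mechanism behind the $\min(m,n)$ upper bound decomposes into independent second-price auctions, one per job. The sketch below is our hedged rendering of that idea (the function name and payment bookkeeping are our own), assuming at least two machines.

```python
def job_by_job_second_price(times):
    """times[i][j] = reported processing time of job j on machine i.
    Each job is auctioned independently: the machine reporting the
    lowest time wins it and is paid the second-lowest report, so the
    mechanism is truthful (each job is a second-price auction).
    Assumes at least two machines."""
    n, m = len(times), len(times[0])
    loads = [0.0] * n
    payments = [0.0] * n
    assignment = []
    for j in range(m):
        order = sorted(range(n), key=lambda i: times[i][j])
        winner, runner_up = order[0], order[1]
        assignment.append(winner)
        loads[winner] += times[winner][j]
        payments[winner] += times[runner_up][j]  # second-price payment
    return assignment, max(loads), payments

# Example: 2 machines, 2 jobs; each job goes to its fastest machine.
times = [[1, 4], [2, 1]]
print(job_by_job_second_price(times))  # ([0, 1], 1.0, [2.0, 4.0])
```

Since every job lands on its fastest machine, its processing time is at most the optimal makespan, which is how the $\min(m,n)$ guarantee mentioned above is obtained.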
This paper compares two leading approaches for robust optimization in the models of online algorithms and mechanism design. Competitive analysis compares the performance of an online algorithm to an offline benchmark in worst case over inputs, and prior-independent mechanism design compares the expected performance of a mechanism on an unknown distribution (of inputs, i.e., agent values) to the optimal mechanism for the distribution, in worst case over distributions. For competitive analysis, a critical concern is the choice of benchmark. This paper gives a method for selecting a good benchmark. We show that the optimal algorithm/mechanism for the optimal benchmark is equal to the prior-independent optimal algorithm/mechanism. We solve a central open question in prior-independent mechanism design, namely we identify the prior-independent revenue-optimal mechanism for selling a single item to two agents with i.i.d. and regularly distributed values. Via this solution and the above equivalence of prior-independent mechanism design and competitive analysis (a.k.a. prior-free mechanism design), we show that the standard method for lower bounds of prior-free mechanisms is not generally tight for the benchmark design program.
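In our notation (an assumption; the paper may define and normalize these quantities differently), the two robustness measures being compared are:

```latex
% Competitive ratio of an online algorithm ALG against a benchmark
% BENCH, worst case over inputs x:
\beta(\mathrm{ALG}) \;=\; \sup_{x} \frac{\mathrm{BENCH}(x)}{\mathrm{ALG}(x)}

% Prior independent approximation of a mechanism M, worst case over
% distributions F, against the Bayesian optimal mechanism OPT_F:
\gamma(M) \;=\; \sup_{F} \frac{\mathbf{E}_{v \sim F}[\mathrm{OPT}_F(v)]}{\mathbf{E}_{v \sim F}[M(v)]}
```

The equivalence result then says that, for the right choice of BENCH, the optimizer of $\beta$ coincides with the optimizer of $\gamma$.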