We consider the problem of fair allocation of indivisible items among $n$ agents with additive valuations, when agents have equal entitlements to the goods, and there are no transfers. Best-of-Both-Worlds (BoBW) fairness mechanisms aim to give all agents both an ex-ante guarantee (such as getting the proportional share in expectation) and an ex-post guarantee. Prior BoBW results have focused on ex-post guarantees that are based on the up to one item paradigm, such as envy-free up to one item (EF1). In this work we attempt to give every agent a high value ex-post, and specifically, a constant fraction of his maximin share (MMS). The up to one item paradigm fails to give such a guarantee, and it is not difficult to present examples in which previous BoBW mechanisms give agents only a $\frac{1}{n}$ fraction of their MMS. Our main result is a deterministic polynomial-time algorithm that computes a distribution over allocations that is ex-ante proportional, and in which, ex-post, every allocation gives every agent at least his proportional share up to one item and, more importantly, at least half of his MMS. Moreover, this last ex-post guarantee holds even with respect to a more demanding notion of a share, introduced in this paper, that we refer to as the truncated proportional share (TPS). Our guarantees are nearly best possible, in the sense that one cannot guarantee agents more than their proportional share ex-ante, and one cannot guarantee agents more than a $\frac{n}{2n-1}$ fraction of their TPS ex-post.
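For reference, the two baseline shares discussed above have standard definitions (stated here in our own notation, not quoted from the paper): for an agent $i$ with additive valuation $v_i$ over the item set $M$, the proportional share and the maximin share are
$$\mathrm{PS}_i = \frac{v_i(M)}{n}, \qquad \mathrm{MMS}_i = \max_{(A_1,\dots,A_n) \text{ partitions } M}\ \min_{1 \le k \le n} v_i(A_k),$$
while the truncated proportional share (TPS) is defined in the paper itself.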
We consider the problem of fair allocation of indivisible goods to $n$ agents, with no transfers. When agents have equal entitlements, the well-established notion of the maximin share (MMS) serves as an attractive fairness criterion, where to qualify as fair, an allocation needs to give every agent at least a substantial fraction of her MMS. In this paper we consider the case of arbitrary (unequal) entitlements. We explain shortcomings in previous attempts that extend the MMS to unequal entitlements. Our conceptual contribution is the introduction of a new notion of a share, the AnyPrice share (APS), that is appropriate for settings with arbitrary entitlements. Even for the equal entitlements case, this notion is new, and satisfies $APS \ge MMS$, where the inequality is sometimes strict. We present two equivalent definitions for the APS (one as a minimization problem, the other as a maximization problem), and provide comparisons between the APS and previous notions of fairness. Our main result concerns additive valuations and arbitrary entitlements, for which we provide a polynomial-time algorithm that gives every agent at least a $\frac{3}{5}$-fraction of her APS. This algorithm can also be viewed as providing strategies in a certain natural bidding game, and these strategies secure each agent at least a $\frac{3}{5}$-fraction of her APS.
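As a rough paraphrase of the price-based (minimization) form of the APS (written from memory in our own notation, not quoted from the paper), for an agent $i$ with entitlement $b_i$, where entitlements are normalized to sum to $1$:
$$\mathrm{APS}_i \;=\; \min_{p \ge 0,\ \sum_{j \in M} p_j = 1}\ \max\Bigl\{\, v_i(S) \;:\; S \subseteq M,\ \sum_{j \in S} p_j \le b_i \,\Bigr\};$$
that is, the APS is the value agent $i$ can guarantee herself when the items are adversarially priced and she may take any bundle she can afford with her entitlement.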
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she were allowed to partition the goods in any way she prefers into $n$ bundles, and then receive her least desirable bundle. The objective in our problem is then to find a partition so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, in finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
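The maximin share as defined above lends itself to a brute-force computation on small instances. The sketch below (exponential time, for illustration only; it is not the paper's approximation algorithm, and the function name and example numbers are ours) enumerates all assignments of goods to $n$ bundles:

```python
from itertools import product

def maximin_share(values, n):
    """Brute-force MMS of one agent with additive values `values` over
    indivisible goods, when the goods are split into n bundles.
    Enumerates all n^m assignments of the m goods to bundles."""
    m = len(values)
    best = 0
    for assignment in product(range(n), repeat=m):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        # the agent receives her least desirable bundle of this partition
        best = max(best, min(bundles))
    return best

# Example: an agent with values 7, 5, 4, 3, 1 and n = 3 has MMS 6,
# witnessed by the partition {7}, {5, 1}, {4, 3}.
print(maximin_share([7, 5, 4, 3, 1], n=3))  # -> 6
```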
The seminal impossibility result of Myerson and Satterthwaite (1983) states that for bilateral trade, there is no mechanism that is individually rational (IR), incentive compatible (IC), weakly budget balanced, and efficient. This has led follow-up work on two-sided trade settings to weaken the efficiency requirement and consider approximately efficient simple mechanisms, while still demanding the other properties. The current state-of-the-art of such mechanisms for two-sided markets can be categorized as giving one (but not both) of the following two types of approximation guarantees on the gains from trade: a constant ex-ante guarantee, measured with respect to the second-best efficiency benchmark, or an asymptotically optimal ex-post guarantee, measured with respect to the first-best efficiency benchmark. Here the second-best efficiency benchmark refers to the highest gains from trade attainable by any IR, IC and weakly budget balanced mechanism, while the first-best efficiency benchmark refers to the maximum gains from trade (attainable by the VCG mechanism, which is not weakly budget balanced). In this paper, we construct simple mechanisms for double-auction and matching markets that simultaneously achieve both types of guarantees: these are ex-post IR, Bayesian IC, and ex-post weakly budget balanced mechanisms that 1) ex-ante guarantee a constant fraction of the gains from trade of the second-best, and 2) ex-post guarantee a realization-dependent fraction of the gains from trade of the first-best, such that this realization-dependent fraction converges to 1 (full efficiency) as the market grows large.
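To make the first-best benchmark concrete: in a double-auction market for a single homogeneous good, the first-best gains from trade are obtained by matching the highest-value buyers with the lowest-cost sellers for as long as each trade has positive surplus. The sketch below only illustrates this benchmark (it is not one of the paper's mechanisms, and the function name and numbers are ours):

```python
def first_best_gft(buyer_values, seller_costs):
    """First-best (efficient) gains from trade in a double auction for a
    homogeneous good: trade the q highest-value buyers with the q
    lowest-cost sellers, where q is the largest quantity for which each
    additional trade still has positive surplus."""
    buyers = sorted(buyer_values, reverse=True)
    sellers = sorted(seller_costs)
    gft = 0.0
    for b, s in zip(buyers, sellers):
        if b <= s:
            break  # the next trade would not add positive surplus
        gft += b - s
    return gft

# Example: the efficient trades are (9, 2) and (7, 4), for total gains 7 + 3 = 10.
print(first_best_gft([9, 7, 3], [2, 4, 8]))  # -> 10.0
```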
We study fair allocation of indivisible public goods subject to cardinality (budget) constraints. In this model, we have $n$ agents and $m$ available public goods, and we want to select $k \leq m$ goods in a fair and efficient manner. We first establish fundamental connections between the models of private goods, public goods, and public decision making by presenting polynomial-time reductions for the popular solution concepts of maximum Nash welfare (MNW) and leximin. These mechanisms are known to provide remarkable fairness and efficiency guarantees in private goods and public decision making settings. We show that they retain these desirable properties even in the public goods case. We prove that MNW allocations provide fairness guarantees of Proportionality up to one good (Prop1), $1/n$ approximation to Round Robin Share (RRS), and the efficiency guarantee of Pareto Optimality (PO). Further, we show that the problems of finding MNW or leximin-optimal allocations are NP-hard, even in the case of constantly many agents, or binary valuations. This is in sharp contrast to the private goods setting that admits polynomial-time algorithms under binary valuations. We also design pseudo-polynomial time algorithms for computing an exact MNW or leximin-optimal allocation for the cases of (i) constantly many agents, and (ii) constantly many goods with additive valuations. We also present an $O(n)$-factor approximation algorithm for MNW which also satisfies RRS, Prop1, and $1/2$-Prop.
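As a small illustration of the MNW objective in this model (brute force, not one of the paper's algorithms; the function name and example values are ours), one can enumerate all size-$k$ subsets of goods and maximize the product of agents' additive utilities:

```python
from itertools import combinations
from math import prod

def mnw_public_goods(valuations, k):
    """Brute-force maximum Nash welfare selection of k public goods.
    valuations[i][j] is agent i's additive value for good j.
    Enumerates all size-k subsets of the m goods.
    (The usual zero-utility tie-breaking in the MNW definition is ignored.)"""
    m = len(valuations[0])
    best_subset, best_nw = None, -1.0
    for subset in combinations(range(m), k):
        utilities = [sum(v[j] for j in subset) for v in valuations]
        nw = prod(utilities)  # Nash welfare: product of agents' utilities
        if nw > best_nw:
            best_subset, best_nw = subset, nw
    return best_subset, best_nw

# Example: 2 agents, 4 candidate public goods, budget k = 2.
vals = [[3, 1, 0, 2],   # agent 1
        [0, 2, 3, 1]]   # agent 2
print(mnw_public_goods(vals, k=2))  # -> ((0, 2), 9)
```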
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first outperformed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all three fundamental architectures on the benchmark WMT14 English to French and English to German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.