We consider an environment where sellers compete over buyers. All sellers are a priori identical and strategically signal buyers about the product they sell. In a setting motivated by online advertising in display ad exchanges, where firms use second-price auctions, a firm's strategy is a decision about its signaling scheme for a stream of goods (e.g., user impressions), and a buyer's strategy is a selection among the firms. In this setting, a single seller will typically provide partial information, and consequently a product may be allocated inefficiently. Intuitively, competition among sellers may induce sellers to provide more information in order to attract buyers and thus increase efficiency. Surprisingly, we show that such competition among firms may yield a significant loss in consumers' social welfare with respect to the monopolistic setting. Although we also show that in some cases the competitive setting yields a gain in social welfare, we provide a tight bound on that gain, which is shown to be small with respect to the above possible loss. Our model is tightly connected with the literature on bundling in auctions.
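As a toy illustration (our own, not taken from the paper) of why a single seller may prefer coarse signals even though they hurt welfare, consider a second-price auction for an item that is one of two types with equal probability; the sketch below compares full disclosure with no disclosure (bundling the two types into one signal).

```python
# Toy illustration (ours, not from the paper): a single seller runs a second-price
# auction for an item whose type is A or B with probability 1/2 each.
# Buyer 1 values type A at 10 and type B at 0; Buyer 2 is the mirror image.

values = {1: {"A": 10.0, "B": 0.0},   # values[buyer][item_type]
          2: {"A": 0.0, "B": 10.0}}
item_types = ["A", "B"]               # each realized with probability 1/2

def second_price(bids):
    """bids: list of (buyer, bid). Returns (winner, price); ties broken by buyer id."""
    ranked = sorted(bids, key=lambda b: (-b[1], b[0]))
    return ranked[0][0], ranked[1][1]

def expected_outcome(signal_full):
    """Expected welfare and revenue under full disclosure or no disclosure."""
    welfare = revenue = 0.0
    for t in item_types:
        if signal_full:   # buyers learn the type and bid their value for it
            bids = [(i, values[i][t]) for i in values]
        else:             # buyers only know the prior and bid their expected value
            bids = [(i, 0.5 * sum(values[i].values())) for i in values]
        winner, price = second_price(bids)
        welfare += 0.5 * values[winner][t]
        revenue += 0.5 * price
    return welfare, revenue

print(expected_outcome(signal_full=True))   # (10.0, 0.0): efficient, zero revenue
print(expected_outcome(signal_full=False))  # (5.0, 5.0): inefficient, higher revenue
```

In this toy instance the seller's revenue is higher under no disclosure (5 vs. 0) while consumer welfare drops (5 vs. 10), which is the inefficiency that competition might, in principle, correct.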
Incentive compatibility (IC) is one of the most fundamental properties of an auction mechanism, including those used for online advertising. Recent methods by Feng et al. and Lahaie et al. show that counterfactual runs of the auction mechanism with different bids can be used to determine whether an auction is IC. In this paper we show that a similar result can be obtained by looking at the advertisers' envy, which can be computed with a single execution of the auction. We introduce two metrics to evaluate the incentive compatibility of an auction: IC-Regret and IC-Envy. For position auction environments, we show that for a large class of pricing schemes (which includes e.g. VCG and GSP), IC-Envy $\ge$ IC-Regret (and IC-Envy = IC-Regret when bids are distinct). We then consider non-separable discounts in the Ad Types environment of Colini-Baldeschi et al., where we show that IC-Envy $\ge$ IC-Regret also holds for a generalization of GSP. Our final theoretical result is that in all these settings IC-Envy can be used to bound the loss in social welfare due to advertiser misreports. Finally, we show that IC-Envy is useful as a feature to predict IC-Regret in auction environments beyond the ones for which we show theoretical results. In particular, using IC-Envy yields better results than training models using only price and value features.
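As a sketch of the two metrics (our own illustration, with hypothetical names such as run_gsp and ic_envy; not the paper's code), consider a position auction with GSP pricing: IC-Envy can be read off the single realized outcome, whereas IC-Regret requires counterfactual re-runs with alternative bids.

```python
# Hedged sketch (ours): IC-Envy vs IC-Regret for a position auction with GSP pricing.
# Slot j has click-through rate ctrs[j]; the advertiser in slot j pays, per click,
# the (j+1)-th highest bid.

import numpy as np

def run_gsp(bids, ctrs):
    """One execution: returns slot assignment and total expected payment per bidder."""
    order = np.argsort(-np.asarray(bids))           # bidders sorted by bid, high to low
    slots, pays = {}, {}
    padded = list(np.asarray(bids)[order]) + [0.0]  # next-highest bid after the last slot is 0
    for j, i in enumerate(order[:len(ctrs)]):
        slots[i] = j
        pays[i] = ctrs[j] * padded[j + 1]
    return slots, pays

def ic_envy(i, value, slots, pays, ctrs):
    """Envy needs only the single realized outcome (allocations and payments)."""
    own = ctrs[slots[i]] * value - pays[i] if i in slots else 0.0
    others = [ctrs[slots[k]] * value - pays[k] for k in slots] + [0.0]
    return max(max(others) - own, 0.0)

def ic_regret(i, value, bids, ctrs, grid):
    """Regret needs counterfactual re-runs of the auction with alternative bids."""
    slots, pays = run_gsp(bids, ctrs)
    own = ctrs[slots[i]] * value - pays[i] if i in slots else 0.0
    best = own
    for b in grid:                                   # brute-force over deviating bids
        s2, p2 = run_gsp(bids[:i] + [b] + bids[i + 1:], ctrs)
        u = ctrs[s2[i]] * value - p2[i] if i in s2 else 0.0
        best = max(best, u)
    return best - own

values = [10.0, 6.0, 3.0]
bids = [10.0, 6.0, 3.0]                              # truthful, distinct bids, for illustration
ctrs = [0.5, 0.3]
slots, pays = run_gsp(bids, ctrs)
for i, v in enumerate(values):
    print(i, ic_envy(i, v, slots, pays, ctrs),
          ic_regret(i, v, bids, ctrs, np.linspace(0, 12, 121)))
```

On this instance the two metrics coincide for every bidder, consistent with IC-Envy = IC-Regret when bids are distinct.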
We consider the problem of approximating maximum Nash social welfare (NSW) while allocating a set of indivisible items to $n$ agents. The NSW, defined as the weighted geometric mean of the agents' valuations, is a popular objective that provides a balanced tradeoff between the often conflicting requirements of fairness and efficiency. For the symmetric additive case of the problem, where agents have the same weight with additive valuations, the first constant-factor approximation algorithm was obtained in 2015. This led to a flurry of work obtaining constant-factor approximation algorithms for the symmetric case under mild generalizations of additive valuations, and $O(n)$-approximation algorithms for more general valuations and for the asymmetric case. In this paper, we make significant progress towards both symmetric and asymmetric NSW problems. We present the first constant-factor approximation algorithm for the symmetric case under Rado valuations. Rado valuations form a general class of valuation functions that arise from maximum cost independent matching problems, including as special cases assignment (OXS) valuations and weighted matroid rank functions. Furthermore, our approach also gives the first constant-factor approximation algorithm for the asymmetric case under Rado valuations, provided that the maximum ratio between the weights is bounded by a constant.
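For concreteness (notation ours), with agent weights $w_i$ and valuations $v_i$, the objective for an allocation $x = (x_1, \ldots, x_n)$ is the weighted geometric mean

\[
\mathrm{NSW}(x) \;=\; \Bigl( \prod_{i=1}^{n} v_i(x_i)^{\,w_i} \Bigr)^{1/\sum_{i=1}^{n} w_i},
\]

with the symmetric case corresponding to $w_1 = \dots = w_n$.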
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of $T$ periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial $O(N)$, unless it is given additional information about the agents' values. Then, in line with the emerging area of algorithms with predictions, we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the $T$ periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of $O(\log N)$ and $O(\log T)$ if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon > 0$.
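In notation introduced here for clarity (not verbatim from the abstract), with $v_{i,t}$ denoting agent $i$'s value for the good of period $t$, the predicted monopolist utility and the competitive ratio of an algorithm $\mathcal{A}$ (under one common convention, with $x^{*}$ the offline optimum and $x^{\mathcal{A}}$ the algorithm's allocation) are

\[
V_i \;=\; \sum_{t=1}^{T} v_{i,t},
\qquad
\mathrm{CR}(\mathcal{A}) \;=\; \sup_{\text{instances}} \frac{\mathrm{NSW}(x^{*})}{\mathrm{NSW}(x^{\mathcal{A}})}.
\]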
Approximating the optimal social welfare while preserving truthfulness is a well-studied problem in algorithmic mechanism design. Assuming that the social welfare of a given mechanism design problem can be optimized by an integer program whose integrality gap is at most $\alpha$, Lavi and Swamy~\cite{Lavi11} propose a general approach to designing a randomized $\alpha$-approximation mechanism which is truthful in expectation. Their method is based on decomposing an optimal solution for the relaxed linear program into a convex combination of integer solutions. Unfortunately, Lavi and Swamy's decomposition technique relies heavily on the ellipsoid method, which is notorious for its poor practical performance. To overcome this problem, we present an alternative decomposition technique which yields an $\alpha(1 + \epsilon)$-approximation and only requires a quadratic number of calls to an integrality gap verifier.
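In generic notation (ours), the decomposition underlying the Lavi–Swamy approach writes the LP optimum $x^{*}$, scaled down by the integrality gap $\alpha$, as a convex combination of feasible integer solutions $x_{\ell}$:

\[
\frac{x^{*}}{\alpha} \;=\; \sum_{\ell} \lambda_{\ell}\, x_{\ell},
\qquad \lambda_{\ell} \ge 0, \quad \sum_{\ell} \lambda_{\ell} = 1.
\]

Drawing $x_{\ell}$ with probability $\lambda_{\ell}$ then gives an $\alpha$-approximation in expectation; the contribution here is to compute such a decomposition (up to a factor $1+\epsilon$) with quadratically many calls to an integrality gap verifier rather than via the ellipsoid method.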
We study a participatory budgeting problem of aggregating agents' preferences and dividing a budget over projects. A budget division solution is a probability distribution over the projects. The main purpose of our study is to compare the system optimum solution with a fair solution. We are interested in assessing the quality of fair solutions, i.e., in measuring the system efficiency loss under a fair allocation compared to the one that maximizes (egalitarian) social welfare. This indicator is called the price of fairness. We are also interested in the performance of several aggregation rules. Asymptotically tight bounds are provided for both the price of fairness and the efficiency guarantee of aggregation rules.
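Under one common convention (notation ours; the paper may normalize the indicator slightly differently, e.g. as a relative loss), the price of fairness of a fairness notion $F$ is the worst-case ratio between the optimal social welfare over all budget divisions $X(I)$ and the best welfare achievable by a fair solution:

\[
\mathrm{PoF} \;=\; \sup_{I} \; \frac{\max_{x \in X(I)} \mathrm{SW}(x)}{\max_{x \in F(I)} \mathrm{SW}(x)}.
\]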