
Nash Social Welfare, Matrix Permanent, and Stable Polynomials

Added by Mohit Singh
Publication date: 2016
Language: English





We study the problem of allocating $m$ items to $n$ agents subject to maximizing the Nash social welfare (NSW) objective. We write a novel convex programming relaxation for this problem, and we show that a simple randomized rounding algorithm gives a $1/e$ approximation factor of the objective. Our main technical contribution is an extension of Gurvits's lower bound on the coefficient of the square-free monomial of a degree $m$-homogeneous stable polynomial on $m$ variables to all homogeneous polynomials. We use this extension to analyze the expected welfare of the allocation returned by our randomized rounding algorithm.
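As an illustration of the rounding step described above, here is a minimal Python sketch (not the paper's exact relaxation or analysis): given a fractional allocation whose columns sum to one, each item is independently assigned to a single agent with probability equal to its fractional share, and the NSW of the resulting integral allocation is estimated by sampling. The valuations, the fractional solution, and all identifiers (`nsw`, `round_allocation`) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, assuming additive valuations v[i, j] and a fractional
# allocation x[i, j] with each column summing to 1; identifiers are ours,
# not the paper's.
import numpy as np

def nsw(v, allocation):
    """Nash social welfare: geometric mean of agents' utilities."""
    utilities = (v * allocation).sum(axis=1)
    return float(np.prod(utilities) ** (1.0 / v.shape[0]))

def round_allocation(x, rng):
    """Independently give item j to agent i with probability x[i, j]."""
    n, m = x.shape
    allocation = np.zeros((n, m))
    for j in range(m):
        i = rng.choice(n, p=x[:, j] / x[:, j].sum())
        allocation[i, j] = 1.0
    return allocation

rng = np.random.default_rng(0)
n, m = 3, 8
v = rng.random((n, m))                       # illustrative additive valuations
x = rng.random((n, m)); x /= x.sum(axis=0)   # an illustrative fractional allocation
samples = [nsw(v, round_allocation(x, rng)) for _ in range(2000)]
print("estimated expected NSW of the rounded allocation:", np.mean(samples))
```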




Read More

We consider the problem of approximating maximum Nash social welfare (NSW) while allocating a set of indivisible items to $n$ agents. The NSW is a popular objective that provides a balanced tradeoff between the often conflicting requirements of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For the symmetric additive case of the problem, where all agents have the same weight and additive valuations, the first constant-factor approximation algorithm was obtained in 2015. This led to a flurry of work obtaining constant-factor approximation algorithms for the symmetric case under mild generalizations of additive valuations, and $O(n)$-approximation algorithms for more general valuations and for the asymmetric case. In this paper, we make significant progress towards both symmetric and asymmetric NSW problems. We present the first constant-factor approximation algorithm for the symmetric case under Rado valuations. Rado valuations form a general class of valuation functions that arise from maximum cost independent matching problems, including as special cases assignment (OXS) valuations and weighted matroid rank functions. Furthermore, our approach also gives the first constant-factor approximation algorithm for the asymmetric case under Rado valuations, provided that the maximum ratio between the weights is bounded by a constant.
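For concreteness, the following is a minimal sketch of an assignment (OXS) valuation, one of the special cases of Rado valuations mentioned above: an agent's value for a bundle of items is the maximum weight of a matching between the items in the bundle and a set of slots. The weight matrix and function names are illustrative assumptions; the paper's algorithm is not reproduced.

```python
# Minimal sketch of an assignment (OXS) valuation: the value of a bundle is
# the maximum weight matching between its items and the agent's slots.
# Weights below are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

def oxs_value(weights, bundle):
    """weights[j, s]: value of placing item j in slot s; bundle: item indices."""
    if not bundle:
        return 0.0
    sub = weights[list(bundle), :]                     # restrict to the bundle
    rows, cols = linear_sum_assignment(sub, maximize=True)
    return float(sub[rows, cols].sum())

weights = np.array([[4.0, 1.0],
                    [3.0, 3.0],
                    [0.0, 5.0]])                       # 3 items, 2 slots
print(oxs_value(weights, [0, 1, 2]))                   # best matching: 4 + 5 = 9.0
```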
Recently, Cole and Gkatzelis gave the first constant-factor approximation algorithm for the problem of allocating indivisible items to agents, under additive valuations, so as to maximize the Nash social welfare. We give constant-factor algorithms for a substantial generalization of their problem -- to the case of separable, piecewise-linear concave utility functions. We give two such algorithms, the first using market equilibria and the second using the theory of stable polynomials. In AGT, there is a paucity of methods for the design of mechanisms for the allocation of indivisible goods, and the result of Cole and Gkatzelis seemed to be a major step towards filling this gap. Our result can be seen as another step in this direction.
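To make the generalized valuation class concrete, here is a minimal sketch of a separable, piecewise-linear concave utility, under the simplifying assumption that each good comes in identical copies and the marginal value of successive copies is non-increasing; the data and names are illustrative and not taken from the paper.

```python
# Minimal sketch of a separable, piecewise-linear concave (SPLC) utility,
# assuming identical copies of each good and non-increasing marginal values
# per copy; the numbers are illustrative.
def splc_utility(marginals, bundle):
    """marginals[g]: non-increasing marginal values for copies of good g.
    bundle[g]: number of copies of good g received by the agent."""
    return sum(sum(marginals[g][:k]) for g, k in bundle.items())

marginals = {"bread": [5.0, 3.0, 1.0], "milk": [2.0, 2.0]}
print(splc_utility(marginals, {"bread": 2, "milk": 1}))  # 5 + 3 + 2 = 10.0
```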
We consider the problem of allocating a set of divisible goods to $N$ agents in an online manner, aiming to maximize the Nash social welfare, a widely studied objective which provides a balance between fairness and efficiency. The goods arrive in a sequence of $T$ periods and the value of each agent for a good is adversarially chosen when the good arrives. We first observe that no online algorithm can achieve a competitive ratio better than the trivial $O(N)$, unless it is given additional information about the agents' values. Then, in line with the emerging area of algorithms with predictions, we consider a setting where for each agent, the online algorithm is only given a prediction of her monopolist utility, i.e., her utility if all goods were given to her alone (corresponding to the sum of her values over the $T$ periods). Our main result is an online algorithm whose competitive ratio is parameterized by the multiplicative errors in these predictions. The algorithm achieves a competitive ratio of $O(\log N)$ and $O(\log T)$ if the predictions are perfectly accurate. Moreover, the competitive ratio degrades smoothly with the errors in the predictions, and is surprisingly robust: the logarithmic competitive ratio holds even if the predictions are very inaccurate. We complement this positive result by showing that our bounds are essentially tight: no online algorithm, even if provided with perfectly accurate predictions, can achieve a competitive ratio of $O(\log^{1-\epsilon} N)$ or $O(\log^{1-\epsilon} T)$ for any constant $\epsilon>0$.
Dehua Cheng, Yu Cheng, Yan Liu, 2015
We consider a fundamental algorithmic question in spectral graph theory: Compute a spectral sparsifier of the random-walk matrix polynomial $$L_\alpha(G)=D-\sum_{r=1}^{d}\alpha_r D(D^{-1}A)^r$$ where $A$ is the adjacency matrix of a weighted, undirected graph, $D$ is the diagonal matrix of weighted degrees, and $\alpha=(\alpha_1,\ldots,\alpha_d)$ are nonnegative coefficients with $\sum_{r=1}^{d}\alpha_r=1$. Recall that $D^{-1}A$ is the transition matrix of random walks on the graph. The sparsification of $L_\alpha(G)$ appears to be algorithmically challenging as the matrix power $(D^{-1}A)^r$ is defined by all paths of length $r$, whose precise calculation would be prohibitively expensive. In this paper, we develop the first nearly linear time algorithm for this sparsification problem: For any $G$ with $n$ vertices and $m$ edges, $d$ coefficients $\alpha$, and $\epsilon > 0$, our algorithm runs in time $O(d^2 m \log^2 n/\epsilon^{2})$ to construct a Laplacian matrix $\tilde{L}=D-\tilde{A}$ with $O(n\log n/\epsilon^{2})$ non-zeros such that $\tilde{L}\approx_{\epsilon}L_\alpha(G)$. Matrix polynomials arise in mathematical analysis of matrix functions as well as numerical solutions of matrix equations. Our work is particularly motivated by the algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square-root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq 1$) in a time-reversible Markov model. The key algorithmic step for both applications is the construction of a spectral sparsifier of constant-degree random-walk matrix polynomials introduced by Newton's method. Our algorithm can also be used to build efficient data structures for effective resistances for multi-step time-reversible Markov models, and we anticipate that it could be useful for other tasks in network analysis.
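To clarify the object being sparsified, the following sketch builds the random-walk matrix polynomial $L_\alpha(G)$ directly (dense, for a tiny graph) and checks two basic Laplacian-like properties; it does not implement the nearly linear time sparsification algorithm. The graph and the coefficients $\alpha$ are illustrative.

```python
# Minimal sketch: build L_alpha(G) = D - sum_r alpha_r * D (D^{-1} A)^r for a
# small weighted graph and check that it behaves like a Laplacian; the graph
# and alpha below are illustrative, and this is not the sparsification algorithm.
import numpy as np

A = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 3],
              [0, 1, 3, 0]], dtype=float)    # weighted adjacency matrix
D = np.diag(A.sum(axis=1))                   # diagonal matrix of weighted degrees
W = np.linalg.solve(D, A)                    # random-walk transition matrix D^{-1} A
alpha = [0.5, 0.3, 0.2]                      # nonnegative coefficients summing to 1

L_alpha = D - sum(a * D @ np.linalg.matrix_power(W, r + 1)
                  for r, a in enumerate(alpha))

print(np.allclose(L_alpha.sum(axis=1), 0.0))          # row sums vanish, as for a Laplacian
print(np.min(np.linalg.eigvalsh(L_alpha)) >= -1e-9)   # symmetric positive semidefinite
```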
Dorna Abdolazimi, Kuikui Liu, 2021
We show that the natural Glauber dynamics mixes rapidly and generates a random proper edge-coloring of a graph with maximum degree $\Delta$ whenever the number of colors is at least $q\geq (\frac{10}{3} + \epsilon)\Delta$, where $\epsilon>0$ is arbitrary and the maximum degree satisfies $\Delta \geq C$ for a constant $C = C(\epsilon)$ depending only on $\epsilon$. For edge-colorings, this improves upon prior work [Vig99, CDMPP19], which shows rapid mixing when $q\geq (\frac{11}{3}-\epsilon_0)\Delta$, where $\epsilon_0 \approx 10^{-5}$ is a small fixed constant. At the heart of our proof, we establish a matrix trickle-down theorem, generalizing Oppenheim's influential result, as a new technique to prove that a high-dimensional simplicial complex is a local spectral expander.
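As a concrete illustration of the dynamics being analyzed, here is a minimal sketch of Glauber dynamics for proper edge colorings: at each step a uniformly random edge is recolored with a color drawn uniformly from those not used by any incident edge. For simplicity the chain starts from an empty partial coloring; the graph, number of colors, and step count are illustrative, and no mixing-time guarantee is claimed.

```python
# Minimal sketch of Glauber dynamics for proper edge colorings, assuming an
# illustrative graph and q colors; starts from an empty partial coloring and
# makes no claim about mixing time.
import random

def glauber_edge_coloring(edges, q, steps, seed=0):
    rng = random.Random(seed)
    coloring = {}                          # edge -> color (partial at first)
    for _ in range(steps):
        e = rng.choice(edges)
        u, v = e
        # Colors currently used on edges sharing an endpoint with e.
        blocked = {coloring[f] for f in edges
                   if f != e and f in coloring and (u in f or v in f)}
        available = [c for c in range(q) if c not in blocked]
        if available:                      # nonempty whenever q > 2*(max degree - 1)
            coloring[e] = rng.choice(available)
    return coloring

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small graph, maximum degree 3
print(glauber_edge_coloring(edges, q=11, steps=10_000))
```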
