
A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs

Added by Neal E. Young
Publication date: 2013
Research language: English





We give an approximation algorithm for packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with $n$ non-zeros, $r$ rows, and $c$ columns, the algorithm computes feasible primal and dual solutions whose costs are within a factor of $1+\epsilon$ of the optimal cost in time $O((r+c)\log(n)/\epsilon^2 + n)$.
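The core mechanism behind algorithms of this kind is a primal-dual multiplicative-weights update. For intuition, here is a minimal Python sketch in the classic Garg-Konemann style; it is deliberately simple and is not the paper's $O((r+c)\log(n)/\epsilon^2 + n)$ algorithm, which uses much more careful bookkeeping to avoid rescanning the matrix. The function name and dense-matrix representation are illustrative choices.

```python
import numpy as np

def packing_lp_approx(A, b, c, eps=0.1):
    """Garg-Konemann-style sketch for the packing LP
    max c^T x  s.t.  A x <= b, x >= 0  (numpy arrays, all entries >= 0,
    no all-zero column). Returns a feasible x whose value is within
    roughly a (1 - O(eps)) factor of optimal."""
    r = A.shape[0]
    delta = (1 + eps) * ((1 + eps) * r) ** (-1.0 / eps)
    y = delta / b                      # one multiplicative dual weight per row
    x = np.zeros(A.shape[1])
    while b @ y < 1.0:                 # stop once total dual weight reaches 1
        cost = A.T @ y                 # weighted cost of each column
        ratio = np.divide(c, cost, out=np.zeros_like(c, dtype=float),
                          where=cost > 0)
        j = int(np.argmax(ratio))      # best profit-to-cost column
        rows = A[:, j] > 0
        step = np.min(b[rows] / A[rows, j])       # saturate the tightest row
        x[j] += step
        y[rows] *= 1 + eps * step * A[rows, j] / b[rows]
    # the raw x overshoots; scaling by log_{1+eps}((1+eps)/delta) restores
    # feasibility while losing only a (1 - O(eps)) factor in the objective
    return x / (np.log((1 + eps) / delta) / np.log(1 + eps))
```

Each iteration buys the column with the best profit-to-weighted-cost ratio and multiplicatively penalizes the rows it loads; the speedups in this line of work come from performing such updates without rescanning the whole matrix in every iteration.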




Read More

212 - Neal E. Young 2014
We describe the first nearly linear-time approximation algorithms for explicitly given mixed packing/covering linear programs, and for (non-metric) fractional facility location. We also describe the first parallel algorithms requiring only near-linear total work and finishing in polylog time. The algorithms compute $(1+\epsilon)$-approximate solutions in time (and work) $O^*(N/\epsilon^2)$, where $N$ is the number of non-zeros in the constraint matrix. For facility location, $N$ is the number of eligible client/facility pairs.
In this paper we provide an $\tilde{O}(nd+d^{3})$-time randomized algorithm for solving linear programs with $d$ variables and $n$ constraints with high probability. To obtain this result we provide a robust, primal-dual $\tilde{O}(\sqrt{d})$-iteration interior point method inspired by the methods of Lee and Sidford (2014, 2019) and show how to efficiently implement this method using new data structures based on heavy hitters, the Johnson-Lindenstrauss lemma, and inverse maintenance. Interestingly, we obtain this running time without using fast matrix multiplication; consequently, barring a major advance in linear system solving, our running time is near optimal for solving dense linear programs among algorithms that do not use fast matrix multiplication.
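Of the ingredients listed, the Johnson-Lindenstrauss lemma is the easiest to see in isolation: a random Gaussian projection to $O(\log(n)/\epsilon^2)$ dimensions preserves Euclidean norms up to a $1\pm\epsilon$ factor with high probability, which is what lets such data structures approximate expensive quantities cheaply. A minimal, self-contained illustration (not the paper's data structure; names are ours):

```python
import numpy as np

def jl_project(X, k, seed=0):
    # Random Gaussian sketch with entries N(0, 1/k): for each row x,
    # E[||x S||^2] = ||x||^2, and k = O(log(n)/eps^2) gives (1 +/- eps)
    # concentration for all n rows simultaneously (Johnson-Lindenstrauss).
    rng = np.random.default_rng(seed)
    S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ S

# e.g.: norms of 1000 random 500-dim vectors survive projection to 100 dims
X = np.random.default_rng(1).normal(size=(1000, 500))
Y = jl_project(X, k=100)
print(np.linalg.norm(Y, axis=1) / np.linalg.norm(X, axis=1))  # all close to 1
```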
96 - Alina Ene, Huy L. Nguyen 2018
In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $1 - 1/e - \epsilon$ approximation require $\Omega(n)$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $1 - 1/e - \epsilon$ approximation using $O(\ln{n} / \epsilon^2)$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $O(n\, \mathrm{poly}(\log{n}, 1/\epsilon))$.
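To make the adaptivity measure concrete, here is the classic greedy baseline in Python, annotated with where the rounds come from: the batch of marginal-gain queries inside one iteration can run in parallel (one round), but the iterations themselves are sequential, giving up to $\Omega(n)$ rounds. This is the baseline the paper's $O(\ln{n}/\epsilon^2)$-round algorithm improves on, not the paper's algorithm itself.

```python
def greedy(f, ground, k):
    # Baseline greedy for max f(S) s.t. |S| <= k, with f a monotone
    # submodular set-function oracle and `ground` a set of elements.
    S = set()
    for _ in range(k):                # each loop iteration = 1 adaptive round
        base = f(S)
        gains = {e: f(S | {e}) - base for e in ground - S}  # parallel batch
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# toy oracle: coverage functions are monotone submodular
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(greedy(f, set(cover), k=2))     # a size-2 set chosen in 2 rounds
```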
122 - Alina Ene, Huy L. Nguyen 2017
We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal $1 - 1/e - \epsilon$ approximation, using $(1/\epsilon)^{O(1/\epsilon^4)} n \log^2{n}$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints, but it leads to $\Omega(n^2)$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
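The bottleneck being referenced is concrete: the multilinear extension $F(x) = \mathbb{E}[f(R(x))]$, where $R(x)$ contains each element $e$ independently with probability $x_e$, has no closed form for a general oracle $f$, so each evaluation is a Monte Carlo estimate costing many oracle calls. A minimal sketch (the sample count is an illustrative placeholder; accurate estimates need $\mathrm{poly}(n, 1/\epsilon)$ samples):

```python
import random

def multilinear_estimate(f, x, samples=1000, rng=random):
    # Monte Carlo estimate of F(x) = E[f(R)], with R containing each
    # element e independently with probability x[e] (x maps elements
    # to [0, 1]). Each estimate spends `samples` oracle calls, and
    # continuous-greedy methods need estimates for ~n directions per
    # step -- the source of the Omega(n^2) running times noted above.
    total = 0.0
    for _ in range(samples):
        R = {e for e, p in x.items() if rng.random() < p}
        total += f(R)
    return total / samples
```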
97 - Hung Le, Shay Solomon 2021
Let $G = (V,E,w)$ be a weighted undirected graph on $|V| = n$ vertices and $|E| = m$ edges, let $k \ge 1$ be any integer, and let $\epsilon < 1$ be any parameter. We present the following results on fast constructions of spanners with near-optimal sparsity and lightness, which culminate a long line of work in this area. (By near-optimal we mean optimal under the Erdős girth conjecture and disregarding the $\epsilon$-dependencies.)
- There are (deterministic) algorithms for constructing $(2k-1)(1+\epsilon)$-spanners for $G$ with a near-optimal sparsity of $O(n^{1/k} \log(1/\epsilon)/\epsilon)$. The first algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n) \log(1/\epsilon)/\epsilon + \mathrm{SORT}(m))$, where $\alpha(\cdot,\cdot)$ is the two-parameter inverse-Ackermann function and $\mathrm{SORT}(m)$ is the time needed to sort $m$ integers. The second algorithm can be implemented in the word RAM model within time $O(m \log(1/\epsilon)/\epsilon)$.
- There is a (deterministic) algorithm for constructing a $(2k-1)(1+\epsilon)$-spanner for $G$ that achieves a near-optimal bound of $O(n^{1/k}\,\mathrm{poly}(1/\epsilon))$ on both sparsity and lightness. This algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n)\,\mathrm{poly}(1/\epsilon) + \mathrm{SORT}(m))$ and in the word RAM model within time $O(m\alpha(m,n)\,\mathrm{poly}(1/\epsilon))$.
The previous fastest constructions of $(2k-1)(1+\epsilon)$-spanners with near-optimal sparsity incur a runtime of $O(\min\{m(n^{1+1/k}) + n\log n, kn^{2+1/k}\})$, even regardless of the lightness. Importantly, the greedy spanner for stretch $2k-1$ has sparsity $O(n^{1/k})$ -- with no $\epsilon$-dependence whatsoever, but its runtime is $O(m(n^{1+1/k} + n\log n))$. Moreover, the state-of-the-art lightness bound of any $(2k-1)$-spanner is poor, even regardless of the sparsity and runtime.
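For reference, the greedy spanner discussed in the closing paragraph is short enough to state in full, and the code makes the runtime issue visible: one (pruned) shortest-path query per edge. A straightforward Python sketch (function names are ours):

```python
import heapq

def greedy_spanner(n, edges, k):
    # Classic greedy (2k-1)-spanner: scan edges by non-decreasing weight
    # and keep (u, v, w) only if the spanner built so far has no u-v path
    # of length <= (2k-1)*w. Keeps O(n^{1+1/k}) edges in total, but the
    # per-edge shortest-path query is what makes the construction slow.
    adj = {v: [] for v in range(n)}
    spanner = []
    for w, u, v in sorted((w, u, v) for u, v, w in edges):
        if dist_within(adj, u, v, limit=(2 * k - 1) * w) > (2 * k - 1) * w:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

def dist_within(adj, s, t, limit):
    # Dijkstra from s, pruned at `limit`; returns dist(s, t), or infinity
    # if t is unreachable within the limit.
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd <= limit and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")
```

The $O(n^{1+1/k})$ edge bound holds because the kept edges form a graph of girth above $2k$; the near-linear-time constructions above trade a $(1+\epsilon)$ factor in stretch for avoiding these per-edge shortest-path queries.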
