
Submodular Maximization with Nearly-optimal Approximation and Adaptivity in Nearly-linear Time

Added by Alina Ene
Publication date: 2018
Language: English





In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially-many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $1 - 1/e - \epsilon$ approximation require $\Omega(n)$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $1 - 1/e - \epsilon$ approximation using $O(\ln{n} / \epsilon^2)$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $O(n \, \mathrm{poly}(\log{n}, 1/\epsilon))$.
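To make the adaptivity model concrete, here is a minimal Python sketch (the `BatchOracle` name and interface are illustrative, not from the paper) of an oracle wrapper that counts both cost measures: every evaluation passed to a single `batch` call is independent of the others and could run in parallel, so each call counts as exactly one adaptive round.

```python
class BatchOracle:
    """Wrap a set function and count oracle queries and adaptive rounds.

    All sets passed to one batch() call are evaluated independently and
    could be issued in parallel, so each batch() call counts as a single
    adaptive round. Illustrative sketch of the model, not the paper's code.
    """

    def __init__(self, f):
        self.f = f          # f: frozenset -> float (monotone submodular)
        self.queries = 0    # total number of oracle evaluations
        self.rounds = 0     # number of sequential (adaptive) rounds

    def batch(self, sets):
        self.rounds += 1
        self.queries += len(sets)
        return [self.f(frozenset(s)) for s in sets]


# One adaptive round that evaluates all singletons in parallel, for a toy
# coverage function over sets of integers.
def coverage(s):
    return float(len(set().union(*s))) if s else 0.0

oracle = BatchOracle(coverage)
ground = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
print(oracle.batch([[e] for e in ground]))  # [2.0, 2.0, 2.0]
print(oracle.rounds, oracle.queries)        # 1 3
```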


Related research

Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds where polynomially-many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization. Our main result is a distributed algorithm for maximizing a monotone submodular function with cardinality constraint $k$ that achieves a $(1-1/e-\varepsilon)$-approximation in expectation. This algorithm runs in $O(\log(n))$ adaptive rounds and makes $O(n)$ calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially less than in previous works. Last, we extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques.
Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially-many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice---there are many applications of distributed submodular optimization where the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint $k$ that runs in $O(\log(n))$ adaptive rounds and makes $O(n \log(k))$ oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries.
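As a rough illustration of how such low-adaptivity algorithms are organized, the sketch below (function name `threshold_rounds` is mine) runs a descending-thresholds loop: each iteration issues one parallel batch of marginal-gain queries and then accepts, in bulk, every element that clears the current threshold. The bulk accept is precisely the step the algorithms above make safe with careful random sampling; this unguarded version only shows the round structure and is not the algorithm of either paper.

```python
def threshold_rounds(f, ground_set, k, eps=0.2):
    """Toy descending-thresholds loop for monotone submodular maximization
    under a cardinality constraint k. Each while-iteration issues one batch
    of independent marginal-gain queries (one adaptive round). Accepting a
    whole batch at once is an unguarded simplification: real low-adaptivity
    algorithms certify this step via random sampling."""
    S = set()
    d = max(f(frozenset([e])) for e in ground_set)  # one round: singletons
    tau = d
    while len(S) < k and tau >= eps * d / k:   # O(log(k/eps)/eps) iterations
        base = f(frozenset(S))
        # One adaptive round: these queries are mutually independent.
        gains = {e: f(frozenset(S | {e})) - base
                 for e in ground_set if e not in S}
        for e, g in gains.items():
            if g >= tau and len(S) < k:
                S.add(e)
        tau *= 1.0 - eps
    return S
```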
Alina Ene, Huy L. Nguyen (2018)
We consider fast algorithms for monotone submodular maximization subject to a matroid constraint. We assume that the matroid is given as input in an explicit form, and the goal is to obtain the best possible running times for important matroids. We develop a new algorithm for a \emph{general matroid constraint} with a $1 - 1/e - \epsilon$ approximation that achieves a fast running time provided we have a fast data structure for maintaining a maximum weight base in the matroid through a sequence of decrease weight operations. We construct such data structures for graphic matroids and partition matroids, and we obtain the \emph{first algorithms} for these classes of matroids that achieve a nearly-optimal, $1 - 1/e - \epsilon$ approximation, using a nearly-linear number of function evaluations and arithmetic operations.
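For intuition about the kind of data structure this abstract refers to, here is a hedged sketch for the easiest case, a partition matroid: a maximum-weight base simply takes the heaviest `caps[i]` elements of each part, so a decrease-weight operation can only swap the affected element against the best excluded element of the same part. The class name and interface are mine, and the sketch re-selects the affected part from scratch; the paper's structures perform the update far faster.

```python
class PartitionMatroidBase:
    """Maximum-weight base of a partition matroid under decrease-weight
    updates (illustrative sketch, not the paper's data structure).

    Elements are grouped into disjoint parts; a base may take at most
    caps[i] elements from part i, so the max-weight base is simply the
    caps[i] heaviest elements of each part."""

    def __init__(self, parts, caps, weights):
        self.parts = [list(p) for p in parts]
        self.caps = list(caps)
        self.w = dict(weights)   # element -> current weight

    def decrease_weight(self, e, new_w):
        assert new_w <= self.w[e], "only decrease-weight is supported"
        self.w[e] = new_w
        # A real structure would now do one heap swap inside e's part;
        # this sketch just lets base() re-select that part lazily.

    def base(self):
        B = []
        for i, p in enumerate(self.parts):
            B.extend(sorted(p, key=lambda e: -self.w[e])[: self.caps[i]])
        return B


# Usage: two parts with one slot each; decreasing 'a' swaps 'b' into the base.
pmb = PartitionMatroidBase([["a", "b"], ["c"]], [1, 1],
                           {"a": 5, "b": 3, "c": 2})
assert set(pmb.base()) == {"a", "c"}
pmb.decrease_weight("a", 1)
assert set(pmb.base()) == {"b", "c"}
```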
Alina Ene, Huy L. Nguyen (2017)
We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, $1 - 1/e - \epsilon$ approximation, using $(1/\epsilon)^{O(1/\epsilon^4)} n \log^2{n}$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to $\Omega(n^2)$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
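The bottleneck this abstract mentions is visible in the standard sampling estimator for the multilinear extension: every single evaluation of $F(x)$ averages the set function over fresh random sets, so algorithms that query $F$ at many points pay a large multiplicative overhead. A minimal sketch of the estimator only (the paper's contribution is avoiding heavy use of it; the function name is mine):

```python
import random

def multilinear_extension(f, x, samples=500):
    """Monte-Carlo estimate of F(x) = E[f(R)], where the random set R
    contains element i independently with probability x[i]. Each estimate
    costs `samples` oracle calls -- the expense the abstract refers to.
    For illustration only; `samples` must grow with the desired accuracy."""
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = frozenset(i for i in range(n) if random.random() < x[i])
        total += f(R)
    return total / samples
```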
Hung Le, Shay Solomon (2021)
Let $G = (V,E,w)$ be a weighted undirected graph on $|V| = n$ vertices and $|E| = m$ edges, let $k \ge 1$ be any integer, and let $\epsilon < 1$ be any parameter. We present the following results on fast constructions of spanners with near-optimal sparsity and lightness, which culminate a long line of work in this area. (By near-optimal we mean optimal under the Erdős girth conjecture and disregarding the $\epsilon$-dependencies.)

- There are (deterministic) algorithms for constructing $(2k-1)(1+\epsilon)$-spanners for $G$ with a near-optimal sparsity of $O(n^{1/k} \log(1/\epsilon)/\epsilon)$. The first algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n) \log(1/\epsilon)/\epsilon + \mathrm{SORT}(m))$, where $\alpha(\cdot,\cdot)$ is the two-parameter inverse-Ackermann function and $\mathrm{SORT}(m)$ is the time needed to sort $m$ integers. The second algorithm can be implemented in the Word RAM model within time $O(m \log(1/\epsilon)/\epsilon)$.
- There is a (deterministic) algorithm for constructing a $(2k-1)(1+\epsilon)$-spanner for $G$ that achieves a near-optimal bound of $O(n^{1/k}\,\mathrm{poly}(1/\epsilon))$ on both sparsity and lightness. This algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n)\,\mathrm{poly}(1/\epsilon) + \mathrm{SORT}(m))$ and in the Word RAM model within time $O(m \alpha(m,n)\,\mathrm{poly}(1/\epsilon))$.

The previous fastest constructions of $(2k-1)(1+\epsilon)$-spanners with near-optimal sparsity incur a runtime of $O(\min\{m(n^{1+1/k}) + n\log n,\, k n^{2+1/k}\})$, even regardless of the lightness. Importantly, the greedy spanner for stretch $2k-1$ has sparsity $O(n^{1/k})$ -- with no $\epsilon$-dependence whatsoever, but its runtime is $O(m(n^{1+1/k} + n\log n))$. Moreover, the state-of-the-art lightness bound of any $(2k-1)$-spanner is poor, even regardless of the sparsity and runtime.
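For reference, the greedy spanner invoked at the end of the abstract is short to state, and a direct (slow) implementation makes the stretch guarantee concrete: scan edges by nondecreasing weight and keep an edge only when the spanner built so far cannot already connect its endpoints within $(2k-1)$ times the edge's weight. A hedged Python sketch with a truncated Dijkstra per edge (this has the slow runtime the abstract mentions, not the paper's near-linear time; the function name is mine):

```python
import heapq

def greedy_spanner(n, edges, k):
    """Classic greedy (2k-1)-spanner: keep edge (u, v, w) iff the current
    spanner cannot connect u and v within distance (2k-1) * w. Vertices are
    0..n-1; edges are (u, v, weight) tuples. Illustrates the definition,
    not the fast constructions above."""
    adj = [[] for _ in range(n)]
    spanner = []

    def dist_leq(s, t, limit):
        # Dijkstra truncated at `limit`: True iff dist(s, t) <= limit.
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            if u == t:
                return True
            for v, w in adj[u]:
                nd = d + w
                if nd <= limit and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return False

    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if not dist_leq(u, v, (2 * k - 1) * w):
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner


# Example: with stretch 3 (k = 2) the heavy triangle edge is redundant.
print(greedy_spanner(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.5)], k=2))
# -> [(0, 1, 1.0), (1, 2, 1.0)]
```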