
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization

 Added by Brian Axelrod
 Publication date 2019
Language: English





In this paper we provide improved running times and oracle complexities for approximately minimizing a submodular function. Our main result is a randomized algorithm, which given any submodular function defined on $n$ elements with range $[-1, 1]$, computes an $\epsilon$-additive approximate minimizer in $\tilde{O}(n/\epsilon^2)$ oracle evaluations with high probability. This improves over the $\tilde{O}(n^{5/3}/\epsilon^2)$ oracle evaluation algorithm of Chakrabarty et al.~(STOC 2017) and the $\tilde{O}(n^{3/2}/\epsilon^2)$ oracle evaluation algorithm of Hamoudi et al. Further, we leverage a generalization of this result to obtain efficient algorithms for minimizing a broad class of nonconvex functions. For any function $f$ with domain $[0, 1]^n$ that satisfies $\frac{\partial^2 f}{\partial x_i \partial x_j} \le 0$ for all $i \neq j$ and is $L$-Lipschitz with respect to the $L^\infty$-norm, we give an algorithm that computes an $\epsilon$-additive approximate minimizer with $\tilde{O}(n \cdot \mathrm{poly}(L/\epsilon))$ function evaluations with high probability.
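The abstract states the result in the evaluation-oracle model but includes no pseudocode. For context, here is a minimal, hedged Python sketch of the standard Lovász-extension machinery that this line of work builds on: the extension value and a subgradient at a point $x \in [0,1]^n$ follow from $n+1$ oracle calls via the greedy (sorting) construction. The oracle interface and `example_cut_function` below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovász extension of a set function f at x in [0,1]^n and
    return a subgradient, via Edmonds' greedy construction: sort coordinates
    by decreasing x and telescope f over the nested prefix sets.
    Costs n + 1 calls to the evaluation oracle f."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(-x)                 # coordinates in decreasing order of x
    g = np.zeros(n)                        # greedy subgradient at x
    prefix = set()
    prev = f(prefix)                       # f(empty set); 0 for a normalized f
    value = prev                           # constant offset f(empty set)
    for i in order:
        i = int(i)
        prefix = prefix | {i}
        cur = f(prefix)
        g[i] = cur - prev                  # marginal gain of adding element i
        value += g[i] * x[i]
        prev = cur
    return value, g

if __name__ == "__main__":
    # Illustrative submodular oracle: the normalized cut function of a 4-cycle,
    # so f(empty) = f(V) = 0 and the range lies inside [-1, 1].
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    def example_cut_function(S):
        return sum(1 for u, v in edges if (u in S) != (v in S)) / len(edges)

    val, g = lovasz_extension(example_cut_function, [0.9, 0.1, 0.4, 0.7])
    print(val, g)
```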



Related research

Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, FOCS 2015] run in $O(n^{2}\log nM \cdot \textrm{EO} + n^{3}\log^{O(1)}nM)$ time and $O(n^{3}\log^{2}n \cdot \textrm{EO} + n^{4}\log^{O(1)}n)$ time respectively, where $M$ is the largest absolute value of the function (assuming the range is integers) and $\textrm{EO}$ is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only $\Omega(n)$, the current shortest non-deterministic proof certifying the optimum value of a function requires $\Theta(n^{2})$ function evaluations. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in $O(nM^{3}\log n \cdot \textrm{EO})$ time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in $[-1,1]$, we give an algorithm which in $\tilde{O}(n^{5/3} \cdot \textrm{EO}/\varepsilon^{2})$ time returns an $\varepsilon$-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the $n^{2}$ bound, as we show that algorithms which access only subgradients of the Lovász extension, and these include the theoretically best algorithms mentioned above, must make $\Omega(n)$ subgradient calls (even for functions whose range is $\{-1,0,1\}$).
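The abstract above names the method family: projected stochastic subgradient descent on the Lovász extension, followed by rounding to a set. The Python sketch below shows that outer loop under simplifying assumptions: it uses exact (rather than stochastic, sublinear-time) subgradients and illustrative placeholder values for the step size and iteration count, so it is not the paper's tuned algorithm.

```python
import numpy as np

def greedy_subgradient(f, x):
    """Edmonds' greedy subgradient of the Lovász extension at x (n + 1 oracle calls)."""
    order = np.argsort(-x)
    g = np.zeros(len(x))
    prefix, prev = set(), f(set())
    for i in order:
        i = int(i)
        prefix = prefix | {i}
        cur = f(prefix)
        g[i] = cur - prev
        prev = cur
    return g

def approx_sfm(f, n, num_iters=1000, step=0.01):
    """Projected subgradient descent on the Lovász extension over the box [0,1]^n,
    then rounding the average iterate to its best threshold set.  The iteration
    count and step size are illustrative placeholders; the papers' analyses set
    them to obtain the stated epsilon-dependence, and their speedups come from
    making each subgradient step cheaper than the naive n + 1 oracle calls here."""
    x = np.full(n, 0.5)
    avg = np.zeros(n)
    for _ in range(num_iters):
        g = greedy_subgradient(f, x)
        x = np.clip(x - step * g, 0.0, 1.0)   # projection onto the box [0,1]^n
        avg += x / num_iters
    # The Lovász extension value at avg is a convex combination of f over the
    # threshold sets of avg (including the empty set), so the best threshold
    # set is at least as good as the fractional point.
    best_set, best_val = set(), f(set())
    for t in np.unique(avg):
        S = {i for i in range(n) if avg[i] >= t}
        v = f(S)
        if v < best_val:
            best_set, best_val = S, v
    return best_set, best_val
```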
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization, in both the standard and parametric settings. We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle. When each function in the decomposition acts on $O(1)$ elements of the ground set $V$ and is polynomially bounded, our running time is up to polylogarithmic factors equal to that of solving maximum flow in a sparse graph with $O(\vert V \vert)$ vertices and polynomial integral capacities. We achieve this by providing a simple iterative method which can optimize to high precision any convex function defined on the submodular base polytope, provided we can efficiently minimize it on the base polytope corresponding to the cut function of a certain graph that we construct. We solve this minimization problem by lifting the solutions of a parametric cut problem, which we obtain via a new efficient combinatorial reduction to maximum flow. This reduction is of independent interest and implies some previously unknown bounds for the parametric minimum $s,t$-cut problem in multiple settings.
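The paper's reduction handles general decomposable objectives and the parametric setting; as a hedged illustration of the underlying connection to maximum flow only, the sketch below covers the classical special case where every component acts on at most two elements (a pairwise submodular table), which collapses to a single minimum $s$-$t$ cut. The function names and the networkx-based interface are assumptions for illustration, not the paper's construction.

```python
import networkx as nx

def minimize_pairwise_submodular(n, unary, pairwise):
    """Minimize  sum_i u_i(x_i) + sum_{(i,j)} p_ij(x_i, x_j)  over x in {0,1}^n
    when every pairwise table is submodular (p01 + p10 >= p00 + p11), via one
    minimum s-t cut / maximum flow computation.

    unary:    dict  i -> (cost if x_i = 0, cost if x_i = 1)
    pairwise: dict (i, j) -> [[p00, p01], [p10, p11]]
    Convention: x_i = 0 <=> node i ends on the source side of the cut."""
    G = nx.DiGraph()
    s, t = "s", "t"
    G.add_nodes_from([s, t] + list(range(n)))
    const = 0.0

    def add_cap(u, v, c):
        # Accumulate nonnegative capacity on edge (u, v).
        if c <= 0:
            return
        if G.has_edge(u, v):
            G[u][v]["capacity"] += c
        else:
            G.add_edge(u, v, capacity=c)

    def add_unary(i, cost0, cost1):
        nonlocal const
        m = min(cost0, cost1)
        const += m
        add_cap(s, i, cost1 - m)   # edge s->i is cut exactly when x_i = 1
        add_cap(i, t, cost0 - m)   # edge i->t is cut exactly when x_i = 0

    for i, (c0, c1) in unary.items():
        add_unary(i, c0, c1)

    for (i, j), ((p00, p01), (p10, p11)) in pairwise.items():
        # Standard decomposition: p(xi, xj) = p00 + (p10 - p00) xi
        #   + (p11 - p10) xj + (p01 + p10 - p00 - p11) (1 - xi) xj
        const += p00
        add_unary(i, 0.0, p10 - p00)
        add_unary(j, 0.0, p11 - p10)
        add_cap(i, j, p01 + p10 - p00 - p11)   # coefficient >= 0 by submodularity

    cut_value, (source_side, _) = nx.minimum_cut(G, s, t)
    x = [0 if i in source_side else 1 for i in range(n)]
    return x, cut_value + const

# Example: two coupled binary variables with an attractive (submodular) coupling.
# x, val = minimize_pairwise_submodular(
#     2, unary={0: (0.0, 1.0), 1: (2.0, 0.0)},
#     pairwise={(0, 1): [[0.0, 1.5], [1.5, 0.0]]})
```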
In this paper we study the fundamental problems of maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e. when the submodular function is also coordinate-wise concave along all coordinates, we provide a different $\frac{1}{2}$-approximation algorithm that runs in quasilinear time. Both of these results improve upon prior work [Bian et al., 2017; Soma and Yoshida, 2017]. Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications.
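As a loose illustration of the second idea only, the sketch below shows how coordinate-wise concavity reduces each single-coordinate update to a binary search on the sign of a (finite-difference) derivative. This is a toy coordinate-ascent routine with hypothetical helper names; it is not the paper's algorithm and it carries no $\frac{1}{2}$-approximation guarantee.

```python
import numpy as np

def maximize_along_coordinate(f, x, i, iters=30, h=1e-6):
    """1-D maximization of f over x_i in [0,1] with the other coordinates fixed,
    assuming f is concave along coordinate i: the partial derivative is
    non-increasing in x_i, so binary search on its sign locates the maximizer."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        x[i] = mid
        x_plus = x.copy()
        x_plus[i] = min(1.0, mid + h)
        deriv = (f(x_plus) - f(x)) / (x_plus[i] - mid)   # finite-difference slope
        if deriv > 0:
            lo = mid          # still increasing: maximizer lies to the right
        else:
            hi = mid          # decreasing: maximizer lies to the left
    x[i] = (lo + hi) / 2
    return x

def coordinate_ascent(f, n, sweeps=3):
    """Toy coordinate ascent over [0,1]^n; only illustrates how coordinate-wise
    concavity turns each coordinate update into a binary search."""
    x = np.full(n, 0.5)
    for _ in range(sweeps):
        for i in range(n):
            x = maximize_along_coordinate(f, x, i)
    return x
```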
We consider submodular function minimization in the oracle model: given black-box access to a submodular set function $f:2^{[n]} \rightarrow \mathbb{R}$, find an element of $\arg\min_S \{f(S)\}$ using as few queries to $f(\cdot)$ as possible. State-of-the-art algorithms succeed with $\tilde{O}(n^2)$ queries [LeeSW15], yet the best-known lower bound has never been improved beyond $n$ [Harvey08]. We provide a query lower bound of $2n$ for submodular function minimization, a $3n/2-2$ query lower bound for the non-trivial minimizer of a symmetric submodular function, and a $\binom{n}{2}$ query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our $3n/2-2$ lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a $3n/2-2$ cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than $n+1$ for $s$-$t$ mincut, even in a directed, weighted graph.
It has been observed independently by many researchers that the isolating cut lemma of Li and Panigrahi [FOCS 2020] can be easily extended to obtain new algorithms for finding the non-trivial minimizer of a symmetric submodular function and solving the hypergraph minimum cut problem. This note contains these observations.