
A Refined Analysis of Submodular Greedy

Posted by Ariel Kulik
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Many algorithms for maximizing a monotone submodular function subject to a knapsack constraint rely on the natural greedy heuristic. We present a novel refined analysis of this greedy heuristic which enables us to: $(1)$ reduce the enumeration in the tight $(1-e^{-1})$-approximation of [Sviridenko 04] from subsets of size three to two; $(2)$ present an improved upper bound of $0.42945$ for the classic algorithm which returns the better of a single element and the output of the greedy heuristic.
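As context for the analysis, here is a minimal Python sketch of the two routines the abstract refers to: the natural density greedy under a knapsack constraint, and the classic combination that returns the better of the greedy output and the best single element. The callable interface for f and the names (greedy_knapsack, greedy_or_singleton, cost, budget) are illustrative assumptions, not from the paper.

```python
# A minimal sketch, assuming f is a monotone submodular set function given
# as a Python callable on sets, and cost is a dict of positive weights.

def greedy_knapsack(f, cost, universe, budget):
    """Density greedy: repeatedly add the feasible element whose
    marginal-gain-to-cost ratio is largest."""
    chosen, spent = set(), 0.0
    remaining = set(universe)
    while True:
        base = f(chosen)
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element no longer fits in the knapsack
            ratio = (f(chosen | {e}) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:          # no feasible element improves f
            return chosen
        chosen.add(best)
        spent += cost[best]
        remaining.remove(best)

def greedy_or_singleton(f, cost, universe, budget):
    """The classic combination the abstract's bound concerns: the better
    of the greedy output and the best feasible single element."""
    greedy = greedy_knapsack(f, cost, universe, budget)
    singles = [{e} for e in universe if cost[e] <= budget]
    return max(singles + [greedy], key=f)
```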




Read also

We study the recently introduced idea of worst-case sensitivity for monotone submodular maximization with cardinality constraint $k$, which captures the degree to which the output argument changes on deletion of an element in the input. We find that, for large classes of algorithms, non-trivial sensitivity of $o(k)$ is not possible, even with bounded curvature, and that these results also hold in the distributed framework. However, we also show that in the regime $k = \Omega(n)$ we can obtain $O(1)$ sensitivity for sufficiently low curvature.
This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and online paging and caching problems.
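For concreteness, a minimal sketch of the covering idea in the special case this abstract names, Vertex Cover with D = 2: while some covering constraint (an edge) is unsatisfied, satisfy it by raising all of its at most D variables, i.e. taking both endpoints. With unit costs this is the textbook 2-approximation; the function name and edge-list representation are illustrative, not from the paper.

```python
def greedy_vertex_cover(edges):
    cover = set()
    for u, v in edges:                 # each edge is one covering constraint
        if u not in cover and v not in cover:
            cover.update((u, v))       # satisfy it by taking both endpoints
    return cover

# A path 1-2-3-4: the returned cover is at most D = 2 times the optimum.
print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}
```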
The greedy strategy is an approximation algorithm to solve optimization problems arising in decision making with multiple actions. How good is the greedy strategy compared to the optimal solution? In this survey, we mainly consider two classes of optimization problems where the objective function is submodular. The first is set submodular optimization, which is to choose a set of actions to optimize a set submodular objective function, and the second is string submodular optimization, which is to choose an ordered set of actions to optimize a string submodular function. Our emphasis here is on performance bounds for the greedy strategy in submodular optimization problems. Specifically, we review performance bounds for the greedy strategy, more general and improved bounds in terms of curvature, performance bounds for the batched greedy strategy, and performance bounds for Nash equilibria.
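The first class this survey covers is the canonical set-submodular greedy: pick k elements one at a time, each maximizing the marginal gain, which gives the classic $(1-e^{-1})$ guarantee for monotone submodular objectives under a cardinality constraint. A minimal sketch follows; the toy coverage objective is only an illustrative submodular function, not from the survey.

```python
def greedy_cardinality(f, universe, k):
    chosen = set()
    for _ in range(k):
        base = f(chosen)
        chosen.add(max(universe - chosen, key=lambda e: f(chosen | {e}) - base))
    return chosen

# Toy coverage function: f(S) = number of items covered by the sets in S.
collection = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
coverage = lambda S: len(set().union(*(collection[i] for i in S))) if S else 0
print(greedy_cardinality(coverage, set(collection), k=2))  # {1, 3}
```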
Shahar Dobzinski, Ami Mor (2015)
The problem of maximizing a non-negative submodular function was introduced by Feige, Mirrokni, and Vondrak [FOCS07], who provided a deterministic local-search based algorithm that guarantees an approximation ratio of $\frac{1}{3}$, as well as a randomized $\frac{2}{5}$-approximation algorithm. An extensive line of research followed and various algorithms with improving approximation ratios were developed, all of them randomized. Finally, Buchbinder et al. [FOCS12] presented a randomized $\frac{1}{2}$-approximation algorithm, which is the best possible. This paper gives the first deterministic algorithm for maximizing a non-negative submodular function that achieves an approximation ratio better than $\frac{1}{3}$. The approximation ratio of our algorithm is $\frac{2}{5}$. Our algorithm is based on recursive composition of solutions obtained by the local search algorithm of Feige et al. We show that the $\frac{2}{5}$ approximation ratio can be guaranteed when the recursion depth is $2$, and leave open the question of whether the approximation ratio improves as the recursion depth increases.
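A minimal sketch of the Feige et al. local search that the recursive composition builds on: starting from a best singleton, apply single-element insertions and deletions while they improve f, then return the better of the local optimum and its complement. The polynomial-time version only accepts improvements above a (1 + eps/n^2) factor; that tolerance, and this paper's depth-2 recursion, are omitted from the sketch.

```python
def local_search(f, universe):
    current = {max(universe, key=lambda e: f({e}))}   # best singleton start
    improved = True
    while improved:
        improved = False
        for e in universe:
            # single-element deletion if e is in the set, insertion otherwise
            candidate = current - {e} if e in current else current | {e}
            if f(candidate) > f(current):
                current, improved = candidate, True
    complement = universe - current
    return current if f(current) >= f(complement) else complement
```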
In this paper we study the fundamental problems of maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e. when the submodular function is also coordinate-wise concave along all coordinates, we provide a different $\frac{1}{2}$-approximation algorithm that runs in quasilinear time. Both of these results improve upon prior work [Bian et al. 2017; Soma and Yoshida 2017]. Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications.
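To make the DR-submodularity notion concrete: a function on the hypercube is DR-submodular when the gain from increasing a coordinate only shrinks as the point grows coordinate-wise. A small numeric illustration; the quadratic f and the sample points are toy assumptions, not from the paper.

```python
import numpy as np

def coordinate_gain(f, point, i, delta):
    """Marginal gain of raising coordinate i by delta at the given point."""
    step = np.zeros_like(point)
    step[i] = delta
    return f(point + step) - f(point)

# f(x) = a.x - x'Hx with entrywise nonnegative H is DR-submodular on [0,1]^n.
H = np.array([[0.2, 0.3], [0.3, 0.1]])
f = lambda x: x @ np.ones(2) - x @ H @ x

x, y = np.array([0.1, 0.2]), np.array([0.5, 0.6])   # x <= y coordinate-wise
assert coordinate_gain(f, x, 0, 0.1) >= coordinate_gain(f, y, 0, 0.1)
```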