
On maximizing a monotone $k$-submodular function under a knapsack constraint

Posted by: Chenhao Wang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We study the problem of maximizing a monotone $k$-submodular function $f$ under a knapsack constraint, where a $k$-submodular function is a natural generalization of a submodular function to $k$ dimensions. We present a deterministic $(\frac{1}{2}-\frac{1}{2e})$-approximation algorithm that evaluates $f$ $O(n^5 k^4)$ times.
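To make the setting concrete, the following minimal Python sketch shows the problem interface together with a plain density-greedy baseline. It is not the paper's $(\frac{1}{2}-\frac{1}{2e})$-approximation algorithm; the coverage-style toy objective and all names are illustrative assumptions.

```python
# A minimal sketch of monotone k-submodular maximization under a knapsack
# constraint. This plain density-greedy baseline is NOT the paper's
# (1/2 - 1/(2e))-approximation algorithm; it only illustrates the problem
# interface. The toy objective and all names are hypothetical.

def greedy_k_submodular_knapsack(n, k, f, weights, budget):
    """Greedily assign items 0..n-1 to one of k dimensions by marginal gain
    per unit weight, while the remaining knapsack budget allows it."""
    assignment = {}          # item -> dimension in {0, ..., k-1}
    spent = 0.0
    while True:
        base = f(assignment)
        best = None          # (gain / weight, item, dimension)
        for item in range(n):
            if item in assignment or spent + weights[item] > budget:
                continue
            for dim in range(k):
                gain = f({**assignment, item: dim}) - base
                score = gain / weights[item]
                if best is None or score > best[0]:
                    best = (score, item, dim)
        if best is None or best[0] <= 0:
            break
        _, item, dim = best
        assignment[item] = dim
        spent += weights[item]
    return assignment


if __name__ == "__main__":
    # Toy monotone k-submodular-style objective: items cover ground elements,
    # and the value is the number of distinct elements covered in each
    # dimension, summed over the k dimensions.
    covers = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]

    def f(assignment):
        per_dim = {}
        for item, dim in assignment.items():
            per_dim.setdefault(dim, set()).update(covers[item])
        return sum(len(s) for s in per_dim.values())

    print(greedy_k_submodular_knapsack(n=4, k=2, f=f,
                                       weights=[1, 1, 2, 2], budget=3))
```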




Read also

Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern day applications can render existing algorithms prohibitively slow, while frequently, those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a $9$-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
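The abstract above refers to a simple randomized greedy procedure; as a rough point of reference only, here is a minimal sketch of the classical deterministic baseline for submodular knapsack (density greedy compared against the best single element). It is not the paper's $5.83$-approximation algorithm, it is not designed for non-monotone objectives, and the toy objective and all names are assumptions.

```python
# Sketch of a classical baseline for submodular maximization under a
# knapsack constraint: a density greedy, safeguarded by the best single
# feasible element. NOT the randomized 5.83-approximation from the abstract.

def density_greedy_knapsack(elements, f, cost, budget):
    """Return the better of (a) greedy by marginal gain per unit cost and
    (b) the single feasible element of largest value."""
    chosen, spent = set(), 0.0
    candidates = set(elements)
    while True:
        base = f(chosen)
        feasible = [e for e in candidates if spent + cost[e] <= budget]
        if not feasible:
            break
        e = max(feasible, key=lambda e: (f(chosen | {e}) - base) / cost[e])
        if f(chosen | {e}) - base <= 0:
            break
        chosen.add(e)
        spent += cost[e]
        candidates.remove(e)
    singles = [e for e in elements if cost[e] <= budget]
    best_single = max(singles, key=lambda e: f({e}), default=None)
    if best_single is not None and f({best_single}) > f(chosen):
        return {best_single}
    return chosen


if __name__ == "__main__":
    # Toy coverage objective (hypothetical): value = distinct elements covered.
    covers = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 2, 3, 4}}
    cost = {"a": 1, "b": 1, "c": 1, "d": 3}
    f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
    print(density_greedy_knapsack(covers, f, cost, budget=3))
```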
Alina Ene, Huy L. Nguyen (2017)
We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly optimal $(1 - 1/e - \epsilon)$-approximation, using $(1/\epsilon)^{O(1/\epsilon^4)}\, n \log^2 n$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints, but it leads to $\Omega(n^2)$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
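For context on the bottleneck mentioned above: the multilinear extension $F(x)$ is the expected value of $f$ on a random set $R(x)$ that contains each element $i$ independently with probability $x_i$, and it is typically estimated by sampling. The sketch below (illustrative names, not code from the paper) shows why a single evaluation of $F$ already costs many oracle calls to $f$.

```python
import random

# Sketch: Monte Carlo estimate of the multilinear extension F(x) of a set
# function f, where R(x) includes element i independently with prob. x[i].
# Each estimate needs `samples` oracle calls to f, which is the running-time
# bottleneck discussed in the abstract. Names are illustrative assumptions.

def multilinear_extension(f, x, samples=1000, rng=random):
    total = 0.0
    for _ in range(samples):
        subset = {i for i, p in enumerate(x) if rng.random() < p}
        total += f(subset)
    return total / samples


if __name__ == "__main__":
    # Toy coverage function over ground set {0, 1, 2} (hypothetical).
    covers = [{0, 1}, {1, 2}, {0, 2}]
    f = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0
    print(multilinear_extension(f, x=[0.5, 0.5, 0.0], samples=2000))
```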
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2 n)$. Besides the above improvement in adaptivity, this is also the first \emph{combinatorial} approach with sublinear adaptive complexity for the problem, and it yields algorithms comparable to the state of the art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms' applicability on real-world datasets.
Shahar Dobzinski, Ami Mor (2015)
The problem of maximizing a non-negative submodular function was introduced by Feige, Mirrokni, and Vondrak [FOCS07], who provided a deterministic local-search based algorithm that guarantees an approximation ratio of $\frac{1}{3}$, as well as a randomized $\frac{2}{5}$-approximation algorithm. An extensive line of research followed and various algorithms with improving approximation ratios were developed, all of them randomized. Finally, Buchbinder et al. [FOCS12] presented a randomized $\frac{1}{2}$-approximation algorithm, which is the best possible. This paper gives the first deterministic algorithm for maximizing a non-negative submodular function that achieves an approximation ratio better than $\frac{1}{3}$. The approximation ratio of our algorithm is $\frac{2}{5}$. Our algorithm is based on recursive composition of solutions obtained by the local search algorithm of Feige et al. We show that the $\frac{2}{5}$ approximation ratio can be guaranteed when the recursion depth is $2$, and leave open the question of whether the approximation ratio improves as the recursion depth increases.
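As a rough illustration of the local-search idea credited above to Feige, Mirrokni, and Vondrak, here is a simplified sketch: repeatedly add or remove a single element while that strictly improves the value, then return the better of the solution and its complement. It omits the $(1+\epsilon/n^2)$ improvement threshold used in the formal analysis, and all names and the toy cut function are assumptions.

```python
# Simplified sketch of local search for unconstrained non-negative submodular
# maximization: single-element add/remove moves while f strictly improves,
# then return the better of S and its complement. The polynomial-time version
# requires improvements by a (1 + eps/n^2) factor; names are illustrative.

def local_search_unconstrained(elements, f):
    elements = set(elements)
    # Start from the best singleton.
    current = {max(elements, key=lambda e: f({e}))}
    improved = True
    while improved:
        improved = False
        for e in elements:
            candidate = current - {e} if e in current else current | {e}
            if f(candidate) > f(current):
                current, improved = candidate, True
    complement = elements - current
    return current if f(current) >= f(complement) else complement


if __name__ == "__main__":
    # Toy non-monotone submodular function: cut function of a small graph.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    def cut(S):
        return sum(1 for u, v in edges if (u in S) != (v in S))
    print(local_search_unconstrained(range(4), cut))
```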
A $k$-submodular function is a function that, given $k$ disjoint subsets, outputs a value that is submodular in every orthant. In this paper, we provide a new framework for $k$-submodular maximization problems, by relaxing the optimization to the continuous space with the multilinear extension of $k$-submodular functions and a variant of pipage rounding that recovers the discrete solution. The multilinear extension introduces new techniques to analyze and optimize $k$-submodular functions. When the function is monotone, we propose almost $\frac{1}{2}$-approximation algorithms for unconstrained maximization and maximization under total size and knapsack constraints. For unconstrained monotone and non-monotone maximization, we propose an algorithm that is almost as good as any combinatorial algorithm based on Iwata, Tanigawa, and Yoshida's meta-framework ($\frac{k}{2k-1}$-approximation for the monotone case and $\frac{k^2+1}{2k^2+1}$-approximation for the non-monotone case).
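As a rough illustration of the continuous relaxation described above (the pipage-style rounding step is omitted), the multilinear extension of a $k$-submodular function can be estimated by sampling an independent random assignment in which item $i$ is placed in dimension $d$ with probability $x_{i,d}$ and left unassigned otherwise. The sketch below uses illustrative names and a toy objective; it is not code from the paper.

```python
import random

# Sketch: sampling estimate of the multilinear extension of a k-submodular
# function f. x is an n-by-k matrix of probabilities whose rows sum to at
# most 1; item i is assigned to dimension d with probability x[i][d] and is
# left unassigned otherwise. Rounding back to a discrete solution is omitted.

def k_multilinear_extension(f, x, samples=1000, rng=random):
    total = 0.0
    for _ in range(samples):
        assignment = {}
        for i, row in enumerate(x):
            r, acc = rng.random(), 0.0
            for d, p in enumerate(row):
                acc += p
                if r < acc:
                    assignment[i] = d
                    break
        total += f(assignment)
    return total / samples


if __name__ == "__main__":
    # Toy objective (hypothetical): distinct elements covered per dimension.
    covers = [{0, 1}, {1, 2}, {2, 3}]
    def f(assignment):
        per_dim = {}
        for item, dim in assignment.items():
            per_dim.setdefault(dim, set()).update(covers[item])
        return sum(len(s) for s in per_dim.values())
    x = [[0.5, 0.5], [1.0, 0.0], [0.3, 0.3]]
    print(k_multilinear_extension(f, x, samples=2000))
```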