
On Near-Linear-Time Algorithms for Dense Subset Sum

Added by Karl Bringmann
Publication date: 2020
Language: English





In the Subset Sum problem we are given a set of $n$ positive integers $X$ and a target $t$, and are asked whether some subset of $X$ sums to $t$. Natural parameters for this problem that have been studied in the literature are $n$ and $t$, as well as the maximum input number $\mathrm{mx}_X$ and the sum of all input numbers $\Sigma_X$. In this paper we study the dense case of Subset Sum, where all these parameters are polynomial in $n$. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time $n^{O(1)}$. Our main question is: When can dense Subset Sum be solved in near-linear time $\tilde{O}(n)$?

We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters $n, t, \mathrm{mx}_X, \Sigma_X$ for which dense Subset Sum is in time $\tilde{O}(n)$. For notational convenience we assume without loss of generality that $t \ge \mathrm{mx}_X$ (as larger numbers can be ignored) and $t \le \Sigma_X/2$ (using symmetry). Then our dichotomy reads as follows:

- By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP'91], we show that Subset Sum is in near-linear time $\tilde{O}(n)$ if $t \gg \mathrm{mx}_X \Sigma_X / n^2$.
- We prove a matching conditional lower bound: if Subset Sum is in near-linear time for any setting with $t \ll \mathrm{mx}_X \Sigma_X / n^2$, then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail.

We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
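For context, here is a minimal sketch of the standard pseudo-polynomial dynamic program mentioned above, which decides Subset Sum in $O(nt)$ bit operations (a Python bitmask stands in for the usual boolean DP table). This is only the baseline algorithm, not the paper's additive-combinatorics-based near-linear-time algorithm.

```python
def subset_sum(X, t):
    """Decide whether some subset of X sums to t (standard pseudo-polynomial DP).

    Bit i of `reachable` is set iff some subset of the processed prefix sums to i.
    """
    reachable = 1                       # only the empty sum 0 is reachable initially
    mask = (1 << (t + 1)) - 1           # keep only sums in the range 0..t
    for x in X:
        reachable |= (reachable << x) & mask
    return bool((reachable >> t) & 1)


# Example: a subset of {3, 34, 4, 12, 5, 2} sums to 9 (namely 4 + 5).
print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```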



Related research

In the classical Subset Sum problem we are given a set $X$ and a target $t$, and the task is to decide whether there exists a subset of $X$ which sums to $t$. A recent line of research has resulted in $\tilde{O}(t)$-time algorithms, which are (near-)optimal under popular complexity-theoretic assumptions. On the other hand, the standard dynamic programming algorithm runs in time $O(n \cdot |\mathcal{S}(X,t)|)$, where $\mathcal{S}(X,t)$ is the set of all subset sums of $X$ that are smaller than $t$. Furthermore, all known pseudo-polynomial algorithms actually solve a stronger task, since they compute the whole set $\mathcal{S}(X,t)$. As the aforementioned two running times are incomparable, in this paper we ask whether one can achieve the best of both worlds: running time $\tilde{O}(|\mathcal{S}(X,t)|)$. In particular, we ask whether $\mathcal{S}(X,t)$ can be computed in near-linear time in the output size. Using a diverse toolkit containing techniques such as color coding, sparse recovery, and sumset estimates, we make considerable progress towards this question and design an algorithm running in time $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$. Central to our approach is the study of top-$k$-convolution, a natural problem of independent interest: given sparse polynomials with non-negative coefficients, compute the lowest $k$ non-zero monomials of their product. We design an algorithm running in time $\tilde{O}(k^{4/3})$, by a combination of sparse convolution and sumset estimates considered in Additive Combinatorics. Moreover, we provide evidence that going beyond some of the barriers we have faced requires either an algorithmic breakthrough or possibly new techniques from Additive Combinatorics on how to pass from information on restricted sumsets to information on unrestricted sumsets.
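For reference, a small sketch of the textbook dynamic program alluded to above, which computes the whole set $\mathcal{S}(X,t)$ in $O(n \cdot |\mathcal{S}(X,t)|)$ set operations (the sketch uses an "at most $t$" cutoff; the boundary convention does not matter for the idea). The paper's output-sensitive $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$ algorithm and the top-$k$-convolution routine are substantially more involved and are not reproduced here.

```python
def all_subset_sums(X, t):
    """Textbook DP: return the set S(X, t) of all subset sums of X not exceeding t.

    Each of the n rounds adds x to every sum found so far, so the total work is
    O(n * |S(X, t)|) set operations.
    """
    sums = {0}                                   # the empty subset
    for x in X:
        sums |= {s + x for s in sums if s + x <= t}
    return sums


# Example: X = {1, 2, 7}, t = 8  ->  {0, 1, 2, 3, 7, 8}
print(sorted(all_subset_sums([1, 2, 7], 8)))
```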
Zhengjun Cao, Lihua Liu (2018)
Given a set (or multiset) S of n numbers and a target number t, the subset sum problem is to decide if there is a subset of S that sums up to t. There are several methods for solving this problem, including exhaustive search, the divide-and-conquer method, and Bellman's dynamic programming method. However, none of them yields universal and lightweight code. In this paper, we present a new deterministic algorithm based on a novel data arrangement, which can generate such code and return all solutions. If n is small enough, it is efficient for usual purposes. We also present a probabilistic version with one-sided error and a greedy algorithm which can generate a solution with minimized variance.
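As a baseline for comparison only, here is a brute-force sketch of the "exhaustive search" option mentioned above, returning all solutions; it is exponential in n and is not the paper's data-arrangement algorithm.

```python
from itertools import combinations


def all_solutions(S, t):
    """Exhaustive search: return every subset of S (as a tuple) that sums to t.

    Enumerates all 2^n subsets, so this is only practical for small n.
    """
    return [combo
            for r in range(len(S) + 1)
            for combo in combinations(S, r)
            if sum(combo) == t]


# Example: the subsets of {1, 2, 3, 4, 5} summing to 5 are {5}, {1, 4}, {2, 3}.
print(all_solutions([1, 2, 3, 4, 5], 5))   # [(5,), (1, 4), (2, 3)]
```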
A skew-symmetric graph $(D=(V,A),\sigma)$ is a directed graph $D$ together with an involution $\sigma$ on its set of vertices and arcs. In this paper, we introduce a separation problem, $d$-Skew-Symmetric Multicut, where we are given a skew-symmetric graph $D$, a family $\mathcal{T}$ of $d$-sized subsets of vertices, and an integer $k$. The objective is to decide if there is a set $X \subseteq A$ of $k$ arcs such that every set $J$ in the family has a vertex $v$ such that $v$ and $\sigma(v)$ are in different connected components of the graph $(V, A \setminus (X \cup \sigma(X)))$. In this paper, we give an algorithm for this problem which runs in time $O((4d)^{k}(m+n+\ell))$, where $m$ is the number of arcs in the graph, $n$ the number of vertices, and $\ell$ the length of the family given in the input. Using our algorithm, we show that Almost 2-SAT has an algorithm with running time $O(4^k k^4 \ell)$, and we obtain algorithms for Odd Cycle Transversal and Edge Bipartization which run in time $O(4^k k^4 (m+n))$ and $O(4^k k^5 (m+n))$ respectively. This resolves an open problem posed by Reed, Smith and Vetta [Operations Research Letters, 2003] and improves upon the earlier almost-linear-time algorithm of Kawarabayashi and Reed [SODA, 2010]. We also show that Deletion q-Horn Backdoor Set Detection is a special case of 3-Skew-Symmetric Multicut, giving us an algorithm for Deletion q-Horn Backdoor Set Detection which runs in time $O(12^k k^5 \ell)$. This gives the first fixed-parameter tractable algorithm for this problem, answering a question posed in a paper by a superset of the authors [STACS, 2013]. Using this result, we get an algorithm for Satisfiability which runs in time $O(12^k k^5 \ell)$, where $k$ is the size of the smallest q-Horn deletion backdoor set and $\ell$ is the length of the input formula.
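To make the problem statement concrete, here is a hypothetical verifier for a candidate arc set $X$ (not the paper's $O((4d)^k(m+n+\ell))$ algorithm). It assumes $\sigma$ is supplied both on vertices and on arcs, and it reads "different connected components" as components of the underlying undirected graph after deleting $X \cup \sigma(X)$.

```python
def is_skew_symmetric_multicut(vertices, arcs, sigma_v, sigma_a, T, X):
    """Check a candidate solution X of d-Skew-Symmetric Multicut.

    vertices: iterable of vertices; arcs: iterable of (u, v) pairs.
    sigma_v / sigma_a: the involution on vertices / arcs, given as dicts.
    T: family of d-sized vertex subsets; X: candidate set of arcs to delete.
    Returns True iff, after deleting X and sigma(X), every J in T contains a
    vertex v lying in a different component than sigma_v[v].
    """
    deleted = set(X) | {sigma_a[a] for a in X}

    # Union-find over the vertices, joining endpoints of surviving arcs.
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v

    for (u, w) in arcs:
        if (u, w) not in deleted:
            parent[find(u)] = find(w)

    return all(any(find(v) != find(sigma_v[v]) for v in J) for J in T)
```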
We consider the problem of transforming a set of elements into another by a sequence of elementary edit operations, namely substitutions, removals and insertions of elements. Each possible edit operation is penalized by a non-negative cost, and the cost of a transformation is measured by summing the costs of its operations. A solution to this problem consists in defining a transformation having a minimal cost among all possible transformations. To compute such a solution, the classical approach consists in representing removal and insertion operations by augmenting the two sets so that they get the same size. This makes it possible to express the problem as a linear sum assignment problem (LSAP), which thus finds an optimal bijection (or permutation, perfect matching) between the two augmented sets. While the LSAP is known to be solvable in polynomial time, for instance with the Hungarian algorithm, unnecessary time and memory are spent on the elements which have been added to the initial sets. In this report, we show that the problem can be formalized as an extension of the LSAP which considers only one additional element in each set to represent removal and insertion operations. A solution to the problem is then no longer represented as a bijection between the two augmented sets. We show that the considered problem is a binary linear program (BLP) very close to the LSAP. While it can be solved by any BLP solver, we propose an adaptation of the Hungarian algorithm which improves the time and memory complexities previously obtained by the approach based on the LSAP. The importance of the improvement increases as the size of the two sets and their absolute size difference increase. Based on the analysis presented in this report, other classical algorithms can be adapted as well.
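Below is a sketch of the classical LSAP formulation described above (the baseline this report improves upon, not the report's reduced formulation), using SciPy's Hungarian-style solver. The cost functions c_sub, c_del, c_ins are hypothetical parameters standing in for whatever edit costs are given.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def set_edit_cost(A, B, c_sub, c_del, c_ins):
    """Classical LSAP baseline: augment both sets so that removals and insertions
    become assignments to 'epsilon' elements, then solve the assignment problem."""
    n, m = len(A), len(B)
    BIG = 1e9  # forbids off-diagonal removal/insertion cells; must dominate real costs
    C = np.full((n + m, n + m), BIG)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i, j] = c_sub(a, b)                 # substitution block
    for i, a in enumerate(A):
        C[i, m + i] = c_del(a)                    # removal of A[i]
    for j, b in enumerate(B):
        C[n + j, j] = c_ins(b)                    # insertion of B[j]
    C[n:, m:] = 0.0                               # epsilon-to-epsilon, zero cost
    rows, cols = linear_sum_assignment(C)         # Hungarian-type solver
    return C[rows, cols].sum()


# Illustrative example with |difference|-based substitution costs:
cost = set_edit_cost([1, 5, 9], [2, 9],
                     c_sub=lambda a, b: abs(a - b),
                     c_del=lambda a: 3.0,
                     c_ins=lambda b: 3.0)
print(cost)   # 4.0: substitute 1 -> 2 (cost 1), keep 9 (cost 0), remove 5 (cost 3)
```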
We show that Nederlof's algorithm [Information Processing Letters, 118 (2017), 15-16] for constructing a proof that the number of subsets summing to a particular integer equals a claimed quantity is flawed, because: 1) it does not maintain consistency; 2) the proposed recurrence formula is incorrect.
