This paper proves that there does not exist a polynomial-time algorithm for the subset sum problem. As this problem is in NP, the result implies that the class P of problems admitting polynomial-time algorithms does not equal the class NP of problems admitting nondeterministic polynomial-time algorithms.
Given a set (or multiset) S of n numbers and a target number t, the subset sum problem is to decide if there is a subset of S that sums to t. There are several methods for solving this problem, including exhaustive search, the divide-and-conquer method, and Bellman's dynamic programming method. However, none of them yields universal and lightweight code. In this paper, we present a new deterministic algorithm based on a novel data arrangement, which yields such code and returns all solutions. If n is small enough, it is efficient for usual purposes. We also present a probabilistic version with one-sided error and a greedy algorithm that generates a solution with minimized variance.
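For reference, here is a minimal Python sketch of the classical Bellman-style dynamic program mentioned above (not the new algorithm of this paper); the function name and non-negative-integer assumption are illustrative.

```python
def subset_sum_dp(S, t):
    """Bellman-style dynamic programming for Subset Sum.

    Assumes the elements of S are non-negative integers.
    Returns True iff some subset of S sums exactly to t.
    Runs in pseudo-polynomial time O(n * t).
    """
    reachable = {0}  # sums attainable using the elements processed so far
    for x in S:
        # extend every previously reachable sum by x, discarding sums > t
        reachable |= {s + x for s in reachable if s + x <= t}
    return t in reachable


# Example: 4 + 5 = 9, so this returns True.
print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))
```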
In the Subset Sum problem we are given a set of $n$ positive integers $X$ and a target $t$ and are asked whether some subset of $X$ sums to $t$. Natural parameters for this problem that have been studied in the literature are $n$ and $t$ as well as the maximum input number $\mathrm{mx}_X$ and the sum of all input numbers $\Sigma_X$. In this paper we study the dense case of Subset Sum, where all these parameters are polynomial in $n$. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time $n^{O(1)}$. Our main question is: When can dense Subset Sum be solved in near-linear time $\tilde{O}(n)$? We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters $n, t, \mathrm{mx}_X, \Sigma_X$ for which dense Subset Sum is in time $\tilde{O}(n)$. For notational convenience we assume without loss of generality that $t \ge \mathrm{mx}_X$ (as larger numbers can be ignored) and $t \le \Sigma_X/2$ (using symmetry). Then our dichotomy reads as follows:
- By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP91], we show that Subset Sum is in near-linear time $\tilde{O}(n)$ if $t \gg \mathrm{mx}_X \Sigma_X / n^2$.
- We prove a matching conditional lower bound: If Subset Sum is in near-linear time for any setting with $t \ll \mathrm{mx}_X \Sigma_X / n^2$, then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail.
We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
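As a point of reference for the dense regime, the following Python sketch shows a standard pseudo-polynomial bitset dynamic program, which uses roughly $O(n t / w)$ word operations and is therefore polynomial whenever $t = n^{O(1)}$; this is not the near-linear-time algorithm of the paper, and the names are illustrative.

```python
def subset_sum_bitset(X, t):
    """Standard pseudo-polynomial Subset Sum via a bitset of reachable sums.

    Bit i of `reach` is set iff some subset of the processed numbers sums to i.
    A Python int serves as an arbitrary-length bitset, so each update is one
    shift and one OR over t+1 bits.
    """
    reach = 1                      # only the empty sum 0 is reachable
    mask = (1 << (t + 1)) - 1      # keep only sums in the range [0, t]
    for x in X:
        reach = (reach | (reach << x)) & mask
    return bool((reach >> t) & 1)


# Example: 12 + 5 = 17, so this returns True.
print(subset_sum_bitset([3, 34, 4, 12, 5, 2], 17))
```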
The subset sum problem is a typical NP-complete problem that is hard to solve efficiently due to its intrinsic superpolynomial scaling. Increasing the problem size results in a vast amount of time consumption on conventionally available computers. Photons possess the unique features of extremely high propagation speed, weak interaction with the environment, and a low detectable energy level, and are therefore a promising candidate for meeting this challenge by constructing a photonic computer. However, most optical computing schemes, such as Fourier transformation, require very high operation precision and are hard to scale up. Here, we present a chip-built-in photonic computer to efficiently solve the subset sum problem. We successfully map the problem into a three-dimensional waveguide network using the femtosecond laser direct writing technique. We show that the photons sufficiently dissipate into the network and search all possible paths for solutions in parallel. In the case of successive primes, the proposed approach exhibits a dominant advantage in time consumption even compared with supercomputers. Our results confirm the ability of light to realize a complicated computational function that is intractable with conventional computers, and suggest the subset sum problem as a good benchmarking platform for the race between photonic and conventional computers on the way towards photonic supremacy.
We construct a function on almost-complex Riemannian manifolds. Non-vanishing of this function for an almost-complex structure implies that the almost-complex structure is not integrable. The constructed function is therefore an obstruction to the existence of a complex structure inducing the almost-complex structure. It is a function, not a tensor, so it is easier to work with.
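For context (this is the classical tensorial obstruction, not the construction of this paper): by the Newlander-Nirenberg theorem, an almost-complex structure $J$ is integrable if and only if its Nijenhuis tensor vanishes,
\[
  N_J(X, Y) \;=\; [JX, JY] \;-\; J[JX, Y] \;-\; J[X, JY] \;-\; [X, Y],
\]
and the function constructed here is offered as an alternative to working with this tensor.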
The input of the Test Cover problem consists of a set $V$ of vertices and a collection $\mathcal{E}=\{E_1,\dots,E_m\}$ of distinct subsets of $V$, called tests. A test $E_q$ separates a pair $v_i, v_j$ of vertices if $|\{v_i,v_j\}\cap E_q|=1$. A subcollection $\mathcal{T}\subseteq\mathcal{E}$ is a test cover if each pair $v_i, v_j$ of distinct vertices is separated by a test in $\mathcal{T}$. The objective is to find a test cover of minimum cardinality, if one exists. This problem is NP-hard. We consider two parameterizations of the Test Cover problem with parameter $k$: (a) decide whether there is a test cover with at most $k$ tests, (b) decide whether there is a test cover with at most $|V|-k$ tests. Both parameterizations are known to be fixed-parameter tractable. We prove that neither admits a polynomial-size kernel unless $\mathrm{NP}\subseteq \mathrm{coNP/poly}$. Our proofs use the cross-composition method recently introduced by Bodlaender et al. (2011) and the parametric duality introduced by Chen et al. (2005). The result for parameterization (a) was an open problem (private communications with Henning Fernau and Jiong Guo, Jan.-Feb. 2012). We also show that parameterization (a) admits a polynomial-size kernel if the size of each test is upper-bounded by a constant.
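To make the separation condition concrete, here is a small Python sketch that checks whether a given subcollection is a test cover; the function names are illustrative and not taken from the paper.

```python
from itertools import combinations

def separates(test, u, v):
    """A test separates u and v iff it contains exactly one of them."""
    return (u in test) != (v in test)

def is_test_cover(V, T):
    """Return True iff every pair of distinct vertices in V is separated
    by at least one test in the subcollection T."""
    return all(any(separates(E, u, v) for E in T)
               for u, v in combinations(V, 2))


# Example: {1} separates the pairs (1,2) and (1,3), and {2} separates (2,3).
print(is_test_cover({1, 2, 3}, [{1}, {2}]))  # True
```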