The Exact Satisfiability problem, XSAT, asks for a satisfying assignment to a CNF formula in which exactly one literal in each clause is set to 1 and all other literals in that clause are set to 0. If each clause is restricted to at most 3 literals, the problem is known as X3SAT. In this paper, we consider #X3SAT, the problem of counting the number of such satisfying assignments for X3SAT instances. The current state-of-the-art exact algorithm for #X3SAT is due to Dahllof, Jonsson and Beigel and runs in $O(1.1487^n)$ time, where $n$ is the number of variables in the formula. We propose an exact algorithm for #X3SAT that runs in $O(1.1120^n)$ time with very few branching cases to consider, using a result by Monien and Preis on the bisection width of graphs with maximum degree 3.
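For concreteness, the following is a minimal brute-force sketch of the quantity being counted (not the paper's branching algorithm); the clause and variable encodings are illustrative assumptions.

```python
from itertools import product

def count_x3sat(num_vars, clauses):
    """Count assignments with exactly one true literal per clause.

    clauses: tuples of non-zero ints; literal v means variable v is true,
    -v means it is false (variables numbered 1..num_vars).
    Exponential brute force, for illustration only.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        def lit_true(lit):
            val = bits[abs(lit) - 1]
            return val if lit > 0 else not val
        if all(sum(lit_true(l) for l in clause) == 1 for clause in clauses):
            count += 1
    return count

# (x1 v x2 v x3) and (~x1 v x2 v x4), each satisfied by exactly one literal
print(count_x3sat(4, [(1, 2, 3), (-1, 2, 4)]))
```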
X3SAT is the problem of deciding whether a given set of clauses with at most three literals each can be satisfied so that in every clause exactly one literal is true and the others are false. A related question is to determine the maximum Hamming distance between two solutions of the instance. Dahllof provided an algorithm for Maximum Hamming Distance XSAT, which is a more general problem than its X3SAT restriction, with a runtime of $O(1.8348^n)$; Fu, Zhou and Yin considered Maximum Hamming Distance X3SAT and gave an algorithm for it with runtime $O(1.6760^n)$. In this paper, we propose an $O(1.3298^n)$-time algorithm for the Maximum Hamming Distance X3SAT problem; the algorithm in fact counts, for each $k$, the number of pairs of solutions at Hamming distance $k$.
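As an illustrative sketch only (not the proposed branching algorithm), the assumed helper below enumerates the X3SAT solutions of a small instance by brute force and tabulates, for each $k$, the number of solution pairs at Hamming distance $k$; the maximum Hamming distance is the largest key of the result.

```python
from itertools import product, combinations

def hamming_distance_profile(num_vars, clauses):
    """Return {k: number of solution pairs at Hamming distance k}.

    clauses: tuples of non-zero ints; literal v / -v means variable v
    is true / false. Exponential enumeration, for illustration only.
    """
    def exactly_one(bits, clause):
        return sum((bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1])
                   for l in clause) == 1

    solutions = [bits for bits in product([False, True], repeat=num_vars)
                 if all(exactly_one(bits, c) for c in clauses)]
    profile = {}
    for a, b in combinations(solutions, 2):
        k = sum(x != y for x, y in zip(a, b))
        profile[k] = profile.get(k, 0) + 1
    return profile

print(hamming_distance_profile(4, [(1, 2, 3), (-1, 2, 4)]))
```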
Motivated by recent Linear Programming solvers, we design dynamic data structures for maintaining the inverse of an $n \times n$ real matrix under $\textit{low-rank}$ updates, with polynomially faster amortized running time. Our data structure is based on a recursive application of the Woodbury-Morrison identity for implementing $\textit{cascading}$ low-rank updates, combined with recent sketching technology. Our techniques and amortized analysis of multi-level partial updates may be of broader interest to dynamic matrix problems. This data structure leads to the fastest known LP solver for general (dense) linear programs, improving the running time of the recent algorithms of (Cohen et al. '19, Lee et al. '19, Brand '20) from $O^*(n^{2+\max\{\frac{1}{6}, \omega-2, \frac{1-\alpha}{2}\}})$ to $O^*(n^{2+\max\{\frac{1}{18}, \omega-2, \frac{1-\alpha}{2}\}})$, where $\omega$ and $\alpha$ are the fast matrix multiplication exponent and its dual. Hence, under the common belief that $\omega \approx 2$ and $\alpha \approx 1$, our LP solver runs in $O^*(n^{2.055})$ time instead of $O^*(n^{2.16})$.
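As background for the low-rank update primitive (a textbook sketch, not the paper's recursive cascading data structure or its sketching machinery), the Woodbury identity updates a maintained inverse after a rank-$k$ change $A \to A + UV^\top$ in $O(n^2 k + k^3)$ arithmetic instead of $O(n^3)$ for a fresh inversion:

```python
import numpy as np

def woodbury_update(A_inv, U, V):
    """Given A_inv = A^{-1}, return (A + U V^T)^{-1} via the Woodbury identity.

    A_inv: (n, n); U, V: (n, k) with k << n.
    (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I_k + V^T A^{-1} U)^{-1} V^T A^{-1}
    """
    AiU = A_inv @ U                        # n x k
    S = np.eye(U.shape[1]) + V.T @ AiU     # k x k capacitance matrix
    return A_inv - AiU @ np.linalg.solve(S, V.T @ A_inv)

# sanity check on a random well-conditioned instance
rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
assert np.allclose(woodbury_update(np.linalg.inv(A), U, V),
                   np.linalg.inv(A + U @ V.T))
```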
Ground state counting plays an important role in several applications in science and engineering, from estimating residual entropy in physical systems to bounding engineering reliability and solving combinatorial counting problems. While quantum algorithms such as adiabatic quantum optimization (AQO) and the quantum approximate optimization algorithm (QAOA) can minimize Hamiltonians, they are inadequate for counting ground states. We modify AQO and QAOA to count the ground states of arbitrary classical spin Hamiltonians, including counting ground states with arbitrary nonnegative weights attached to them. As a concrete example, we show how our method can be used to count the weighted fraction of edge covers on graphs, with user-specified confidence on the relative error of the weighted count, in the asymptotic limit of large graphs. We find the asymptotic computational time complexity of our algorithms, via analytical predictions for AQO and numerical calculations for QAOA, and compare with the classical optimal Monte Carlo algorithm (OMCS), as well as a modified Grover's algorithm. We show that for large problem instances with small weights on the ground states, AQO does not have a quantum speedup over OMCS for a fixed error and confidence, but QAOA has a sub-quadratic speedup on a broad class of numerically simulated problems. Our work is an important step in approaching general ground-state counting problems beyond those that can be solved with Grover's algorithm. It offers algorithms that can employ noisy intermediate-scale quantum devices for solving ground state counting problems on small instances, which can help in identifying more problem classes with quantum speedups.
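To make the quantity being estimated concrete, here is a classical brute-force sketch of weighted ground-state counting for a small Ising-type Hamiltonian; it is purely illustrative and is not the quantum (AQO/QAOA) procedure described above, and the coupling/weight encoding is an assumption.

```python
from itertools import product

def count_ground_states(n_spins, couplings, weights=None):
    """Enumerate spin configurations of H(s) = -sum_{(i,j)} J_ij s_i s_j and
    return (ground-state energy, weighted count of ground states).

    couplings: dict {(i, j): J_ij}; weights: optional dict mapping a
    configuration (tuple of +/-1) to a nonnegative weight (default 1).
    Exponential enumeration, for illustration only.
    """
    best_energy, weighted_count = None, 0.0
    for s in product([-1, +1], repeat=n_spins):
        e = -sum(J * s[i] * s[j] for (i, j), J in couplings.items())
        w = 1.0 if weights is None else weights.get(s, 0.0)
        if best_energy is None or e < best_energy - 1e-12:
            best_energy, weighted_count = e, w
        elif abs(e - best_energy) <= 1e-12:
            weighted_count += w
    return best_energy, weighted_count

# ferromagnetic triangle: the two fully aligned configurations are ground states
print(count_ground_states(3, {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}))
```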
Semidefinite programs (SDPs) are a fundamental class of optimization problems with important recent applications in approximation algorithms, quantum complexity, robust learning, algorithmic rounding, and adversarial deep learning. This paper presents a faster interior point method to solve generic SDPs with variable size $n \times n$ and $m$ constraints in time \begin{align*} \widetilde{O}(\sqrt{n}( mn^2 + m^\omega + n^\omega) \log(1 / \epsilon) ), \end{align*} where $\omega$ is the exponent of matrix multiplication and $\epsilon$ is the relative accuracy. In the predominant case of $m \geq n$, our runtime outperforms that of the previous fastest SDP solver, which is based on the cutting plane method of Jiang, Lee, Song, and Wong [JLSW20]. Our algorithm's runtime can be naturally interpreted as follows: $\widetilde{O}(\sqrt{n} \log (1/\epsilon))$ is the number of iterations needed for our interior point method, $mn^2$ is the input size, and $m^\omega + n^\omega$ is the time to invert the Hessian and slack matrix in each iteration. These constitute natural barriers to further improving the runtime of interior point methods for solving generic SDPs.
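For reference, a generic SDP of the kind referred to above, in the standard primal form with an $n \times n$ positive semidefinite variable and $m$ linear trace constraints, can be written as
\begin{align*}
\min_{X \in \mathbb{R}^{n \times n}} \quad & \langle C, X \rangle \\
\text{s.t.} \quad & \langle A_i, X \rangle = b_i, \quad i = 1, \dots, m, \\
& X \succeq 0,
\end{align*}
where $\langle A, B \rangle = \mathrm{tr}(A^\top B)$; the $m$ constraint matrices of size $n \times n$ account for the $mn^2$ input-size term in the per-iteration cost above.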
Much of the data we collect today is in tabular form, with rows as records and columns as attributes associated with each record. Understanding the structural relationships in tabular data can greatly facilitate the data science process. Traditionally, much of this relational information is stored in table schemas and maintained by their creators, usually domain experts. In this paper, we develop automated methods to uncover deep relationships in a single data table without expert or domain knowledge. Our method can decompose a data table into layers of smaller tables, revealing its deep structure. The key to our approach is a computationally lightweight forward-addition algorithm, which we developed to recursively extract the functional dependencies between table columns and which scales to tables with many columns. With our solution, data scientists are provided with automatically generated, data-driven insights when exploring new data sets.
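As a minimal sketch of the underlying primitive (an assumed helper, not the paper's forward-addition algorithm), the following checks whether a set of columns functionally determines another column in a pandas DataFrame:

```python
import pandas as pd

def functionally_determines(df, determinants, dependent):
    """True if every distinct combination of `determinants` values maps to at
    most one value of `dependent`, i.e. determinants -> dependent holds."""
    groups = df.groupby(list(determinants))[dependent]
    return bool((groups.nunique(dropna=False) <= 1).all())

df = pd.DataFrame({
    "zip":  ["10001", "10001", "94105", "94105"],
    "city": ["New York", "New York", "San Francisco", "San Francisco"],
    "name": ["Ann", "Bob", "Cat", "Dan"],
})
print(functionally_determines(df, ["zip"], "city"))   # True: zip -> city
print(functionally_determines(df, ["city"], "name"))  # False
```

A forward-addition-style search could call such a check repeatedly while growing candidate determinant sets column by column.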