
A high-performance analog Max-SAT solver and its application to Ramsey numbers

Posted by Zoltan Toroczkai
Publication date: 2018
Language: English





We introduce a continuous-time analog solver for MaxSAT, a quintessential class of NP-hard discrete optimization problems, where the task is to find a truth assignment for a set of Boolean variables satisfying the maximum number of given logical constraints. We show that the scaling of an invariant of the solver's dynamics, the escape rate, as a function of the number of unsatisfied clauses can predict the global optimum value, often well before reaching the corresponding state. We demonstrate the performance of the solver on hard MaxSAT competition problems. We then consider the two-color Ramsey number $R(m,m)$ problem, translate it to SAT, and apply our algorithm to the still-unknown $R(5,5)$. We find edge colorings without monochromatic 5-cliques for complete graphs on up to 42 vertices, while on 43 vertices we find colorings with only two monochromatic 5-cliques, the best coloring found so far, supporting the conjecture that $R(5,5) = 43$.
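To make the Ramsey-to-SAT translation concrete: one Boolean variable is assigned to each edge of the complete graph $K_n$ (true = first color, false = second), and each potential monochromatic $m$-clique is excluded by two clauses. A minimal sketch in Python (the helper names and variable numbering are illustrative, not the paper's code):

    from itertools import combinations

    def ramsey_sat_clauses(n, m):
        """CNF clauses asserting a 2-coloring of K_n's edges with no
        monochromatic K_m. One DIMACS variable per edge; True = color 1."""
        var = {e: k + 1 for k, e in enumerate(combinations(range(n), 2))}
        clauses = []
        for clique in combinations(range(n), m):
            edges = [var[e] for e in combinations(clique, 2)]
            clauses.append([-v for v in edges])  # not all edges color 1
            clauses.append(edges)                # not all edges color 2
        return clauses

    # R(3,3) > 5 iff the (n=5, m=3) instance is satisfiable:
    print(len(ramsey_sat_clauses(5, 3)))  # 20 clauses, 2 per triangle

At the scale of the paper's experiments, the $n = 43$, $m = 5$ instance has $\binom{43}{2} = 903$ variables and $2\binom{43}{5} \approx 1.9$ million clauses.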




Read also

MAX NAE-SAT is a natural optimization problem, closely related to its better-known relative MAX SAT. The approximability status of MAX NAE-SAT is almost completely understood if all clauses have the same size $k$, for some $k \ge 2$. We refer to this problem as MAX NAE-$\{k\}$-SAT. For $k=2$, it is essentially the celebrated MAX CUT problem. For $k=3$, it is related to the MAX CUT problem in graphs that can be fractionally covered by triangles. For $k \ge 4$, it is known that an approximation ratio of $1-\frac{1}{2^{k-1}}$, obtained by choosing a random assignment, is optimal, assuming $P \ne NP$. For every $k \ge 2$, an approximation ratio of at least $\frac{7}{8}$ can be obtained for MAX NAE-$\{k\}$-SAT. There was some hope, therefore, that there is also a $\frac{7}{8}$-approximation algorithm for MAX NAE-SAT, where clauses of all sizes are allowed simultaneously. Our main result is that there is no $\frac{7}{8}$-approximation algorithm for MAX NAE-SAT, assuming the unique games conjecture (UGC). In fact, even for almost satisfiable instances of MAX NAE-$\{3,5\}$-SAT (i.e., MAX NAE-SAT where all clauses have size $3$ or $5$), the best approximation ratio that can be achieved, assuming UGC, is at most $\frac{3(\sqrt{21}-4)}{2} \approx 0.8739$. Using calculus of variations, we extend the analysis of O'Donnell and Wu for MAX CUT to MAX NAE-$\{3\}$-SAT. We obtain an optimal algorithm, assuming UGC, for MAX NAE-$\{3\}$-SAT, slightly improving on previous algorithms. The approximation ratio of the new algorithm is $\approx 0.9089$. We complement our theoretical results with some experimental results. We describe an approximation algorithm for almost satisfiable instances of MAX NAE-$\{3,5\}$-SAT with a conjectured approximation ratio of $0.8728$, and an approximation algorithm for almost satisfiable instances of MAX NAE-SAT with a conjectured approximation ratio of $0.8698$.
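The random-assignment bound above has a one-line explanation: a size-$k$ NAE clause is violated only when all $k$ literals take the same value, which happens with probability $2/2^k = 1/2^{k-1}$ under a uniformly random assignment. A quick Monte Carlo check (a sketch; the function name is ours, not the paper's):

    import random

    def random_nae_ratio(k, trials=200_000):
        """Fraction of uniformly random assignments satisfying one
        NAE clause of size k (violated only if all literals agree)."""
        sat = sum(1 for _ in range(trials)
                  if len({random.random() < 0.5 for _ in range(k)}) == 2)
        return sat / trials

    for k in (2, 3, 4, 5):
        print(k, round(random_nae_ratio(k), 4), 1 - 1 / 2 ** (k - 1))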
Over the years complexity theorists have proposed many structural parameters to explain the surprising efficiency of conflict-driven clause-learning (CDCL) SAT solvers on a wide variety of large industrial Boolean instances. While some of these parameters have been studied empirically, until now there has not been a unified comparative study of their explanatory power on a comprehensive benchmark. We correct this state of affairs by conducting a large-scale empirical evaluation of CDCL SAT solver performance on nearly 7000 industrial and crafted formulas against several structural parameters such as backdoors, treewidth, backbones, and community structure. Our study led us to several results. First, we show that while such parameters only weakly correlate with CDCL solving time, certain combinations of them yield much better regression models. Second, we show how some parameters can be used as a lens to better understand the efficiency of different solving heuristics. Finally, we propose a new complexity-theoretic parameter, which we call learning-sensitive with restarts (LSR) backdoors, that extends the notion of learning-sensitive (LS) backdoors to incorporate restarts, and we discuss algorithms to compute them. We mathematically prove that for a certain class of instances, minimal LSR-backdoors are exponentially smaller than minimal LS-backdoors.
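The claim that combinations of parameters yield better regression models can be pictured with a small sketch: regress (log) solving time on several structural features at once and inspect the fit. Everything below, feature columns and numbers alike, is hypothetical toy data, not the study's benchmark:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Toy feature matrix: [backdoor size, treewidth estimate,
    #                      backbone fraction, community modularity]
    X = np.array([[12.0,  85, 0.40, 0.81],
                  [ 3.0,  40, 0.10, 0.93],
                  [25.0, 160, 0.70, 0.55],
                  [ 8.0,  60, 0.30, 0.88],
                  [18.0, 120, 0.55, 0.70]])
    y = np.array([2.1, 0.4, 3.5, 1.2, 2.8])  # log10 of solving time (s)

    model = LinearRegression().fit(X, y)
    print(model.coef_)        # per-parameter weights
    print(model.score(X, y))  # R^2 of the combined model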
In this paper, we consider a variant of Ramsey numbers which we call complementary Ramsey numbers $\bar{R}(m,t,s)$. We first establish their connections to pairs of Ramsey $(s,t)$-graphs. Using the classification of Ramsey $(s,t)$-graphs for small $s, t$, we determine the complementary Ramsey numbers $\bar{R}(m,t,s)$ for $(s,t)=(4,4)$ and $(3,6)$.
Shenghui Su, Shuwang Lv, 2011
It is well known that the inverse function of $y = x$, whose derivative is $y' = 1$, is $x = y$, while the inverse function of the constant $y = c$, whose derivative is $y' = 0$, does not exist, and so on. Hence, on the assumption that the noninvertibility of a univariate increasing function $y = f(x)$ with $x > 0$ is in direct proportion to the growth rate reflected by its derivative, the authors put forward a method of comparing the difficulty of inverting two functions on a continuous or discrete interval, called asymptotic granularity reduction (AGR), which integrates asymptotic analysis with logarithmic granularities and is an extension of and complement to polynomial-time (Turing) reduction (PTR). They prove by AGR that inverting $y = x^x \pmod p$ is computationally harder than inverting $y = g^x \pmod p$, and that inverting $y = g^{x^n} \pmod p$ is computationally equivalent to inverting $y = g^x \pmod p$, which is compatible with the results from PTR. Besides, they apply AGR to comparing the difficulty of inverting $y = x^n \pmod p$ with $y = g^x \pmod p$, $y = g^{g_1^x} \pmod p$ with $y = g^x \pmod p$, and $y = x^n + x + 1 \pmod p$ with $y = x^n \pmod p$, and observe that the results are consistent with existing facts, which further illustrates that AGR is suitable for comparing the difficulty of inversion problems. Last, they prove by AGR that inverting $y = x^n g^x \pmod p$ is computationally equivalent to inverting $y = g^x \pmod p$, a case where PTR cannot be utilized expediently. AGR, together with the assumption, partitions the complexities of problems more finely and finds new evidence for the security of cryptosystems.
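For concreteness, the modular maps being compared can be evaluated directly. The sketch below only computes the functions, with a small-scale prime and base chosen by us for illustration; AGR itself is an asymptotic, pencil-and-paper argument, not an algorithm:

    # Illustrative parameters only; real cryptographic use requires a
    # properly chosen large prime p and generator g.
    p = 2**61 - 1   # a Mersenne prime
    g = 3

    def f_exp(x):      return pow(g, x, p)                     # y = g^x (mod p)
    def f_selfexp(x):  return pow(x, x, p)                     # y = x^x (mod p)
    def f_power(x, n): return pow(x, n, p)                     # y = x^n (mod p)
    def f_mixed(x, n): return pow(x, n, p) * pow(g, x, p) % p  # y = x^n g^x (mod p)

    print(f_exp(12345), f_selfexp(12345), f_power(12345, 7), f_mixed(12345, 7))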
We propose a new approach to SAT solving which solves SAT problems in vector spaces as a cost-minimization problem for a non-negative differentiable cost function $J^{sat}$. In our approach, a solution, i.e., a satisfying assignment, for a SAT problem in $n$ variables is represented by a binary vector $u \in \{0,1\}^n$ that makes $J^{sat}(u)$ zero. We search for such a $u$ in the vector space $R^n$ by cost minimization, i.e., starting from an initial $u_0$ and minimizing $J^{sat}$ to zero while iteratively updating $u$ by Newton's method. We implemented our approach as a matrix-based differential SAT solver, MatSat. Although existing mainstream SAT solvers decide each bit of a solution assignment one by one, be they of conflict-driven clause-learning (CDCL) type or of stochastic local search (SLS) type, MatSat fundamentally differs from them in that it continuously approaches a solution in a vector space. We conducted an experiment to measure the scalability of MatSat with random 3-SAT problems, in which MatSat could find a solution for up to $n = 10^5$ variables. We also compared MatSat with four state-of-the-art SAT solvers, including winners of the 2018 SAT competition and the 2019 SAT Race, in terms of time to find a solution, using a random benchmark set from the 2018 SAT competition and an artificial random 3-SAT instance set. The result shows that MatSat comes in second in both test sets and outperforms all the CDCL-type solvers.
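The cost-minimization idea admits a compact illustration. The sketch below relaxes each clause to a differentiable penalty, the product over its literals of (1 - literal value), which is zero exactly when the clause is satisfied, and minimizes the total by plain gradient descent. MatSat's actual matrix formulation and Newton updates differ, so this conveys only the general flavor:

    import numpy as np

    def clause_term(u, clause):
        """Penalty of one clause: product over literals of (1 - value),
        where value is u[i] for literal +(i+1) and 1-u[i] for -(i+1)."""
        t = 1.0
        for lit in clause:
            v = u[abs(lit) - 1]
            t *= (1.0 - v) if lit > 0 else v
        return t

    def relax_and_descend(clauses, n, steps=3000, lr=0.3, seed=0):
        u = np.random.default_rng(seed).uniform(0.2, 0.8, n)
        for _ in range(steps):
            grad = np.zeros(n)
            for c in clauses:
                for lit in c:
                    i = abs(lit) - 1
                    # d(term)/du_i = +/- product of the remaining factors
                    rest = clause_term(u, [l for l in c if abs(l) - 1 != i])
                    grad[i] += -rest if lit > 0 else rest
            u = np.clip(u - lr * grad, 0.0, 1.0)
        return u > 0.5  # round the relaxed vector to a Boolean assignment

    # (x1 or x2) and (not x1 or x3):
    print(relax_and_descend([[1, 2], [-1, 3]], 3))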

