
MaxSAT Resolution and Subcube Sums

Published by: Gaurav Sood
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We study the MaxRes rule in the context of certifying unsatisfiability. We show that it can be exponentially more powerful than tree-like resolution, and when augmented with weakening (the system MaxResW), p-simulates tree-like resolution. In devising a lower bound technique specific to MaxRes (and not merely inheriting lower bounds from Res), we define a new proof system called the SubCubeSums proof system. This system, which p-simulates MaxResW, can be viewed as a special case of the semialgebraic Sherali-Adams proof system. In expressivity, it is the integral restriction of conical juntas studied in the contexts of communication complexity and extension complexity. We show that it is not simulated by Res. Using a proof technique qualitatively different from the lower bounds that MaxResW inherits from Res, we show that Tseitin contradictions on expander graphs are hard to refute in SubCubeSums. We also establish a lower bound technique via lifting: for formulas requiring large degree in SubCubeSums, their XOR-ification requires large size in SubCubeSums.
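To fix intuition, here is a brief reconstruction of the two central definitions, following standard presentations rather than quoting the paper (notation ours). The MaxRes rule resolves $x \lor a_1 \lor \cdots \lor a_s$ and $\bar{x} \lor b_1 \lor \cdots \lor b_t$, but, unlike Res, it replaces the premises with the resolvent together with the "compensation" (weakened) clauses:

\[
\begin{aligned}
&a_1 \lor \cdots \lor a_s \lor b_1 \lor \cdots \lor b_t, \\
&x \lor a_1 \lor \cdots \lor a_s \lor \bar{b}_1,\quad x \lor a_1 \lor \cdots \lor a_s \lor b_1 \lor \bar{b}_2,\quad \ldots,\quad x \lor a_1 \lor \cdots \lor a_s \lor b_1 \lor \cdots \lor b_{t-1} \lor \bar{b}_t, \\
&\bar{x} \lor b_1 \lor \cdots \lor b_t \lor \bar{a}_1,\quad \ldots,\quad \bar{x} \lor b_1 \lor \cdots \lor b_t \lor a_1 \lor \cdots \lor a_{s-1} \lor \bar{a}_s.
\end{aligned}
\]

A SubCubeSums refutation of an unsatisfiable CNF $F$ can then be viewed, roughly, as nonnegative integers $w_i$ and subcubes $D_i$ such that every assignment $\alpha$ satisfies

\[
\mathrm{viol}_F(\alpha) - 1 \;=\; \sum_i w_i\,[\alpha \in D_i],
\]

where $\mathrm{viol}_F(\alpha)$ counts the clauses of $F$ falsified by $\alpha$.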



Read also

In this paper we study sums of powers of affine functions in (mostly) one variable. Although quite simple, this model is a generalization of two well-studied models: Waring decomposition and sparsest shift. For these three models there are natural extensions to several variables, but this paper is mostly focused on univariate polynomials. We present structural results which compare the expressive power of the three models, and we propose algorithms that find the smallest decomposition of f in the first model (sums of affine powers) for an input polynomial f given in dense representation. We also begin a study of the multivariate case. This work could be extended in several directions. In particular, just as for Sparsest Shift and Waring decomposition, one could consider extensions to supersparse polynomials and attempt a fuller study of the multivariate case. We also point out that the basic univariate problem studied in the present paper is far from completely solved: our algorithms all rely on some assumptions on the exponents in an optimal decomposition, and some algorithms also rely on a distinctness assumption for the shifts. It would be very interesting to weaken these assumptions, or even to remove them entirely. Another related and poorly understood issue is that of the bit size of the constants appearing in an optimal decomposition: is it always polynomially related to the bit size of the input polynomial given in dense representation?
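As a quick orientation (our notation; a hedged summary of the three univariate models compared above): Waring decomposition fixes all exponents to the degree, sparsest shift uses a single shift, and sums of affine powers lets both shifts and exponents vary:

\[
f(x)=\sum_{i=1}^{s}\alpha_i (x-a_i)^{d}\ \ (\text{Waring, } d=\deg f),\qquad
f(x)=\sum_{i=1}^{s}\alpha_i (x-a)^{e_i}\ \ (\text{sparsest shift}),\qquad
f(x)=\sum_{i=1}^{s}\alpha_i (x-a_i)^{e_i}\ \ (\text{sums of affine powers}).
\]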
We study the decomposition of multivariate polynomials as sums of powers of linear forms. As one of our main results we give an algorithm for the following problem: given a homogeneous polynomial of degree 3, decide whether it can be written as a sum of cubes of linearly independent linear forms with complex coefficients. Compared to previous algorithms for the same problem, the two main novel features of this algorithm are: (i) It is an algebraic algorithm, i.e., it performs only arithmetic operations and equality tests on the coefficients of the input polynomial. In particular, it does not make any appeal to polynomial factorization. (ii) For an input polynomial with rational coefficients, the algorithm runs in polynomial time when implemented in the bit model of computation. The algorithm relies on methods from linear and multilinear algebra (symmetric tensor decomposition by simultaneous diagonalization). We also give a version of our algorithm for decomposition over the field of real numbers. In this case, the algorithm performs arithmetic operations and comparisons on the input coefficients. Finally we give several related derandomization results on black box polynomial identity testing, the minimization of the number of variables in a polynomial, the computation of Lie algebras and factorization into products of linear forms.
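The authors' algorithm is algebraic and exact; purely as an illustration of the simultaneous-diagonalization idea it relies on, here is a minimal numerical sketch in the style of Jennrich's algorithm (the function name, the NumPy approach, and the full-rank assumption r = n are our choices, not the paper's):

    import numpy as np

    def jennrich_cubes(T):
        """Illustrative sketch: decompose a symmetric n x n x n tensor
        T = sum_i lam_i * a_i (x) a_i (x) a_i with linearly independent a_i
        (full rank r = n assumed) by simultaneously diagonalizing two random
        contractions. Numerical, not the paper's algebraic method."""
        n = T.shape[0]
        rng = np.random.default_rng(0)
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        Mu = np.einsum('ijk,k->ij', T, u)  # equals A diag(lam_i <a_i,u>) A^T
        Mv = np.einsum('ijk,k->ij', T, v)  # equals A diag(lam_i <a_i,v>) A^T
        # Mu Mv^{-1} = A diag(<a_i,u>/<a_i,v>) A^{-1}: its eigenvectors
        # recover the directions a_i up to scale (eigenvalues are
        # generically distinct for random u, v).
        _, evecs = np.linalg.eig(Mu @ np.linalg.inv(Mv))
        A = np.real_if_close(evecs)
        # Recover the weights lam_i by linear least squares on vec(T).
        G = np.stack([np.einsum('i,j,k->ijk', a, a, a).ravel() for a in A.T],
                     axis=1)
        lam, *_ = np.linalg.lstsq(G, T.ravel(), rcond=None)
        return lam, A

    # Example: a random rank-3 symmetric tensor in 3 variables.
    rng = np.random.default_rng(1)
    A0 = rng.standard_normal((3, 3))
    T = sum(np.einsum('i,j,k->ijk', a, a, a) for a in A0.T)
    lam, A = jennrich_cubes(T)  # columns of A match the a_i up to scale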
Koji Kobayashi (2013)
This article describes the difference in resolution structure and size between HornSAT and CNFSAT. HornSAT can be computed by exploiting causality between clauses, so a proof diagram can be computed via a log-space reduction. CNFSAT, however, must be computed using correlations between clauses, so a proof diagram cannot be computed via a log-space reduction, and the reduction of CNFSAT is not P-complete.
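For context, and as our illustration rather than the article's construction: HornSAT is decidable by the classic marking (unit-propagation) procedure, which follows the kind of causal chains the abstract alludes to. A minimal Python sketch:

    def horn_sat(clauses):
        """Decide satisfiability of a Horn formula (each clause has at most
        one positive literal). Literals are nonzero ints: +v / -v.
        Classic marking algorithm: grow the minimal model to a fixpoint."""
        true_vars = set()
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                body = [-l for l in clause if l < 0]   # negated variables
                head = [l for l in clause if l > 0]    # at most one literal
                if all(v in true_vars for v in body):  # implication fires
                    if not head:
                        return False                   # goal clause violated
                    if head[0] not in true_vars:
                        true_vars.add(head[0])         # force the head true
                        changed = True
        return True

    # x1, (x1 -> x2), and (not x1 or not x2): unsatisfiable.
    print(horn_sat([[1], [-1, 2], [-1, -2]]))  # False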
We show that there are CNF formulas which can be refuted in resolution in both small space and small width, but for which any small-width proof must have space exceeding by far the linear worst-case upper bound. This significantly strengthens the space-width trade-offs in [Ben-Sasson 09], and provides one more example of trade-offs in the supercritical regime above worst case recently identified by [Razborov 16]. We obtain our results by using Razborov's new hardness condensation technique and combining it with the space lower bounds in [Ben-Sasson and Nordström 08].
This paper considers the length of resolution proofs when using Krishnamurthy's classic symmetry rules. We show that inconsistent linear equation systems of bounded width over a fixed finite field $\mathbb{F}_p$, with $p$ a prime, have, in their standard encoding as CNFs, polynomial-length resolution proofs when using the local symmetry rule (SRC-II). As a consequence, it follows that the multipede instances of the graph isomorphism problem, encoded as CNF formulas, have polynomial-length resolution proofs. This contrasts with exponential lower bounds for individualization-refinement algorithms on these graphs. For the Cai-Fürer-Immerman graphs, for which Torán showed exponential lower bounds for resolution proofs (SAT 2013), we also show that already the global symmetry rule (SRC-I) suffices to allow for polynomial-length proofs.
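Roughly, in our paraphrase of Krishnamurthy's rules: the global rule SRC-I infers $\sigma(C)$ from a derived clause $C$ whenever $\sigma$ is a permutation of the literals fixing the whole formula, while the local rule SRC-II only asks that $\sigma$ map the clauses actually used in deriving $C$ back into the formula:

\[
\frac{F \vdash C \qquad \sigma(F) = F}{\sigma(C)}\ \ (\text{SRC-I})
\qquad\qquad
\frac{F' \vdash C,\ \ F' \subseteq F \qquad \sigma(F') \subseteq F}{\sigma(C)}\ \ (\text{SRC-II})
\]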