
Resolution structure in HornSAT and CNFSAT

Posted by Koji Kobayashi
Publication date: 2013
Research field: Informatics Engineering
Paper language: English
Author: Koji Kobayashi





This article describes the difference in resolution structure and size between HornSAT and CNFSAT. HornSAT can be computed by following the causality between clauses, so its proof diagram can be computed with a log-space reduction. CNFSAT, in contrast, must be computed from the correlations between clauses, so its proof diagram cannot be computed with a log-space reduction, and the reduction of CNFSAT is not P-complete.
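As background for what the abstract calls clause causality, the following is a minimal sketch (not from the paper) of the standard linear-time HornSAT procedure, which repeatedly fires clauses whose bodies are already forced true; the representation of clauses as (body, head) pairs, with head None encoding a purely negative "goal" clause, is our own.

def horn_sat(clauses):
    """Decide satisfiability of a Horn formula.

    Each clause is a pair (body, head): body is a set of positive
    variables, head is a single variable or None (None encodes a
    purely negative goal clause, i.e. body -> False).
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_vars:          # every premise is already forced true
                if head is None:
                    return False           # a goal clause fires: unsatisfiable
                if head not in true_vars:
                    true_vars.add(head)    # causality: the body forces the head
                    changed = True
    return True                            # no goal clause ever fires: satisfiable

# Example: x1, x1 -> x2, x2 -> x3, (x1 and x3) -> False  => unsatisfiable
print(horn_sat([(set(), "x1"), ({"x1"}, "x2"), ({"x2"}, "x3"), ({"x1", "x3"}, None)]))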




Read also

Koji Kobayashi (2012)
This article describes the solvability of HornSAT and CNFSAT. An unsatisfiable Horn CNF has a partially ordered set formed by the causation between its clauses. In this partially ordered set, the truth-value assignments that falsify the clauses form a simply connected space. Therefore, if we reduce CNFSAT to HornSAT, we must build such a partially ordered set in HornSAT. But because CNFSAT has correlations between its clauses, the partially ordered set is not of polynomial size. Therefore, we cannot reduce CNFSAT to HornSAT in polynomial size.
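As a small illustration of a causation order (our own example, not taken from the paper): for the unsatisfiable Horn formula $(x_1) \wedge (\neg x_1 \vee x_2) \wedge (\neg x_2 \vee x_3) \wedge (\neg x_3)$, the unit clause forces $x_1$, the second clause then forces $x_2$, the third forces $x_3$, and $x_3$ contradicts the last clause; the refutation is exactly the chain $x_1 \prec x_2 \prec x_3 \prec \bot$ in the induced order on clauses.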
We study the MaxRes rule in the context of certifying unsatisfiability. We show that it can be exponentially more powerful than tree-like resolution, and when augmented with weakening (the system MaxResW), p-simulates tree-like resolution. In devising a lower bound technique specific to MaxRes (and not merely inheriting lower bounds from Res), we define a new proof system called the SubCubeSums proof system. This system, which p-simulates MaxResW, can be viewed as a special case of the semialgebraic Sherali-Adams proof system. In expressivity, it is the integral restriction of conical juntas studied in the contexts of communication complexity and extension complexity. We show that it is not simulated by Res. Using a proof technique qualitatively different from the lower bounds that MaxResW inherits from Res, we show that Tseitin contradictions on expander graphs are hard to refute in SubCubeSums. We also establish a lower bound technique via lifting: for formulas requiring large degree in SubCubeSums, their XOR-ification requires large size in SubCubeSums.
Koji Kobayashi (2012)
This paper describes the relation between circuit complexity and the structure of accepted inputs in Hamming space, using almost-monotone circuits that emulate a deterministic Turing machine (DTM). The circuit families that emulate a DTM are almost entirely monotone, except for some NOT gates attached directly to the input variables (as in negation normal form, NNF). We can therefore analyze the limitations of DTMs by studying this family of NNF circuits. An NNF circuit is symmetric in the input lines of its OR gates, so from an OR gate's output line it cannot identify which of its input lines is 1. Consequently, an NNF circuit family cannot compute a sandwich structure effectively (a sandwich structure consists of two accepted inputs that sandwich rejected inputs in Hamming space). An NNF circuit must use a distinct AND gate to identify each different vector of a sandwich structure; that is, we can measure the complexity of a problem by counting these different vectors. Some decision problems have a characteristic sandwich structure. The different vectors of the negated HornSAT problem have at most constant length, because we can delete the constant part of each negative literal in Horn clauses by using definite clauses; therefore the number of these different vectors is at most polynomial. On the other hand, we can design high-complexity problems from almost perfect nonlinear (APN) functions.
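The abstract states the notion of a sandwich structure only informally; as a purely illustrative reading (our own, not the paper's), the sketch below counts triples (a, m, b) where a and b are accepted, m is rejected, and m lies on a shortest Hamming path between a and b, comparing 3-bit parity with 3-bit OR. Running it simply prints the two counts; larger functions can be compared the same way.

from itertools import product

def hamming(a, b):
    # Hamming distance between two equal-length 0/1 tuples
    return sum(x != y for x, y in zip(a, b))

def sandwiches(accept, n):
    """Count triples (a, m, b): a and b accepted, m rejected, and m on a
    shortest Hamming path between a and b (one reading of "sandwich")."""
    points = list(product((0, 1), repeat=n))
    count = 0
    for a in points:
        for b in points:
            for m in points:
                if (accept(a) and accept(b) and not accept(m)
                        and m != a and m != b
                        and hamming(a, m) + hamming(m, b) == hamming(a, b)):
                    count += 1
    return count

parity = lambda v: sum(v) % 2 == 1
or_fn = lambda v: any(v)
print(sandwiches(parity, 3), sandwiches(or_fn, 3))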
We prove three results on the dimension structure of complexity classes. 1. The Point-to-Set Principle, which has recently been used to prove several new theorems in fractal geometry, has resource-bounded instances. These instances characterize the resource-bounded dimension of a set $X$ of languages in terms of the relativized resource-bounded dimensions of the individual elements of $X$, provided that the former resource bound is large enough to parameterize the latter. Thus for example, the dimension of a class $X$ of languages in EXP is characterized in terms of the relativized p-dimensions of the individual elements of $X$. 2. Every language that is $\leq^P_m$-reducible to a p-selective set has p-dimension 0, and this fact holds relative to arbitrary oracles. Combined with a resource-bounded instance of the Point-to-Set Principle, this implies that if NP has positive dimension in EXP, then no quasipolynomial time selective language is $\leq^P_m$-hard for NP. 3. If the set of all disjoint pairs of NP languages has dimension 1 in the set of all disjoint pairs of EXP languages, then NP has positive dimension in EXP.
We connect the study of pseudodeterministic algorithms to two major open problems about the structural complexity of $\mathsf{BPTIME}$: proving hierarchy theorems and showing the existence of complete problems. Our main contributions can be summarised as follows. 1. We build on techniques developed to prove hierarchy theorems for probabilistic time with advice (Fortnow and Santhanam, FOCS 2004) to construct the first unconditional pseudorandom generator of polynomial stretch computable in pseudodeterministic polynomial time (with one bit of advice) that is secure infinitely often against polynomial-time computations. As an application of this construction, we obtain new results about the complexity of generating and representing prime numbers. 2. Oliveira and Santhanam (STOC 2017) established unconditionally that there is a pseudodeterministic algorithm for the Circuit Acceptance Probability Problem ($\mathsf{CAPP}$) that runs in sub-exponential time and is correct with high probability over any samplable distribution on circuits on infinitely many input lengths. We show that improving this running time or obtaining a result that holds for every large input length would imply new time hierarchy theorems for probabilistic time. In addition, we prove that a worst-case polynomial-time pseudodeterministic algorithm for $\mathsf{CAPP}$ would imply that $\mathsf{BPP}$ has complete problems. 3. We establish an equivalence between pseudodeterministic construction of strings of large $\mathsf{rKt}$ complexity (Oliveira, ICALP 2019) and the existence of strong hierarchy theorems for probabilistic time. More generally, these results suggest new approaches for designing pseudodeterministic algorithms for search problems and for unveiling the structure of probabilistic time.