
A Real World Mechanism for Testing Satisfiability in Polynomial Time

Submitted by Bernd Schuh
Publication date: 2010
Research field: Informatics Engineering
Paper language: English
Author: Bernd R. Schuh





Whether the satisfiability of any formula F of propositional calculus can be determined in polynomial time is an open question. I propose a simple procedure based on some real-world mechanisms to tackle this problem. The main result is the blueprint for a machine that can test any formula in conjunctive normal form (CNF) for satisfiability in linear time. The device uses light and some electrochemical properties to function. It adapts itself to the scope of the problem without growing exponentially in mass with the size of the formula; instead, it requires infinite precision in its components.
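As a reference point for what "testing a CNF formula for satisfiability" means as a computation, here is a minimal sketch of an exhaustive digital check. This is not the paper's mechanism: the abstract describes an analog optical/electrochemical device claimed to run in linear time, whereas this brute-force check takes exponential time in the number of variables. The function name and the literal encoding (positive/negative integers) are assumptions chosen only for illustration.

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force CNF satisfiability check (illustration only).

    clauses: list of clauses, each a list of ints; literal k means
    variable k is true, -k means variable k is false (variables 1..n_vars).
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A CNF formula is satisfied iff every clause has a true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(is_satisfiable([[1, -2], [2, 3], [-1, -3]], 3))  # True
```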


Read also

Yuxin Deng, 2011
We introduce a notion of real-valued reward testing for probabilistic processes by extending the traditional nonnegative-reward testing with negative rewards. In this richer testing framework, the may and must preorders turn out to be inverses. We show that for convergent processes with finitely many states and transitions, but not in the presence of divergence, the real-reward must-testing preorder coincides with the nonnegative-reward must-testing preorder. To prove this coincidence we characterise the usual resolution-based testing in terms of the weak transitions of processes, without having to involve policies, adversaries, schedulers, resolutions, or similar structures that are external to the process under investigation. This requires establishing the continuity of our function for calculating testing outcomes.
Peter Hertling, 2014
Lanford has shown that Feigenbaum's functional equation has an analytic solution. We show that this solution is a polynomial-time computable function. This implies in particular that the so-called first Feigenbaum constant is a polynomial-time computable real number.
We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto \max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w} \in \mathbb{S}^{n-1}$. Our algorithm works in the challenging Reliable Agnostic learning model of Kalai, Kanade, and Mansour (2009), where the learner is given access to a distribution $\mathcal{D}$ on labeled examples but the labeling may be arbitrary. We construct a hypothesis that simultaneously minimizes the false-positive rate and the loss on inputs given positive labels by $\mathcal{D}$, for any convex, bounded, and Lipschitz loss function. The algorithm runs in polynomial time (in $n$) with respect to any distribution on $\mathbb{S}^{n-1}$ (the unit sphere in $n$ dimensions) and for any error parameter $\epsilon = \Omega(1/\log n)$ (this yields a PTAS for a question raised by F. Bach on the complexity of maximizing ReLUs). These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong assumptions are required on the marginal distribution. We can compose our results to obtain the first set of efficient algorithms for learning constant-depth networks of ReLUs. Our techniques combine kernel methods and polynomial approximations with a dual-loss approach to convex programming. As a byproduct we obtain a number of applications, including the first set of efficient algorithms for convex piecewise-linear fitting and the first efficient algorithms for noisy polynomial reconstruction of low-weight polynomials on the unit sphere.
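As a small aid for reading the notation, the sketch below merely evaluates the function class in question, $\mathbf{x} \mapsto \max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w}$ constrained to the unit sphere; it is not the authors' learning algorithm, and the NumPy helper shown is an assumption used only for illustration.

```python
import numpy as np

def relu_hypothesis(w, X):
    """Evaluate the ReLU x -> max(0, w . x) on the rows of X,
    with w normalized onto the unit sphere S^{n-1}."""
    w = w / np.linalg.norm(w)
    return np.maximum(0.0, X @ w)

rng = np.random.default_rng(0)
w = rng.normal(size=5)          # a direction in R^5
X = rng.normal(size=(3, 5))     # three example inputs
print(relu_hypothesis(w, X))    # nonnegative outputs, zero where w . x <= 0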
In this paper we present a portfolio LTL-satisfiability solver called Polsat. To achieve fast satisfiability checking for LTL formulas, the tool integrates four representative LTL solvers: pltl, TRP++, NuSMV, and Aalta. The idea of Polsat is to run the component solvers in parallel to get the best overall performance; once one of the solvers terminates, it stops all other solvers. Remarkably, the Polsat solver utilizes the power of modern multi-core compute clusters, and the empirical experiments show that Polsat takes advantage of it. Further, Polsat is also a testing platform for all LTL solvers.
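The portfolio strategy described here (launch all component solvers on the same formula in parallel, return as soon as the first one finishes, then kill the rest) can be sketched as follows. The solver command lines are placeholders, not the actual Polsat integration of pltl, TRP++, NuSMV, and Aalta.

```python
import subprocess, time

def portfolio_solve(commands, timeout=600.0):
    """Run several solver command lines in parallel; return the output of
    whichever finishes first and kill the others (illustrative sketch)."""
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
             for cmd in commands]
    deadline = time.time() + timeout
    try:
        while time.time() < deadline:
            for p in procs:
                if p.poll() is not None:          # first solver finished
                    return p.args, p.stdout.read()
            time.sleep(0.05)
        return None, None                          # global timeout reached
    finally:
        for p in procs:
            if p.poll() is None:
                p.kill()                           # stop the remaining solvers

# Example with placeholder solver invocations:
# winner, output = portfolio_solve([["pltl", "formula.ltl"],
#                                   ["aalta", "formula.ltl"]])
```

Polling with a short sleep keeps the sketch portable; a production portfolio would more likely rely on OS-level wait primitives or a job scheduler on the multi-core cluster the abstract mentions.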
Patrick Ah-Fat, 2016
Partial methods play an important role in formal methods and beyond. Recently such methods were developed for parity games, where polynomial-time partial solvers decide the winners of a subset of nodes. We investigate here how effective polynomial-time partial solvers can be by studying interactions of partial solvers based on generic composition patterns that preserve polynomial-time computability. We show that the use of such composition patterns discovers new partial solvers - including those that merge node sets that have the same but unknown winner - by studying games that composed partial solvers can neither solve nor simplify. We experimentally validate that this data-driven approach to refinement leads to polynomial-time partial solvers that can solve all standard benchmarks of structured games. For one of these polynomial-time partial solvers, not even a single random game, out of a few billion random games of varying configuration, was found that it won't solve completely.