
Boundary Evolution Algorithm for SAT-NP

Published by Zhaoyang Ai
Publication date: 2018
Research field: Informatics engineering
Paper language: English





A boundary evolution algorithm (BEA) is proposed by simultaneously taking into account bottom-level and high-level crossover and mutation, i.e., the boundary of the hierarchical genetic algorithm. Operators and optimal individuals based on optional annealing are designed. Based on the numerou…
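The abstract above is truncated and contains no implementation details; the following is a minimal sketch, assuming a toy two-level genetic algorithm for MAX-SAT in which mutation acts at the bottom (bit) level, crossover acts at the high (block) level, and an annealing-style acceptance rule filters offspring. All names, operators, and parameters are illustrative and are not taken from the paper.

import math
import random

# Illustrative 3-CNF instance: clauses are tuples of signed, 1-based variable indices.
CLAUSES = [(1, -2, 3), (-1, 2), (2, 3), (-3, -1)]
NUM_VARS = 3

def fitness(assign):
    """MAX-SAT objective: number of satisfied clauses."""
    return sum(
        any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in CLAUSES
    )

def bit_mutation(assign, rate=0.1):
    """Bottom-level operator: flip individual variable assignments."""
    return [(not v) if random.random() < rate else v for v in assign]

def block_crossover(parent_a, parent_b, block=2):
    """High-level operator: exchange whole blocks of variables between parents."""
    child = parent_a[:]
    for start in range(0, NUM_VARS, block):
        if random.random() < 0.5:
            child[start:start + block] = parent_b[start:start + block]
    return child

def evolve(pop_size=20, generations=100, temp=2.0, cooling=0.97):
    pop = [[random.random() < 0.5 for _ in range(NUM_VARS)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            child = bit_mutation(block_crossover(p1, p2))
            # Annealing-style acceptance: worse children survive with decaying probability.
            parent = max(p1, p2, key=fitness)
            delta = fitness(child) - fitness(parent)
            if delta >= 0 or random.random() < math.exp(delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop, temp = new_pop, temp * cooling
        best = max(pop + [best], key=fitness)
    return best, fitness(best)

best_assign, score = evolve()
print(f"best assignment {best_assign} satisfies {score}/{len(CLAUSES)} clauses")

The temperature decays each generation, so worse offspring are accepted early (exploration) and only rarely later (exploitation); this is one common way to combine genetic operators with an annealing criterion, not necessarily the paper's.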




Read also

We present a (full) derandomization of the HSSW algorithm for 3-SAT, proposed by Hofmeister, Schoning, Schuler, and Watanabe in [STACS02]. Thereby, we obtain an O(1.3303^n)-time deterministic algorithm for 3-SAT, which is currently the fastest.
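The derandomized algorithm itself is not spelled out in the abstract; as background, the sketch below illustrates the randomized local-search idea of Schoening-type 3-SAT algorithms on which HSSW builds (random restarts plus random flips in unsatisfied clauses). The clause data and parameters are illustrative, and this is the randomized variant, not the deterministic one from the paper.

import random

def schoening_walk(clauses, num_vars, tries=200, flips=None):
    """Randomized local search in the spirit of Schoening's 3-SAT algorithm:
    restart from a random assignment, then repeatedly pick an unsatisfied
    clause and flip one of its literals chosen uniformly at random."""
    flips = flips if flips is not None else 3 * num_vars
    for _ in range(tries):
        assign = [random.random() < 0.5 for _ in range(num_vars)]
        for _ in range(flips):
            unsat = [c for c in clauses
                     if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
            if not unsat:
                return assign                # satisfying assignment found
            lit = random.choice(random.choice(unsat))
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None                              # give up; instance may be unsatisfiable

# Tiny satisfiable 3-CNF formula for illustration.
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
print(schoening_walk(clauses, num_vars=3))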
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems. The increasing popularity of these constraint problems in electronic design automation encourages studies on different SAT problems and their properties for further computational efficiency. There has been both theoretical and practical success with modern conflict-driven clause learning SAT solvers, which allow very large industrial instances to be solved in a relatively short amount of time. Recently, machine learning approaches have provided a new dimension to solving this challenging problem. Neural symbolic models could serve as generic solvers that can be specialized for specific domains based on data, without any changes to the structure of the model. In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem, the optimization version of SAT where the goal is to satisfy the maximum number of clauses. Our model has a scale-free structure and can process instances of varying size. We use meta-paths and a self-attention mechanism to capture interactions among homogeneous nodes, and cross-attention mechanisms on the bipartite graph to capture interactions among heterogeneous nodes. We further apply an iterative algorithm to our model to satisfy additional clauses, enabling solutions that approach those of an exact SAT problem. The attention mechanisms leverage parallelism for speedup. Our evaluation indicates improved speedup compared to heuristic approaches and an improved completion rate compared to machine learning approaches.
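The paper's Transformer model cannot be reproduced from the abstract alone; as a small illustration of the bipartite clause-variable view it describes (the structure its cross-attention operates on) and of the MaxSAT objective, here is a hedged sketch with illustrative function names and data.

from collections import defaultdict

def bipartite_graph(clauses):
    """Bipartite clause-variable incidence structure of a CNF formula:
    which variables (with polarity) each clause touches, and vice versa."""
    clause_to_vars = defaultdict(list)
    var_to_clauses = defaultdict(list)
    for c_idx, clause in enumerate(clauses):
        for lit in clause:
            var, polarity = abs(lit), lit > 0
            clause_to_vars[c_idx].append((var, polarity))
            var_to_clauses[var].append((c_idx, polarity))
    return clause_to_vars, var_to_clauses

def satisfied_count(clauses, assignment):
    """MaxSAT objective: number of clauses satisfied by a {var: bool} assignment."""
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

clauses = [(1, -2), (2, 3), (-1, -3)]
clause_adj, var_adj = bipartite_graph(clauses)
print(clause_adj[0], var_adj[2])                              # neighbourhoods in the bipartite graph
print(satisfied_count(clauses, {1: True, 2: False, 3: True}))  # 2 of 3 clauses satisfied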
Ke Tang, Shengcai Liu, Peng Yang (2020)
Generalization, i.e., the ability to solve problem instances that are not available during the system design and development phase, is a critical goal for intelligent systems. A typical way to achieve good generalization is to learn a model from vast data. In the context of heuristic search, such a paradigm can be implemented by configuring the parameters of a parallel algorithm portfolio (PAP) based on a set of training problem instances, which is often referred to as PAP construction. However, compared to traditional machine learning, PAP construction often suffers from a lack of training instances, and the obtained PAPs may fail to generalize well. This paper proposes a novel competitive co-evolution scheme, named Co-Evolution of Parameterized Search (CEPS), as a remedy to this challenge. By co-evolving a configuration population and an instance population, CEPS is capable of obtaining generalizable PAPs with few training instances. The advantage of CEPS in improving generalization is shown analytically in this paper. Two concrete algorithms, namely CEPS-TSP and CEPS-VRPSPDTW, are presented for the Traveling Salesman Problem (TSP) and the Vehicle Routing Problem with Simultaneous Pickup-Delivery and Time Windows (VRPSPDTW), respectively. Experimental results show that CEPS leads to better generalization and even manages to find new best-known solutions for some instances.
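CEPS itself is considerably more involved than the abstract can convey; the sketch below only illustrates the competitive co-evolution pattern it describes, i.e., a configuration population and an instance population that are alternately selected against each other. The toy encoding and scoring are invented for illustration and are not from the paper.

import random

# Toy setting: a "configuration" is a single numeric parameter, an "instance"
# is a target value, and a configuration performs well on an instance when it
# is close to that target. Purely illustrative; not the CEPS objective.
def performance(config, instance):
    return -abs(config - instance)

def co_evolve(generations=50, pop=8):
    configs = [random.uniform(0, 10) for _ in range(pop)]
    instances = [random.uniform(0, 10) for _ in range(pop)]
    for _ in range(generations):
        # A configuration is scored by its worst case over the current instances
        # (a crude stand-in for portfolio performance on hard instances).
        def config_score(c):
            return min(performance(c, i) for i in instances)
        # An instance is scored by how badly the best current configuration handles it,
        # so the instance population drifts toward cases the portfolio solves poorly.
        def instance_score(i):
            return -max(performance(c, i) for c in configs)
        configs = sorted(configs, key=config_score, reverse=True)[:pop // 2]
        instances = sorted(instances, key=instance_score, reverse=True)[:pop // 2]
        # Refill each population with mutated copies of the survivors.
        configs += [c + random.gauss(0, 0.5) for c in configs]
        instances += [i + random.gauss(0, 0.5) for i in instances]
    return sorted(configs)

print(co_evolve())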
Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained against each other. The outputs from a generator are mixed with real-world inputs to the discriminator, and both networks are trained until an equilibrium is reached, where the discriminator cannot distinguish generated inputs from real ones. Since their introduction, GANs have allowed the generation of impressive imitations of real-life films, images and texts, whose fakeness is barely noticeable to humans. Despite their impressive performance, training GANs remains to this day more of an art than a reliable procedure, in large part due to the instability of the training process. Generators are susceptible to mode dropping and convergence to random patterns, which have to be mitigated by computationally expensive multiple restarts. Curiously, GANs bear an uncanny similarity to the co-evolution of a pathogen and its host's immune system in biology. In a biological context, the majority of potential pathogens indeed never make it and are kept at bay by the host's immune system. Yet some are efficient enough to present a risk of a serious condition and recurrent infections. Here, we explore that similarity to propose a more robust algorithm for GAN training. We empirically show increased stability and a better ability to generate high-quality images while using less computational power.
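As a reminder of the adversarial training loop the abstract refers to (not the authors' co-evolutionary variant), here is a minimal toy GAN in PyTorch that fits a one-dimensional Gaussian; the architecture sizes and learning rates are illustrative.

import torch
import torch.nn as nn

# Toy 1-D GAN: the generator learns to imitate samples from N(3, 1).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0          # real samples
    fake = G(torch.randn(64, 8))             # generated samples

    # Discriminator step: distinguish real from generated samples.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labelling fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(f"mean of generated samples: {G(torch.randn(1000, 8)).mean().item():.2f}")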
In complexity theory, there is a famous unsolved problem: whether NP equals P. In this paper, we discuss this question for the SAT (satisfiability) problem, and it is claimed that SAT can be solved in polynomial time by means of a quantum algorithm.
