
Chook -- A comprehensive suite for generating binary optimization problems with planted solutions

Posted by Helmut Katzgraber
Publication date: 2020
Research field: Physics
Paper language: English





We present Chook, an open-source Python-based tool to generate discrete optimization problems of tunable complexity with a priori known solutions. Chook provides a cross-platform unified environment for solution planting using a number of techniques, such as tile planting, Wishart planting, equation planting, and deceptive cluster loop planting. Chook also incorporates planted solutions for higher-order (beyond quadratic) binary optimization problems. The support for various planting schemes and the tunable hardness allows the user to generate problems with a wide range of complexity on different graph topologies ranging from hypercubic lattices to fully-connected graphs.
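To make the idea of a planted solution concrete, here is a minimal sketch of equation planting in spin form (an illustration of the concept only, not Chook's API or command-line interface; the sizes and names below are assumptions). Each k-spin parity constraint has its coupling sign fixed by a hidden configuration, so that configuration attains the minimum of every term and is therefore a ground state of the total cost function.

    # Minimal sketch of equation planting in spin form (not Chook's API).
    import numpy as np

    rng = np.random.default_rng(42)
    n_spins, n_clauses, k = 16, 24, 3             # illustrative sizes
    planted = rng.choice([-1, 1], size=n_spins)   # hidden ground state

    terms = []                                    # (coupling, spin indices)
    for _ in range(n_clauses):
        idx = rng.choice(n_spins, size=k, replace=False)
        parity = int(np.prod(planted[idx]))
        # The term -parity * s_i s_j s_k equals -1 (its minimum) at the planted state.
        terms.append((-parity, idx))

    def energy(s):
        """Evaluate the k-local Ising cost for a spin vector s in {-1, +1}^n."""
        return sum(c * np.prod(s[idx]) for c, idx in terms)

    assert energy(planted) == -n_clauses          # the planted state saturates every term

The other schemes mentioned above (tile planting, Wishart planting, deceptive cluster loops) rest on the same principle: the cost function is composed of terms that all share the planted configuration as a common minimizer.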


Read also

The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.
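The construction sketched below illustrates the frustrated-loop idea behind such planted benchmarks (a simplified reading rather than the paper's generator: it draws short loops on a complete graph instead of loops on the hardware graph, and all names and sizes are assumptions). Every loop is ferromagnetic in the gauge of a hidden configuration except for one bond, so that configuration violates exactly one bond per loop, which is the best any state can do; the clause density (loops per spin) serves as the hardness knob.

    # Simplified sketch of frustrated-loop planting with a tunable clause density.
    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha = 12, 1.0                      # spins and clause density (illustrative)
    planted = rng.choice([-1, 1], size=n)

    couplings = {}                          # {(i, j): summed coupling J_ij}
    n_loops = int(alpha * n)
    for _ in range(n_loops):
        loop = list(rng.choice(n, size=4, replace=False))  # short loops for brevity
        bonds = list(zip(loop, loop[1:] + loop[:1]))
        frustrated = rng.integers(len(bonds))              # one flipped bond per loop
        for b, (i, j) in enumerate(bonds):
            sign = -1 if b == frustrated else 1
            couplings[(i, j)] = couplings.get((i, j), 0) + sign * planted[i] * planted[j]

    def energy(s):
        return -sum(J * s[i] * s[j] for (i, j), J in couplings.items())

    # Each loop contributes its own ground-state energy (-2) at the planted state,
    # so the planted configuration is a ground state of the full Hamiltonian.
    assert energy(planted) == -2 * n_loops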
In order to treat all-to-all connected quadratic binary optimization problems (QUBO) with hardware quantum annealers, an embedding of the original problem is required due to the sparsity of the hardware's topology. Embedding fully-connected graphs -- typically found in industrial applications -- incurs a quadratic space overhead and thus a significant overhead in the time to solution. Here we investigate this embedding penalty of established planar embedding schemes such as minor embedding on a square lattice, minor embedding on a Chimera graph, and the Lechner-Hauke-Zoller scheme using simulated quantum annealing on classical hardware. Large-scale quantum Monte Carlo simulations suggest a polynomial time-to-solution overhead. Our results demonstrate that standard analog quantum annealing hardware is at a disadvantage in comparison to classical digital annealers, as well as gate-model quantum annealers, and could also serve as a benchmark for improvements of the standard quantum annealing protocol.
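As a rough illustration of the space overhead in question (back-of-the-envelope estimates only; the function names and the Chimera chain-length constant are assumptions, not the simulation code used in the study), the physical-qubit count for an all-to-all problem on n logical variables grows quadratically under both kinds of scheme:

    # Hedged qubit-count estimates for embedding an all-to-all QUBO on n variables.

    def lhz_qubits(n: int) -> int:
        """Lechner-Hauke-Zoller parity encoding: one physical qubit per logical pair."""
        return n * (n - 1) // 2

    def chimera_minor_embedding_qubits(n: int) -> int:
        """Order-of-magnitude estimate for minor-embedding K_n with qubit chains."""
        return n * n // 4 + n                # chains of length ~n/4 (rough assumption)

    for n in (32, 64, 128):
        print(f"n={n:4d}  LHZ={lhz_qubits(n):6d}  Chimera~{chimera_minor_embedding_qubits(n):6d}")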
We investigate the computational hardness of spin-glass instances on a square lattice, generated via a recently introduced tunable and scalable approach for planting solutions. The method relies on partitioning the problem graph into edge-disjoint subgraphs, and planting frustrated, elementary subproblems that share a common local ground state, which guarantees that the ground state of the entire problem is known a priori. Using population annealing Monte Carlo, we compare the typical hardness of problem classes over a large region of the multi-dimensional tuning parameter space. Our results show that the problems have a wide range of tunable hardness. Moreover, we observe multiple transitions in the hardness phase space, which we further corroborate using simulated annealing and simulated quantum annealing. By investigating thermodynamic properties of these planted systems, we demonstrate that the harder samples undergo magnetic ordering transitions which are also ultimately responsible for the observed hardness transitions on changing the sample composition.
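The decomposition described above can be sketched for a periodic square lattice as follows (a minimal illustration with a single frustrated cell type; the actual generator mixes several cell classes, which is where the tunable hardness comes from, and all names here are assumptions). A checkerboard subset of plaquettes is edge-disjoint and covers every bond, and each cell receives couplings whose local ground state includes the planted configuration.

    # Minimal sketch of planting on an L x L periodic square lattice.
    import numpy as np

    rng = np.random.default_rng(1)
    L = 6                                         # even linear size (illustrative)
    planted = rng.choice([-1, 1], size=(L, L))

    couplings = {}                                # {((x1, y1), (x2, y2)): J}
    for x in range(L):
        for y in range(L):
            if (x + y) % 2:
                continue                          # checkerboard: edge-disjoint cells
            cell = [(x, y), ((x + 1) % L, y),
                    ((x + 1) % L, (y + 1) % L), (x, (y + 1) % L)]
            frustrated = rng.integers(4)          # one antiferromagnetic bond per cell
            for b in range(4):
                i, j = cell[b], cell[(b + 1) % 4]
                sign = -1 if b == frustrated else 1
                couplings[(i, j)] = sign * planted[i] * planted[j]

    energy = -sum(J * planted[i] * planted[j] for (i, j), J in couplings.items())
    # A frustrated 4-cycle has ground-state energy -2 and the planted state violates
    # exactly one bond per cell, so the global ground-state energy is known a priori.
    assert energy == -2 * (L * L // 2)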
Rule-based models are often used for data analysis as they combine interpretability with predictive power. We present RuleKit, a versatile tool for rule learning. Based on a sequential covering induction algorithm, it is suitable for classification, regression, and survival problems. Support for user-guided induction facilitates the verification of hypotheses concerning data dependencies which are expected or of interest. The powerful and flexible experimental environment allows straightforward investigation of different induction schemes. The analysis can be performed in batch mode, through a RapidMiner plug-in, or via an R package. A documented Java API is also provided for convenience. The software is publicly available on GitHub under the GNU AGPL-3.0 license.
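For readers unfamiliar with the induction scheme named above, the following is a generic sketch of sequential covering on binary features (illustrative only, not RuleKit's API; all function names and thresholds are assumptions): grow one conjunctive rule at a time until it is precise enough, remove the examples it covers, and repeat.

    # Generic sketch of sequential covering rule induction (not RuleKit's API).
    def covers(rule, x):
        return all(x[f] == v for f, v in rule)

    def precision(rule, X, y):
        covered = [yi for xi, yi in zip(X, y) if covers(rule, xi)]
        return sum(covered) / len(covered) if covered else 0.0

    def grow_rule(X, y, target=0.95):
        """Greedily add (feature, value) conditions until the rule is precise enough."""
        rule = []
        candidates = [(f, v) for f in range(len(X[0])) for v in (0, 1)]
        while precision(rule, X, y) < target:
            best = max((c for c in candidates if c not in rule),
                       key=lambda c: precision(rule + [c], X, y), default=None)
            if best is None or precision(rule + [best], X, y) <= precision(rule, X, y):
                break
            rule.append(best)
        return rule

    def sequential_covering(X, y, max_rules=10):
        """Induce a rule list for class 1, removing covered examples after each rule."""
        rules = []
        while sum(y) and len(rules) < max_rules:
            rule = grow_rule(X, y)
            if not rule:
                break
            rules.append(rule)
            remaining = [(xi, yi) for xi, yi in zip(X, y) if not covers(rule, xi)]
            X = [xi for xi, _ in remaining]
            y = [yi for _, yi in remaining]
        return rules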
Recently, there has been considerable interest in solving optimization problems by mapping these onto a binary representation, sparked mostly by the use of quantum annealing machines. Such a binary representation is reminiscent of a discrete physical two-state system, such as the Ising model. As such, physics-inspired techniques -- commonly used in fundamental physics studies -- are ideally suited to solve optimization problems in a binary format. While binary representations can often be found for paradigmatic optimization problems, these typically result in k-local higher-order unconstrained binary optimization cost functions. In this work, we discuss the effects of locality reduction needed for the majority of the currently available quantum and quantum-inspired solvers that can only accommodate 2-local (quadratic) cost functions. General locality reduction approaches require the introduction of ancillary variables which cause an overhead over the native problem. Using a parallel tempering Monte Carlo solver on Microsoft Azure Quantum, as well as k-local binary problems with planted solutions, we show that, after reduction to a corresponding 2-local representation, the problems become considerably harder to solve. We further quantify the increase in computational hardness introduced by the reduction algorithm by measuring the variation in the number of variables, the statistics of the coefficient values, and the population annealing entropic family size. Our results demonstrate the importance of avoiding locality reduction when solving optimization problems.
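The overhead of locality reduction can be made concrete with the textbook Rosenberg substitution (a generic quadratization step, not necessarily the specific reduction algorithm used in this work; all names below are assumptions). A cubic 0/1 term is replaced by a quadratic one plus a penalty that forces an ancillary variable to equal the product of two original variables, adding one variable and one penalty weight per reduced pair, which is exactly the kind of growth in problem size and coefficient spread quantified above.

    # Hedged sketch of one locality-reduction step (Rosenberg's substitution).
    from itertools import product

    def rosenberg_penalty(x1, x2, y, M=2.0):
        """Zero iff y == x1 * x2, and at least M otherwise (all variables in {0, 1})."""
        return M * (x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y)

    def cubic(x1, x2, x3):
        return x1 * x2 * x3                            # original 3-local term

    def quadratic(x1, x2, x3, y):
        return y * x3 + rosenberg_penalty(x1, x2, y)   # 2-local term with ancilla y

    # Minimizing over the ancilla reproduces the cubic term for every assignment.
    for x1, x2, x3 in product([0, 1], repeat=3):
        assert cubic(x1, x2, x3) == min(quadratic(x1, x2, x3, y) for y in (0, 1))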