
The asymptotics of the clustering transition for random constraint satisfaction problems

Posted by Louise Budzynski
Publication date: 2019
Research language: English





Random Constraint Satisfaction Problems exhibit several phase transitions when their density of constraints is varied. One of these threshold phenomena, known as the clustering or dynamic transition, corresponds to a transition for an information-theoretic problem called tree reconstruction. In this article we study this threshold for two CSPs, namely the bicoloring of $k$-uniform hypergraphs with a density $\alpha$ of constraints, and the $q$-coloring of random graphs with average degree $c$. We show that in the large $k,q$ limit the clustering transition occurs for $\alpha = \frac{2^{k-1}}{k} (\ln k + \ln \ln k + \gamma_{\rm d} + o(1))$, $c = q (\ln q + \ln \ln q + \gamma_{\rm d} + o(1))$, where $\gamma_{\rm d}$ is the same constant for both models. We characterize $\gamma_{\rm d}$ via a functional equation, solve the latter numerically to estimate $\gamma_{\rm d} \approx 0.871$, and obtain an analytic lower bound $\gamma_{\rm d} \ge 1 + \ln (2 (\sqrt{2}-1)) \approx 0.812$. Our analysis unveils a subtle interplay of the clustering transition with the rigidity (naive reconstruction) threshold that occurs on the same asymptotic scale at $\gamma_{\rm r}=1$.
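As a quick numerical illustration of the asymptotic formulas quoted above, the Python sketch below evaluates the leading-order clustering thresholds for a few values of $k$ and $q$ using the reported estimate $\gamma_{\rm d} \approx 0.871$, and recomputes the analytic lower bound $1 + \ln(2(\sqrt{2}-1))$. The $o(1)$ corrections are dropped, so these are only leading-order large-$k$, large-$q$ estimates, not the exact finite-size thresholds.

```python
# Leading-order evaluation of the asymptotic thresholds quoted in the abstract.
# The o(1) corrections are dropped, so these are rough large-k, large-q estimates.
from math import log, sqrt

GAMMA_D = 0.871  # numerical estimate of gamma_d reported in the paper

def alpha_d(k: int, gamma_d: float = GAMMA_D) -> float:
    """Clustering threshold (constraint density) for bicoloring of k-uniform hypergraphs."""
    return 2 ** (k - 1) / k * (log(k) + log(log(k)) + gamma_d)

def c_d(q: int, gamma_d: float = GAMMA_D) -> float:
    """Clustering threshold (average degree) for q-coloring of random graphs."""
    return q * (log(q) + log(log(q)) + gamma_d)

if __name__ == "__main__":
    print(f"analytic lower bound on gamma_d: {1 + log(2 * (sqrt(2) - 1)):.3f}")  # ~0.812
    for k in (10, 20, 40):
        print(f"k = {k:2d}   alpha_d ~ {alpha_d(k):8.1f}")
    for q in (10, 20, 40):
        print(f"q = {q:2d}   c_d     ~ {c_d(q):8.1f}")
```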




Read also

The typical complexity of Constraint Satisfaction Problems (CSPs) can be investigated by means of random ensembles of instances. The latter exhibit many threshold phenomena besides their satisfiability phase transition, in particular a clustering or dynamic phase transition (related to the tree reconstruction problem) at which their typical solutions shatter into disconnected components. In this paper we study the evolution of this phenomenon under a bias that breaks the uniformity among solutions of one CSP instance, concentrating on the bicoloring of k-uniform random hypergraphs. We show that for small k the clustering transition can be delayed in this way to higher densities of constraints, and that this strategy has a positive impact on the performance of Simulated Annealing algorithms. We characterize the modest gain that can be expected in the large k limit from the simple implementation of the biasing idea studied here. This paper also contains a contribution of a more methodological nature, consisting of a review and extension of the methods used to determine numerically the discontinuous dynamic transition threshold.
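The abstract above compares biased and uniform measures through the behaviour of Simulated Annealing on the bicoloring of k-uniform random hypergraphs. As a point of reference, here is a minimal, generic Simulated Annealing sketch for the unbiased problem, with energy equal to the number of monochromatic hyperedges; the instance sizes, cooling schedule and all parameter values are illustrative assumptions, and the biased measure studied in the paper is not implemented.

```python
# Generic Simulated Annealing for bicoloring a k-uniform random hypergraph.
# Energy = number of monochromatic (violated) hyperedges; a proper bicoloring
# has energy zero.  All parameters below are illustrative, not from the paper.
import math
import random

def random_hypergraph(n, m, k):
    """m hyperedges, each a uniformly random k-subset of the n variables."""
    return [random.sample(range(n), k) for _ in range(m)]

def violated(sigma, edge):
    """A hyperedge is violated when all its variables carry the same color."""
    return len({sigma[i] for i in edge}) == 1

def simulated_annealing(n, edges, beta0=0.5, beta_max=10.0, cooling=1.02, sweeps=2000):
    incident = [[] for _ in range(n)]
    for e in edges:
        for i in e:
            incident[i].append(e)
    sigma = [random.choice((0, 1)) for _ in range(n)]
    energy = sum(violated(sigma, e) for e in edges)
    beta = beta0
    for _ in range(sweeps):
        for _ in range(n):                     # one sweep = n attempted flips
            i = random.randrange(n)
            before = sum(violated(sigma, e) for e in incident[i])
            sigma[i] ^= 1
            delta = sum(violated(sigma, e) for e in incident[i]) - before
            if delta <= 0 or random.random() < math.exp(-beta * delta):
                energy += delta                # accept the flip (Metropolis rule)
            else:
                sigma[i] ^= 1                  # reject: undo the flip
        if energy == 0:
            return sigma                       # proper bicoloring found
        beta = min(beta * cooling, beta_max)   # geometric cooling
    return None

if __name__ == "__main__":
    n, k, alpha = 300, 4, 3.0
    solution = simulated_annealing(n, random_hypergraph(n, int(alpha * n), k), sweeps=500)
    print("solution found" if solution else "no solution found within the budget")
```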
We investigate the clustering transition undergone by an exemplary random constraint satisfaction problem, the bicoloring of $k$-uniform random hypergraphs, when its solutions are weighted non-uniformly, with a soft interaction between variables belonging to distinct hyperedges. We show that the threshold $\alpha_{\rm d}(k)$ for the transition can be further increased with respect to a restricted interaction within the hyperedges, and perform an asymptotic expansion of $\alpha_{\rm d}(k)$ in the large $k$ limit. We find that $\alpha_{\rm d}(k) = \frac{2^{k-1}}{k}(\ln k + \ln \ln k + \gamma_{\rm d} + o(1))$, where the constant $\gamma_{\rm d}$ is strictly larger than for the uniform measure over solutions.
Random constraint satisfaction problems undergo several phase transitions as the ratio between the number of constraints and the number of variables is varied. When this ratio exceeds the satisfiability threshold no more solutions exist; the satisfiable phase, for less constrained problems, is itself divided into an unclustered regime and a clustered one. In the latter, solutions are grouped in clusters of nearby solutions separated in configuration space from solutions of other clusters. In addition, the rigidity transition signals the appearance of so-called frozen variables in typical solutions: beyond this threshold most solutions belong to clusters with an extensive number of variables taking the same values in all solutions of the cluster. In this paper we refine the description of this phenomenon by estimating the location of the freezing transition, corresponding to the disappearance of all unfrozen solutions (not only typical ones). We also unveil phase transitions for the existence and uniqueness of locked solutions, in which all variables are frozen. From a technical point of view, we characterize atypical solutions with a number of frozen variables different from the typical value via a large deviation study of the dynamics of a stripping process (whitening) that unveils the frozen variables of a solution, building upon recent works on atypical trajectories of the bootstrap percolation dynamics. Our results also bear some relevance from an algorithmic perspective, previous numerical studies having shown that heuristic algorithms of various kinds usually output unfrozen solutions.
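The stripping process (whitening) mentioned in the abstract can be illustrated concretely. Below is a minimal sketch for the bicoloring of a k-uniform hypergraph, under the standard convention (an assumption made here) that a variable is forced by a hyperedge when all the other variables of that hyperedge carry one and the same definite color; variables that are never forced are replaced by a joker '*' until a fixed point is reached, and the colors that survive mark the frozen variables. This is only the basic fixed-point procedure, not the large-deviation analysis of its dynamics carried out in the paper.

```python
# Minimal whitening (stripping) sketch for hypergraph bicoloring.  Convention
# assumed here: variable i is "forced" by a hyperedge when all the other
# variables of that hyperedge have the same definite color, so i must take the
# opposite one.  Unforced variables are stripped to '*' until a fixed point.
STAR = "*"

def whitening(sigma, edges):
    """sigma: dict variable -> 0/1 (a proper bicoloring); edges: iterable of tuples."""
    state = dict(sigma)
    changed = True
    while changed:
        changed = False
        for i in list(state):
            if state[i] == STAR:
                continue
            forced = False
            for e in edges:
                if i not in e:
                    continue
                others = [state[j] for j in e if j != i]
                if STAR not in others and len(set(others)) == 1:
                    forced = True          # all the other variables share one color
                    break
            if not forced:
                state[i] = STAR            # strip the unconstrained variable
                changed = True
    return state                           # variables still carrying 0/1 are frozen

# Example: a single 3-uniform hyperedge {0,1,2} colored (0,0,1) whitens entirely:
# once variable 0 or 1 is stripped, variable 2 is no longer forced either.
print(whitening({0: 0, 1: 0, 2: 1}, [(0, 1, 2)]))
```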
We study the phase diagram and the algorithmic hardness of random 'locked' constraint satisfaction problems, and compare them to the commonly studied non-locked problems like satisfiability of boolean formulas or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This significantly simplifies the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.
We introduce a novel Entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random Constraint Satisfaction Problems (CSPs). First, we extend a recent result which, using a large-deviation analysis, shows that the geometry of the space of solutions of the Binary Perceptron Learning Problem (a prototypical CSP) contains regions of very high density of solutions. Despite being sub-dominant, these regions can be found by optimizing a local entropy measure. Building on these results, we construct a fast solver that relies exclusively on a local entropy estimate, and can be applied to general CSPs. We describe its performance not only for the Perceptron Learning Problem but also for the random $K$-Satisfiability Problem (another prototypical CSP with a radically different structure), and show numerically that a simple zero-temperature Metropolis search in the smooth local entropy landscape can reach sub-dominant clusters of optimal solutions in a small number of steps, while standard Simulated Annealing either requires extremely long cooling procedures or just fails. We also discuss how EdMC can heuristically be made even more efficient for the cases we studied.
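To make the local-entropy idea of the abstract concrete, here is a toy sketch on a tiny random $K$-SAT instance whose solutions can be enumerated by brute force. The local entropy of a configuration is taken here to be the number of solutions within Hamming distance d of it, and a zero-temperature Metropolis search accepts only flips that do not decrease this count. This is an illustrative, assumption-laden toy (the distance d, the instance sizes and the brute-force enumeration are choices made here), not the EdMC solver or the large-deviation machinery of the paper.

```python
# Toy illustration of a zero-temperature Metropolis search guided by a
# brute-force local entropy (number of solutions within Hamming distance d).
# Only workable for tiny n, since all 2^n configurations are enumerated.
import itertools
import random

def random_ksat(n, m, k):
    """m clauses; a clause is a list of (variable, required value) literals."""
    return [[(v, random.choice((0, 1))) for v in random.sample(range(n), k)]
            for _ in range(m)]

def satisfies(x, clauses):
    return all(any(x[v] == s for v, s in c) for c in clauses)

def local_entropy(x, solutions, d):
    """Count the enumerated solutions within Hamming distance d of x."""
    return sum(1 for s in solutions
               if sum(a != b for a, b in zip(x, s)) <= d)

def zero_temperature_metropolis(n, clauses, d=3, steps=5000):
    solutions = [s for s in itertools.product((0, 1), repeat=n)
                 if satisfies(s, clauses)]        # brute-force enumeration
    x = [random.choice((0, 1)) for _ in range(n)]
    score = local_entropy(x, solutions, d)
    for _ in range(steps):
        if satisfies(x, clauses):
            return x                              # landed inside a cluster
        i = random.randrange(n)
        x[i] ^= 1
        new_score = local_entropy(x, solutions, d)
        if new_score >= score:
            score = new_score                     # accept: local entropy not decreased
        else:
            x[i] ^= 1                             # reject the flip
    return None

if __name__ == "__main__":
    random.seed(0)
    n, k, m = 16, 3, 60
    print(zero_temperature_metropolis(n, random_ksat(n, m, k)))
```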