
Statistical Physics of Hard Optimization Problems

Posted by Lenka Zdeborova
Date of publication: 2008
Research field: Physics
Paper language: English
Author: Lenka Zdeborova





Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult: it is believed that, in the most difficult cases, the number of operations required to minimize the cost function grows exponentially with the system size. However, even for an NP-complete problem the instances arising in practice may in fact be easy to solve. The principal question we address in this thesis is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method originally developed to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems: random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we name locked constraint satisfaction problems, whose statistical description is easily solvable but which, from the algorithmic point of view, are even more challenging than canonical satisfiability.
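The notion of frozen variables can be made concrete on a toy scale. The following is a minimal brute-force sketch, not taken from the thesis (and nothing like the cavity-method machinery it actually uses): it enumerates all assignments of a small random 3-SAT instance and lists the variables taking the same value in every satisfying assignment. In the thesis, freezing is defined within a cluster of solutions; here, for simplicity, it is checked over the whole solution set, and all names and parameters are illustrative.

```python
# Brute-force sketch of "frozen" variables in a small random 3-SAT instance.
import itertools
import random

def random_3sat(n_vars, n_clauses, seed=0):
    """Clauses of 3 distinct variables with random signs: +v means v, -v means NOT v."""
    rng = random.Random(seed)
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfies(x, clauses):
    """x[i] is the truth value of variable i + 1."""
    return all(any(x[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

n_vars, n_clauses = 12, 45          # ratio 3.75, below the 3-SAT threshold ~4.27
clauses = random_3sat(n_vars, n_clauses)

solutions = [x for x in itertools.product([False, True], repeat=n_vars)
             if satisfies(x, clauses)]

if solutions:
    # A variable is frozen if it takes the same value in every satisfying assignment.
    frozen = [i + 1 for i in range(n_vars)
              if all(s[i] == solutions[0][i] for s in solutions)]
    print(f"{len(solutions)} satisfying assignments; frozen variables: {frozen}")
else:
    print("this particular instance happens to be unsatisfiable")
```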




Read also

We introduce and study the random locked constraint satisfaction problems. When increasing the density of constraints, they display a broad clustered phase in which the space of solutions is divided into many isolated points. While the phase diagram can be found easily, these problems, in their clustered phase, are extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. We thus propose new benchmarks of really hard optimization problems and provide insight into the origin of their typical hardness.
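As a rough caricature of a solution space broken into isolated points, the sketch below enumerates the solutions of a small system of random parity (XOR) checks and counts the connected components under single-variable flips: whenever every variable enters at least one check, flipping a single variable violates every check containing it, so each solution sits in its own cluster. This is only a stand-in chosen for brevity, not one of the paper's benchmarks; unlike the locked problems proposed here, XOR-SAT is algorithmically easy via Gaussian elimination, and the parities are set to zero only to guarantee satisfiability.

```python
# Single-flip clusters of a small random XOR-SAT instance (toy illustration).
import itertools
import random

rng = random.Random(1)
n_vars, n_checks = 10, 7
# Parities set to 0 so the all-zeros assignment guarantees satisfiability.
checks = [(rng.sample(range(n_vars), 3), 0) for _ in range(n_checks)]

def is_solution(x):
    return all(sum(x[i] for i in idx) % 2 == parity for idx, parity in checks)

sols = {x for x in itertools.product([0, 1], repeat=n_vars) if is_solution(x)}

def flip_clusters(sols):
    """Connected components of the graph joining solutions at Hamming distance 1."""
    seen, n_clusters = set(), 0
    for start in sols:
        if start in seen:
            continue
        n_clusters += 1
        stack = [start]
        seen.add(start)
        while stack:
            cur = stack.pop()
            for i in range(len(cur)):
                neighbour = cur[:i] + (1 - cur[i],) + cur[i + 1:]
                if neighbour in sols and neighbour not in seen:
                    seen.add(neighbour)
                    stack.append(neighbour)
    return n_clusters

print(f"{len(sols)} solutions split into {flip_clusters(sols)} single-flip clusters")
```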
Guilhem Semerjian (2007)
The set of solutions of random constraint satisfaction problems (zero energy groundstates of mean-field diluted spin glasses) undergoes several structural phase transitions as the amount of constraints is increased. This set first breaks down into a large number of well separated clusters. At the freezing transition, which is in general distinct from the clustering one, some variables (spins) take the same value in all solutions of a given cluster. In this paper we study the critical behavior around the freezing transition, which appears in the unfrozen phase as the divergence of the sizes of the rearrangements induced in response to the modification of a variable. The formalism is developed on generic constraint satisfaction problems and applied in particular to the random satisfiability of boolean formulas and to the coloring of random graphs. The computation is first performed in random tree ensembles, for which we underline a connection with percolation models and with the reconstruction problem of information theory. The validity of these results for the original random ensembles is then discussed in the framework of the cavity method.
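The rearrangements studied around the freezing transition can be mimicked by brute force on a tiny instance: start from a solution, force one variable to the opposite value, and count how many other variables must change to reach the nearest compatible solution. The sketch below is not the paper's cavity computation, and it measures rearrangements over the full solution set rather than within a single cluster; instance sizes and names are illustrative.

```python
# Average rearrangement size induced by forcing one variable to flip.
import itertools
import random

def random_3sat(n_vars, n_clauses, seed=2):
    rng = random.Random(seed)
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfies(x, clauses):
    return all(any((x[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses)

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

n, m = 12, 40
clauses = random_3sat(n, m)
sols = [x for x in itertools.product([0, 1], repeat=n) if satisfies(x, clauses)]

sizes = []
for s in sols:
    for i in range(n):
        compatible = [t for t in sols if t[i] != s[i]]
        if compatible:                      # skip variables frozen to s[i]
            # subtract 1 so the forced flip itself is not counted
            sizes.append(min(hamming(s, t) for t in compatible) - 1)

if sizes:
    print(f"average rearrangement size: {sum(sizes) / len(sizes):.2f}")
else:
    print("no solutions (or every variable frozen) for this instance")
```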
Identifying the relevant coarse-grained degrees of freedom in a complex physical system is a key stage in developing powerful effective theories in and out of equilibrium. The celebrated renormalization group provides a framework for this task, but its practical execution in unfamiliar systems is fraught with ad hoc choices, whereas machine learning approaches, though promising, often lack formal interpretability. Recently, the optimal coarse-graining in a statistical system was shown to exist, based on a universal but computationally difficult information-theoretic variational principle. This limited its applicability to only the simplest systems; moreover, its relation to the standard formalism of field theory was unclear. Here we present an algorithm employing state-of-the-art results in machine-learning-based estimation of information-theoretic quantities, overcoming these challenges. We use this advance to develop a new paradigm for identifying the most relevant field-theory operators describing the properties of the system, going beyond existing approaches to real-space renormalization. We demonstrate its power on an interacting model, where the emergent degrees of freedom are qualitatively different from the microscopic building blocks of the theory. Our results push the boundary of formally interpretable applications of machine learning, conceptually paving the way towards automated theory building.
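The variational principle referred to above can be spelled out on a system small enough for exact enumeration: among candidate coarse-graining rules for a block of spins, prefer the one whose coarse variable retains the most mutual information about the rest of the system. The toy sketch below compares two hand-picked rules on a short 1D Ising chain; the paper's contribution is estimating such quantities with machine learning where enumeration is impossible, so the chain length, temperature and candidate rules here are purely illustrative assumptions.

```python
# Compare coarse-graining rules by the mutual information their coarse variable
# retains about the rest of a tiny 1D Ising chain (exact enumeration).
import itertools
from math import exp, log

N, BETA = 8, 0.7                   # toy chain length and inverse temperature
BLOCK = (0, 1, 2)                  # coarse-grain this block of three spins

def boltzmann(N, beta):
    """Exact Boltzmann distribution of an open 1D Ising chain with unit coupling."""
    states = list(itertools.product([-1, 1], repeat=N))
    weights = [exp(beta * sum(s[i] * s[i + 1] for i in range(N - 1))) for s in states]
    Z = sum(weights)
    return states, [w / Z for w in weights]

def mutual_information(rule, states, probs):
    """I(coarse variable ; spins outside the block), in nats."""
    p_joint, p_c, p_e = {}, {}, {}
    for s, p in zip(states, probs):
        c = rule(tuple(s[i] for i in BLOCK))
        e = tuple(s[i] for i in range(N) if i not in BLOCK)
        p_joint[c, e] = p_joint.get((c, e), 0.0) + p
        p_c[c] = p_c.get(c, 0.0) + p
        p_e[e] = p_e.get(e, 0.0) + p
    return sum(p * log(p / (p_c[c] * p_e[e])) for (c, e), p in p_joint.items())

states, probs = boltzmann(N, BETA)
rules = {"block majority": lambda b: 1 if sum(b) > 0 else -1,
         "keep middle spin": lambda b: b[1]}
for name, rule in rules.items():
    print(f"{name:16s} I(coarse; environment) = {mutual_information(rule, states, probs):.4f}")
```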
In this letter we propose the use of physics techniques for entropy determination in constrained parameter optimization problems. The main feature of such techniques, the construction of an unbiased walk on energy space, suggests their use in the quest for optimal solutions of an optimization problem. Moreover, the entropy, and its associated density of states, give us information concerning the feasibility of solutions.
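A flat-histogram walk in the style of Wang and Landau is one standard way to realize such an unbiased walk on energy space and to extract the density of states; the sketch below is a generic illustration of that idea, not necessarily the letter's method. The energy is taken to be the number of violated clauses of a small random 3-SAT instance, and the flatness criterion and update schedule are illustrative choices.

```python
# Generic flat-histogram walk estimating ln g(E) for a small 3-SAT instance,
# where E is the number of violated clauses.
import math
import random

rng = random.Random(3)
n, m = 15, 60
clauses = [[v if rng.random() < 0.5 else -v
            for v in rng.sample(range(1, n + 1), 3)]
           for _ in range(m)]

def energy(x):
    """Number of violated clauses; x[i] in {0, 1} is the value of variable i + 1."""
    return sum(not any((x[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses)

lnG = [0.0] * (m + 1)              # running estimate of ln g(E)
hist = [0] * (m + 1)
f = 1.0                            # additive update to ln g, halved when the histogram is flat
x = [rng.randint(0, 1) for _ in range(n)]
E = energy(x)

while f > 1e-2:
    for _ in range(10000):
        i = rng.randrange(n)
        x[i] ^= 1
        E_new = energy(x)
        # accept with probability min(1, g(E) / g(E_new)) to flatten the walk in E
        if rng.random() < math.exp(min(0.0, lnG[E] - lnG[E_new])):
            E = E_new
        else:
            x[i] ^= 1              # reject: undo the flip
        lnG[E] += f
        hist[E] += 1
    visited = [h for h in hist if h > 0]
    if min(visited) > 0.8 * sum(visited) / len(visited):   # crude flatness check
        f /= 2.0
        hist = [0] * (m + 1)

base = min(v for v in lnG if v > 0)
print("estimated S(E) = ln g(E), up to an additive constant, for low energies:")
for e, v in enumerate(lnG[:6]):
    if v > 0:
        print(f"  E = {e}: {v - base:.1f}")
```

The E = 0 entry, if visited, gives the (relative) entropy of the feasible solutions, which is what makes the density of states informative about feasibility.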
A. Cabo, S. Curilef, A. Gonzalez (2009)
We propose a statistical mechanics for a general class of stationary and metastable equilibrium states. For this purpose, the Gibbs extremal conditions are slightly modified in order to be applied to a wide class of non-equilibrium states. As usual, it is assumed that the system maximizes the entropy functional $S$, subject to the standard conditions, i.e., constant energy and normalization of the probability distribution. However, an extra conserved constraint function $F$ is also assumed to exist, which forces the system to remain in the metastable configuration. Further, after assuming additivity for two quasi-independent subsystems, and that the new constraint commutes with the density matrix $\rho$, it is argued that $F$ should be a homogeneous function of the density matrix, at least for systems in which the spectrum is sufficiently dense to be considered continuous. The explicit form of $F$ turns out to be $F(p_{i})=p_{i}^{q}$, where the $p_i$ are the eigenvalues of the density matrix and $q$ is a real number to be determined. This number $q$ appears as a kind of Tsallis parameter, having the interpretation of the order of homogeneity of the constraint $F$. The procedure is applied to describe the results of the plasma experiment of Huang and Driscoll. The experimentally measured density is predicted with a precision similar to that obtained with the extremum-of-enstrophy and Tsallis procedures. However, the present results define the density at all radial positions. In particular, the smooth tail shown by the experimental distribution is predicted by the procedure. In this way, the scheme avoids the non-analyticity of the density profile at large distances that arises in both of the mentioned alternative procedures.
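A compact way to see how the exponent $q$ enters is to write the modified extremal conditions out explicitly. The short LaTeX sketch below assumes the Shannon/von Neumann form of the entropy and uses Lagrange multipliers (notation chosen here, not taken from the paper) for normalization, the mean energy and the extra constraint.

```latex
% Sketch of the modified Gibbs extremal conditions described above, assuming
% S = -\sum_i p_i \ln p_i and the constraint F = \sum_i p_i^{q}; the Lagrange
% multipliers alpha, beta, gamma are illustrative notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Maximizing the entropy subject to normalization, fixed mean energy and the
additional conserved quantity $F$,
\begin{equation}
  \delta\!\left[-\sum_i p_i \ln p_i
    - \alpha \sum_i p_i
    - \beta  \sum_i p_i E_i
    - \gamma \sum_i p_i^{\,q}\right] = 0 ,
\end{equation}
gives an implicit stationarity condition for the eigenvalues $p_i$ of the
density matrix,
\begin{equation}
  \ln p_i = -1 - \alpha - \beta E_i - \gamma\, q\, p_i^{\,q-1} .
\end{equation}
For $\gamma \to 0$ this reduces to the usual Gibbs weight
$p_i \propto e^{-\beta E_i}$, while a nonzero $\gamma$ deforms the
distribution in a Tsallis-like way controlled by the homogeneity exponent $q$.
\end{document}
```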