Identifying the relevant coarse-grained degrees of freedom in a complex physical system is a key stage in developing powerful effective theories in and out of equilibrium. The celebrated renormalization group provides a framework for this task, but its practical execution in unfamiliar systems is fraught with ad hoc choices, whereas machine learning approaches, though promising, often lack formal interpretability. Recently, the optimal coarse-graining in a statistical system was shown to exist, based on a universal, but computationally difficult, information-theoretic variational principle. This computational difficulty limited its applicability to only the simplest systems; moreover, its relation to the standard formalism of field theory was unclear. Here we present an algorithm employing state-of-the-art results in machine-learning-based estimation of information-theoretic quantities, which overcomes these challenges. We use this advance to develop a new paradigm for identifying the most relevant field theory operators describing properties of the system, going beyond existing approaches to real-space renormalization. We demonstrate its power on an interacting model, where the emergent degrees of freedom are qualitatively different from the microscopic building blocks of the theory. Our results push the boundary of formally interpretable applications of machine learning, conceptually paving the way towards automated theory building.
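The variational principle in question rewards coarse-grainings that retain maximal mutual information with the system's environment, and neural estimators make such an objective differentiable. Below is a minimal sketch of this idea, assuming a Donsker-Varadhan (MINE-style) lower bound in PyTorch; the `Critic` network, the linear `coarse_grain` map, and all sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the paper's code): train a neural
# estimator of the mutual information I(H; E) between a coarse-grained
# variable H and the environment E, via the Donsker-Varadhan bound
#   I(H; E) >= E_joint[T(h, e)] - log E_marg[exp(T(h, e))].
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores (coarse variable, environment) pairs."""
    def __init__(self, dim_h, dim_e, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_h + dim_e, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, h, e):
        return self.net(torch.cat([h, e], dim=-1))

def dv_lower_bound(critic, h, e):
    """Donsker-Varadhan lower bound on I(H; E) from a minibatch."""
    joint = critic(h, e).mean()                 # samples from p(h, e)
    e_shuffled = e[torch.randperm(e.shape[0])]  # break the pairing
    marg = (torch.logsumexp(critic(h, e_shuffled).squeeze(), 0)
            - math.log(e.shape[0]))
    return joint - marg

# Toy usage with random data standing in for Monte Carlo samples of a
# block (coarse-grained through a parametrized map) and its environment.
dim_v, dim_h, dim_e = 16, 4, 32
coarse_grain = nn.Linear(dim_v, dim_h)          # illustrative map
critic = Critic(dim_h, dim_e)
opt = torch.optim.Adam(
    list(critic.parameters()) + list(coarse_grain.parameters()), lr=1e-3)
v, env = torch.randn(256, dim_v), torch.randn(256, dim_e)
for _ in range(100):
    loss = -dv_lower_bound(critic, coarse_grain(v), env)  # maximize MI
    opt.zero_grad(); loss.backward(); opt.step()
```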
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult: it is believed that, in the most difficult cases, the number of operations required to minimize the cost function grows exponentially with the system size. However, even for an NP-complete problem, the instances arising in practice may in fact be easy to solve. The principal question we address in this thesis is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this hardness? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method, developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we call locked constraint satisfaction problems, whose statistical description is easily solvable but which are, from the algorithmic point of view, even more challenging than canonical satisfiability.
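To make the object of study concrete, the following small sketch (an illustration, not taken from the thesis) generates a random 3-SAT instance at a given clause-to-variable ratio `alpha` and decides it by brute force; typical instances are believed to be hardest near the satisfiability threshold $\alpha \approx 4.267$.

```python
# Illustrative sketch (not from the thesis): generate a random 3-SAT
# instance with N variables and alpha*N clauses, then decide it by
# brute force. Random 3-SAT is believed hardest near alpha ~ 4.267.
import itertools
import random

def random_3sat(n_vars, alpha, rng=random):
    """Each clause is 3 distinct variables with random signs."""
    clauses = []
    for _ in range(int(alpha * n_vars)):
        vars_ = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.choice([True, False])) for v in vars_])
    return clauses

def satisfiable(n_vars, clauses):
    """Brute force: exponential in n_vars, fine only for tiny instances."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[v] == sign for v, sign in c) for c in clauses):
            return True
    return False

formula = random_3sat(n_vars=15, alpha=4.27)
print("satisfiable:", satisfiable(15, formula))
```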
We propose a statistical mechanics for a general class of stationary and metastable equilibrium states. For this purpose, the Gibbs extremal conditions are slightly modified so that they can be applied to a wide class of non-equilibrium states. As usual, it is assumed that the system maximizes the entropy functional $S$ subject to the standard conditions, i.e., constant energy and normalization of the probability distribution. However, an extra conserved constraint function $F$ is also assumed to exist, which forces the system to remain in the metastable configuration. Further, after assuming additivity for two quasi-independent subsystems, and that the new constraint commutes with the density matrix $\rho$, it is argued that $F$ should be a homogeneous function of the density matrix, at least for systems in which the spectrum is sufficiently dense to be considered continuous. The explicit form of $F$ turns out to be $F(p_{i})=p_{i}^{q}$, where the $p_i$ are the eigenvalues of the density matrix and $q$ is a real number to be determined. This number $q$ appears as a kind of Tsallis parameter, with the interpretation of the order of homogeneity of the constraint $F$. The procedure is applied to describe the results of the plasma experiment of Huang and Driscoll. The experimentally measured density is predicted with a precision similar to that achieved by the extremum-of-enstrophy and Tsallis procedures. However, the present results define the density at all radial positions. In particular, the smooth tail shown by the experimental distribution is predicted by the procedure. In this way, the scheme avoids the non-analyticity of the density profile at large distances that arises in both of the mentioned alternative procedures.
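As a hedged reconstruction of the variational structure described above (my reading of the abstract, not the paper's derivation): maximizing $S=-\sum_i p_i \ln p_i$ with Lagrange multipliers $\alpha$, $\beta$, $\gamma$ for normalization, energy, and the extra constraint $F(p_i)=p_i^q$ gives the stationarity condition

$$\frac{\partial}{\partial p_i}\left[-\sum_j p_j \ln p_j - \alpha\Big(\sum_j p_j - 1\Big) - \beta\Big(\sum_j p_j E_j - E\Big) - \gamma\Big(\sum_j p_j^{q} - F\Big)\right] = 0,$$

$$\ln p_i + 1 + \alpha + \beta E_i + \gamma\, q\, p_i^{\,q-1} = 0,$$

an implicit equation for the eigenvalues $p_i$ that reduces to the Gibbs distribution as $\gamma \to 0$ and acquires Tsallis-like corrections controlled by $q$ otherwise.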
We demonstrate that, with an appropriate quantum correlation function, a real-space network model can be constructed to study phase transitions in quantum systems. For the three-dimensional bosonic system, the single-particle density matrix is adopted to construct the adjacency matrix. We show that the Bose-Einstein condensation transition can be interpreted as the transition into a small-world network, which is accurately captured by the small-world coefficient. For the one-dimensional disordered system, using the electron diffusion operator to build the adjacency matrix, we find that the Anderson-localized states create many weakly linked subgraphs, which significantly reduces the clustering coefficient and lengthens the shortest path. We show that the crossover from the delocalized to the localized regime as a function of the disorder strength can be identified as the loss of global connectivity, which is revealed by the small-world coefficient as well as by other independent measures such as the robustness, the efficiency, and the algebraic connectivity. Our results suggest that quantum phase transitions can be visualized in real space and characterized by network analysis with suitable choices of quantum correlation functions.
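As a self-contained illustration of this pipeline (the threshold value and the random stand-in for the correlation matrix are my assumptions, not the paper's): build an adjacency matrix by thresholding a correlation matrix, then compute the clustering, shortest-path, and small-world diagnostics with networkx.

```python
# Illustrative sketch (assumptions, not the paper's code): turn a
# correlation matrix into a graph by thresholding, then compute the
# network diagnostics mentioned in the abstract.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a quantum correlation matrix (e.g. a single-particle
# density matrix); here just a random symmetric matrix on 50 sites.
n = 50
m = rng.random((n, n))
corr = (m + m.T) / 2
np.fill_diagonal(corr, 0.0)

# Adjacency matrix: link sites whose correlation exceeds a threshold.
adj = (corr > 0.8).astype(int)
g = nx.from_numpy_array(adj)

if nx.is_connected(g):
    C = nx.average_clustering(g)
    L = nx.average_shortest_path_length(g)
    # Random-graph baseline with the same node and edge counts.
    g_rand = nx.gnm_random_graph(n, g.number_of_edges(), seed=0)
    if nx.is_connected(g_rand):
        C_r = nx.average_clustering(g_rand)
        L_r = nx.average_shortest_path_length(g_rand)
        sigma = (C / C_r) / (L / L_r)   # small-world coefficient
        print(f"C={C:.3f} L={L:.3f} sigma={sigma:.3f}")
else:
    print("graph fragmented into", nx.number_connected_components(g),
          "subgraphs (localized-like regime)")
```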
We introduce a general method for optimizing real-space renormalization-group transformations to study the critical properties of a classical system. The scheme is based on minimizing the Kullback-Leibler divergence between the distribution of the system and the normalized normalizing factor of the transformation, parametrized by a restricted Boltzmann machine. Using the trained optimal projector, we compute the thermal critical exponent of the two-dimensional Ising model and obtain a very accurate value, $y_t=1.0001(11)$, after the first step of the transformation.
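For intuition, here is a hedged sketch of what an RBM-parametrized projector can look like (an illustration of the general idea, not the paper's algorithm): each block of spins is mapped stochastically to a coarse spin through the RBM conditional distribution, with the weights `w` playing the role of the trainable transformation.

```python
# Hedged sketch (an illustration, not the paper's algorithm): an RBM
# kernel used as a stochastic real-space RG projector. Each 2x2 block
# of Ising spins v in {-1,+1} is mapped to one coarse spin h with
#   p(h = +1 | v) = sigmoid(2 * (w . v + b)),
# so the weights w parametrize the coarse-graining rule; for large
# uniform w this approaches the majority rule.
import numpy as np

rng = np.random.default_rng(1)

def coarse_grain(spins, w, b=0.0):
    """Apply the RBM kernel blockwise to an L x L spin configuration."""
    L = spins.shape[0]
    coarse = np.empty((L // 2, L // 2))
    for i in range(0, L, 2):
        for j in range(0, L, 2):
            v = spins[i:i+2, j:j+2].ravel()
            p_up = 1.0 / (1.0 + np.exp(-2.0 * (w @ v + b)))
            coarse[i // 2, j // 2] = 1 if rng.random() < p_up else -1
    return coarse

# Toy usage on a random configuration; w would be the trained parameters
# obtained by minimizing the KL objective described in the abstract.
spins = rng.choice([-1, 1], size=(8, 8))
w = np.full(4, 2.0)          # illustrative, near-majority-rule weights
print(coarse_grain(spins, w))
```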
The majority game, modelling a system of heterogeneous agents trying to behave in a similar way, is introduced and studied using methods of statistical mechanics. The stationary states of the game are given by the (local) minima of a particular Hopfield-like Hamiltonian. On the basis of a replica-symmetric calculation, we draw the phase diagram, which contains the analog of a retrieval phase. The number of metastable states is estimated using the annealed approximation. The results are compared with extensive numerical simulations.
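A minimal numerical illustration of such a game (a simplified setup of my own, not necessarily the paper's exact model) can be simulated in a few lines: each agent holds two fixed random strategies, keeps a score for each, and plays its currently better one; strategies are rewarded for agreeing with the aggregate action.

```python
# Minimal illustrative simulation of a majority game (my simplified
# setup, not necessarily the paper's exact model): N agents each hold
# two fixed random strategies mapping P information patterns to +/-1
# actions; a strategy is rewarded when its action agrees with the
# aggregate action A, so agents are driven to behave alike.
import numpy as np

rng = np.random.default_rng(2)
N, P, T = 101, 16, 2000

strategies = rng.choice([-1, 1], size=(N, 2, P))   # a_{i,s}^mu
scores = np.zeros((N, 2))
alignment = []

for _ in range(T):
    mu = rng.integers(P)                           # public information
    best = scores.argmax(axis=1)                   # each agent's choice
    actions = strategies[np.arange(N), best, mu]
    A = actions.sum()
    # Majority payoff: a strategy gains when its action aligns with A.
    scores += strategies[:, :, mu] * A / N
    alignment.append(abs(A) / N)

print("mean |A|/N over last half:", np.mean(alignment[T // 2:]))
```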