We present a deep reinforcement learning framework in which a machine agent is trained, by exploring the physical environment, to search for a policy that generates ground states of the square ice model. After training, the agent can propose a sequence of local moves that achieves this goal. Analysis of the trained policy and of the state value function indicates that the ice rule and the loop-closing condition are learned without prior knowledge. We test the trained policy as a sampler in Markov chain Monte Carlo and benchmark it against the loop algorithm as a baseline. This framework can be generalized to other models with topological constraints, where generating constraint-preserving states is difficult.
Neutron scattering, a.c. magnetic susceptibility, and specific heat studies have been carried out on polycrystalline Dy2Zr2O7. Unlike the pyrochlore spin ice Dy2Ti2O7, Dy2Zr2O7 crystallizes in the fluorite structure, and the magnetic Dy3+ moments randomly occupy the corner-sharing tetrahedral sublattice together with the non-magnetic Zr ions. Antiferromagnetic spin correlations develop below 10 K but remain dynamic down to 40 mK. These correlations extend over the length of two tetrahedron edges and grow to six nearest neighbors under an applied magnetic field of 20 kOe. No Pauling residual entropy is observed, and by 8 K the full entropy expected for a two-level system is released. We propose that the disorder melts the spin ice state seen in the chemically ordered Dy2Ti2O7 compound, but that the spins remain dynamic in a disordered, liquid-like state rather than freezing into the glass-like state one might intuitively expect.
In their seminal paper on scattering by an inhomogeneous solid, Debye and coworkers proposed a simple exponentially decaying function for the two-point correlation function of an idealized class of two-phase random media. Such Debye random media, which have been shown to be realizable, are singularly distinct from all other models of two-phase media in that they are entirely defined by their one- and two-point correlation functions. To our knowledge, there has been no determination of other microstructural descriptors of Debye random media. In this paper, we generate Debye random media in two dimensions using an accelerated Yeong-Torquato construction algorithm. We then ascertain microstructural descriptors of the constructed media, including their surface correlation functions, pore-size distributions, lineal-path function, and chord-length probability density function. Accurate semi-analytic and empirical formulas for these descriptors are devised. We compare our results for Debye random media to those of other popular models (overlapping disks and equilibrium hard disks), and find that the former model possesses a wider spectrum of hole sizes, including a substantial fraction of large holes. Our algorithm can be applied to generate other models defined by their two-point correlation functions, and their other microstructural descriptors can be determined and analyzed by the procedures laid out here.
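The construction idea can be sketched in a much simplified form: starting from a random binary image at the target volume fraction, one swaps pixel pairs and accepts swaps that bring the sampled two-point correlation closer to the Debye form S2(r) = phi^2 + phi(1 - phi)exp(-r/a). The sketch below (our own toy code, not the paper's accelerated algorithm; it uses the zero-temperature greedy limit of the stochastic construction, with row-sampled correlations and made-up parameters) illustrates the principle:

```python
import numpy as np

rng = np.random.default_rng(0)

def s2_rows(img, rmax):
    """Two-point correlation S2(r), sampled along rows with periodic wrap."""
    return np.array([(img * np.roll(img, r, axis=1)).mean()
                     for r in range(rmax)])

def debye_target(phi, a, rmax):
    """Debye form: S2(r) = phi^2 + phi * (1 - phi) * exp(-r / a)."""
    r = np.arange(rmax)
    return phi**2 + phi * (1.0 - phi) * np.exp(-r / a)

def construct(L=32, phi=0.5, a=3.0, rmax=8, steps=2000):
    img = (rng.random((L, L)) < phi).astype(float)
    target = debye_target(img.mean(), a, rmax)

    def cost(im):
        return float(((s2_rows(im, rmax) - target) ** 2).sum())

    e0 = e = cost(img)
    for _ in range(steps):
        # Swap one pixel of each phase: preserves the volume fraction.
        ones = np.argwhere(img == 1.0)
        zeros = np.argwhere(img == 0.0)
        p = tuple(ones[rng.integers(len(ones))])
        q = tuple(zeros[rng.integers(len(zeros))])
        img[p], img[q] = 0.0, 1.0
        e_new = cost(img)
        if e_new <= e:                    # greedy accept
            e = e_new
        else:
            img[p], img[q] = 1.0, 0.0     # reject: undo the swap
    return img, e0, e
```

The actual Yeong-Torquato algorithm anneals with a decreasing temperature and samples correlations along several directions; the greedy variant here only demonstrates that pixel swaps monotonically reduce the mismatch with the Debye target.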
In this letter, we show how the Survey Propagation algorithm can be generalized to include external forcing messages and used to selectively address an exponential number of glassy ground states. These capabilities can be used to efficiently explore the space of solutions of random NP-complete constraint satisfaction problems, providing direct experimental evidence of replica symmetry breaking in large-size instances. Finally, a new lossy data compression protocol is introduced that exploits the clustered nature of the space of addressable states as a computational resource.
We propose an optimization method for mutual learning that converges to the same state as optimal ensemble learning within the framework of on-line learning, and we analyze its asymptotic properties using methods of statistical mechanics. The proposed model consists of two learning steps: two students first learn independently from a teacher, and then the students learn from each other through mutual learning. In mutual learning, the generalization error improves even though the teacher takes no part in this stage. However, when the initial overlaps (direction cosines) between teacher and students differ, the student with the larger initial overlap tends to end up with a larger generalization error than before the mutual learning. To overcome this problem, our proposed method optimizes the step sizes of the two students so as to minimize the asymptotic generalization error. Consequently, the optimized mutual learning converges to a generalization error identical to that of optimal ensemble learning. In addition, we show the relationship between the optimal step sizes of mutual learning and the integration mechanism of ensemble learning.
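The two-step protocol can be illustrated with a toy on-line perceptron simulation (our own sketch, not the paper's analysis; the Hebbian rule, dimensions, and fixed step sizes are assumptions, whereas the paper derives the optimal step sizes analytically):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # input dimension

def gen_error(J, w):
    """Simple-perceptron generalization error: eps = arccos(R) / pi,
    where R is the direction cosine (overlap) between J and w."""
    R = J @ w / (np.linalg.norm(J) * np.linalg.norm(w))
    return np.arccos(R) / np.pi

w = rng.standard_normal(N)    # teacher
J1 = rng.standard_normal(N)   # student 1
J2 = rng.standard_normal(N)   # student 2

# Step 1: each student learns independently from the teacher
# (on-line Hebbian rule, independent example streams).
for _ in range(3000):
    x = rng.standard_normal(N)
    J1 += np.sign(w @ x) * x / np.sqrt(N)
    x = rng.standard_normal(N)
    J2 += np.sign(w @ x) * x / np.sqrt(N)

eps_after_teacher = (gen_error(J1, w), gen_error(J2, w))

# Step 2: mutual learning -- each student now treats the other as its
# teacher; the teacher w is no longer consulted.
eta1 = eta2 = 1.0  # fixed step sizes here; the paper optimizes them
for _ in range(3000):
    x = rng.standard_normal(N)
    d1 = eta1 * np.sign(J2 @ x) * x / np.sqrt(N)
    d2 = eta2 * np.sign(J1 @ x) * x / np.sqrt(N)
    J1 += d1
    J2 += d2

eps_after_mutual = (gen_error(J1, w), gen_error(J2, w))
```

After step 1 both students are well aligned with the teacher; during step 2 they pull toward each other even though the teacher no longer supplies examples, which is the regime the paper's step-size optimization targets.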
The cortex exhibits self-sustained, highly irregular activity even under resting conditions, whose origin and function are yet to be fully understood. It is believed that this activity can be described as an asynchronous state stemming from the balance between excitation and inhibition, with important consequences for information processing, though a competing hypothesis claims that it stems from critical dynamics. By analyzing a parsimonious neural-network model with excitatory and inhibitory interactions, we elucidate a noise-induced mechanism, dubbed a Jensen's force, responsible for the emergence of a novel phase of arbitrarily low but self-sustained activity, which reproduces all the experimental features of asynchronous states. The simplicity of our framework allows for a deep understanding of asynchronous states from a broad statistical-mechanics perspective, and of the phase transitions to the other standard phases it exhibits, opening the door to reconciling the asynchronous-state and critical-state hypotheses. We argue that Jensen's forces are experimentally measurable and might be relevant in contexts beyond neuroscience.
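The mathematical ingredient behind such a noise-induced force is Jensen's inequality: for a nonlinear response function, the average response to a fluctuating input differs from the response to the average input, so fluctuations alone generate an effective drift. A toy numerical demonstration (our own illustration, not the paper's network model; the tanh response and the input statistics are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.tanh            # saturating nonlinear response
mu, sigma = 1.0, 0.8   # mean input and noise amplitude (toy values)

# A million noisy input samples around the mean mu.
x = mu + sigma * rng.standard_normal(1_000_000)

mean_of_f = f(x).mean()   # E[f(x)]: average response under noise
f_of_mean = f(mu)         # f(E[x]): the noiseless response

# Jensen gap: nonzero only because f is nonlinear and the input
# fluctuates; here tanh is concave around mu = 1, so the gap is
# negative -- noise effectively suppresses the response.
jensen_gap = mean_of_f - f_of_mean
```

In the network setting of the abstract, a gap of this kind acts as a force term in the coarse-grained dynamics; this snippet only makes the underlying inequality concrete.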