Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way to rectify this space that relies only on dataset properties and does not require a large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We combine all these insights into a rectified Data-Driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of parameter space. By learning pairwise Ising models from recordings of a large population of retinal neurons, we show that our algorithm outperforms the steepest descent method.
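To make the learning scheme above concrete, here is a minimal Python sketch of maximum-likelihood gradient ascent for a pairwise Ising model, with model averages estimated by Gibbs sampling. The rectification is approximated here by preconditioning each update with the data variance of the matched observable; this preconditioner, the function names (gibbs_sweep, fit_ising), and all parameter values are illustrative assumptions, not the paper's exact prescription.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(s, h, J, rng):
    """One Gibbs-sampling sweep over all spins (s_i = +/-1, zero-diagonal J)."""
    for i in range(s.size):
        field = h[i] + J[i] @ s                      # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))    # P(s_i = +1 | rest)
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

def moments(samples):
    """Empirical observables: magnetizations <s_i> and correlations <s_i s_j>."""
    m = samples.mean(axis=0)
    C = samples.T @ samples / samples.shape[0]
    return m, C

def fit_ising(data, n_steps=200, eta=0.05, n_samples=500, rectify=True):
    """Gradient ascent on the log-likelihood of +/-1 spin configurations.
    With rectify=True each update is divided by the data variance of the
    corresponding observable (a stand-in for the dataset-based metric)."""
    N = data.shape[1]
    m_data, C_data = moments(data)
    var_m = 1.0 - m_data**2 + 1e-6        # Var(s_i)     for +/-1 spins
    var_C = 1.0 - C_data**2 + 1e-6        # Var(s_i s_j) for +/-1 spins
    h, J = np.zeros(N), np.zeros((N, N))
    s = rng.choice([-1.0, 1.0], size=N)   # persistent Gibbs chain
    for _ in range(n_steps):
        chain = np.empty((n_samples, N))
        for t in range(n_samples):
            s = gibbs_sweep(s, h, J, rng)
            chain[t] = s
        m_model, C_model = moments(chain)
        gh, gJ = m_data - m_model, C_data - C_model   # log-likelihood gradient
        if rectify:                        # homogenize the curvature
            gh, gJ = gh / var_m, gJ / var_C
        h += eta * gh
        J += eta * gJ
        np.fill_diagonal(J, 0.0)           # no self-couplings
    return h, J

# Usage (hypothetical data): data = np.where(spikes > 0, 1.0, -1.0); h, J = fit_ising(data)

Run at a small but finite learning rate, the Gibbs-sampling noise in the model averages keeps the parameters fluctuating around the maximum rather than converging to it, which is the stationary regime the abstract relates to the posterior distribution.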
The critical slowing down of supercooled glass-forming liquids is usually understood at the mean-field level within the framework of Mode Coupling Theory, which provides a two-step relaxation scenario and power-law behavior of the time correlation function at dynamic criticality. In this work we derive the critical slowing-down exponents of spin-glass models undergoing discontinuous transitions by computing their Gibbs free energy and connecting the dynamic behavior to static in-state properties. Both the spherical and Ising ...
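For reference, the power-law behaviors invoked above are the standard Mode Coupling Theory critical laws (textbook results, not specific to this work), written here assuming a correlator $C(t)$ with plateau value $f_c$ at the dynamical transition: the critical decay toward the plateau and the von Schweidler departure from it,

$$ C(t) - f_c \sim t^{-a}, \qquad f_c - C(t) \sim t^{b}, $$

with both exponents fixed by a single exponent parameter $\lambda$ through

$$ \frac{\Gamma(1-a)^2}{\Gamma(1-2a)} \;=\; \lambda \;=\; \frac{\Gamma(1+b)^2}{\Gamma(1+2b)}, \qquad 0 < a < \tfrac{1}{2}, \quad 0 < b \le 1. $$

The program described in the abstract amounts to extracting these exponents, via $\lambda$, from static in-state quantities computed from the Gibbs free energy.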