Recently, Keating, Linden, and Wells [KLW] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of [KLW] to show that in fact, the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself, with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder.
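For illustration only, here is a minimal numerical sketch (a simplified toy stand-in, not the precise model of [KLW]): it builds one realization of a nearest-neighbor random Hamiltonian with Gaussian couplings and random Pauli components, and compares its empirical spectral measure, with no ensemble averaging, against the standard Gaussian.

```python
# Toy sketch: empirical spectral measure of one realization of a nearest-neighbour
# random Hamiltonian on n qubits, compared with the standard Gaussian.
# This is a simplified stand-in for the KLW-type models, not the authors' code.
import numpy as np
from math import erf

I2 = np.eye(2)
paulis = [np.array([[0, 1], [1, 0]]),        # X
          np.array([[0, -1j], [1j, 0]]),     # Y
          np.array([[1, 0], [0, -1]])]       # Z

def site_op(op, site, n):
    """Embed a single-qubit operator acting on `site` into the n-qubit space."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def toy_hamiltonian(n, rng):
    """Sum of nearest-neighbour terms J_i * P_i P_{i+1}, normalized to unit spectral variance."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):
        a, b = rng.integers(0, 3, size=2)    # random Pauli components
        J = rng.normal()                     # Gaussian coupling
        H += J * site_op(paulis[a], i, n) @ site_op(paulis[b], i + 1, n)
    return H / np.sqrt(n - 1)

rng = np.random.default_rng(0)
eigs = np.linalg.eigvalsh(toy_hamiltonian(10, rng))   # one realization, no averaging
emp_cdf = np.arange(1, eigs.size + 1) / eigs.size
gauss_cdf = np.array([0.5 * (1 + erf(x / np.sqrt(2))) for x in eigs])
print("sup |F_emp - Phi| ~", np.max(np.abs(emp_cdf - gauss_cdf)))
```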
We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that, over the years, have spawned from Dropout (Hinton et al., 2012). Modeled on the idea that neurons in the brain may not fire, dropout algorithms in practice multiply the weight matrices of an NN component-wise by independently drawn random matrices with $\{0,1\}$-valued entries during each iteration of the Feedforward-Backpropagation algorithm. This paper presents a probability-theoretic proof that, for any NN topology and differentiable, polynomially bounded activation functions, if we project the NN's weights onto a compact set and use a dropout algorithm, then the weights converge to a unique stationary set of a projected system of Ordinary Differential Equations (ODEs). We also establish an upper bound on the rate of convergence of Gradient Descent (GD) on the limiting ODEs of dropout algorithms for arborescences (a class of trees) of arbitrary depth and with linear activation functions.
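A minimal sketch of the masking step described above, assuming a fully connected network in NumPy (the helper name `dropout_forward` and the keep probability `p` are illustrative, not from the paper): each forward pass multiplies the weight matrices entry-wise by independently drawn $\{0,1\}$-valued masks.

```python
# Sketch of dropout as component-wise masking of weight matrices (illustrative only).
import numpy as np

def dropout_forward(x, weights, p, rng, activation=np.tanh):
    """One stochastic forward pass through a fully connected network.

    weights : list of weight matrices W_1, ..., W_L
    p       : probability that an individual weight is kept (entry of the mask is 1)
    """
    h = x
    for W in weights:
        mask = rng.binomial(1, p, size=W.shape)   # independent {0,1}-valued entries
        h = activation((mask * W) @ h)            # component-wise masked weights
    return h

rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
y = dropout_forward(rng.normal(size=4), weights, p=0.8, rng=rng)
```

In training, a fresh set of masks would be drawn at every Feedforward-Backpropagation iteration, and gradients would only flow through the retained weights.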
We use a probabilistic approach to study the rate of convergence to equilibrium for a collisionless (Knudsen) gas in dimension equal to or larger than 2. The use of a coupling between two stochastic processes allows us to extend and refine, in total variation distance, the polynomial rate of convergence given in [AG11] and [KLT13]. This is, to our knowledge, the first quantitative result in collisionless kinetic theory in dimension equal to or larger than 2 that does not require any symmetry of the domain, nor a monokinetic regime. Our study is also more general in terms of reflection at the boundary: we allow for rather general diffusive reflections and for a specular reflection component.
The cavity and TAP equations are high-dimensional systems of nonlinear equations for the local magnetization in the Sherrington-Kirkpatrick model. In the seminal work [Comm. Math. Phys., 325(1):333-366, 2014], Bolthausen introduced an iterative scheme that produces an asymptotic solution to the TAP equations if the model lies inside the Almeida-Thouless transition line. However, it was unclear whether this asymptotic solution coincides with the local magnetization. In this work, motivated by the cavity equations, we introduce a new iterative scheme and establish a weak law of large numbers. We show that our new scheme is asymptotically the same as the so-called Approximate Message Passing algorithm, a generalization of Bolthausen's iteration, which has been widely adopted in compressed sensing, Bayesian inference, etc. Based on this, we confirm that our cavity iteration and Bolthausen's scheme both converge to the local magnetization as long as the overlap is locally uniformly concentrated.
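For orientation, here is a heuristic sketch of an AMP-type iteration of the kind referred to above, under the standard SK normalization (couplings of variance $1/N$, inverse temperature $\beta$, external field $h$); the exact scheme, constants, and Onsager correction of the paper should be taken from the paper itself.

```python
# Heuristic AMP-type iteration for the SK model (illustrative sketch, not the
# authors' scheme verbatim): naive TAP fixed-point iteration plus the Onsager
# correction term involving the previous iterate.
import numpy as np

def amp_iteration(N=2000, beta=0.3, h=0.5, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.normal(scale=1 / np.sqrt(N), size=(N, N))
    G = (G + G.T) / np.sqrt(2)                 # symmetric couplings, variance ~ 1/N
    m_prev = np.zeros(N)
    m = np.full(N, np.tanh(h))                 # initialization
    for _ in range(n_iter):
        q = np.mean(m ** 2)                    # empirical overlap
        field = beta * G @ m + h - beta ** 2 * (1 - q) * m_prev   # Onsager correction
        m_prev, m = m, np.tanh(field)
    return m

m = amp_iteration()   # approximate local magnetizations in the high-temperature regime
```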
We derive mean-field equations for a general class of ferromagnetic spin systems with an explicit error bound in finite volumes. The proof is based on a link between the mean-field equation and the free convolution formalism of random matrix theory, which we exploit in terms of a dynamical method. We present three sample applications of our results to Kac interactions, randomly diluted models, and models with an asymptotically vanishing external field.
Partially-Observable Markov Decision Processes (POMDPs) are a well-known stochastic model for sequential decision making under limited information. We consider the EXPTIME-hard problem of synthesising policies that almost-surely reach some goal state without ever visiting a bad state. In particular, we are interested in computing the winning region, that is, the set of system configurations from which a policy exists that satisfies the reachability specification. A direct application of such a winning region is the safe exploration of POMDPs by, for instance, restricting the behavior of a reinforcement learning agent to the region. We present two algorithms: a novel SAT-based iterative approach and a decision-diagram-based alternative. The empirical evaluation demonstrates the feasibility and efficacy of the approaches.