Given a target function $U$ to minimize on a finite state space $\mathcal{X}$, a proposal chain with generator $Q$, and a cooling schedule $T(t)$ that depends on time $t$, in this paper we study two types of simulated annealing (SA) algorithms with generators $M_{1,t}(Q,U,T(t))$ and $M_{2,t}(Q,U,T(t))$, respectively. While $M_{1,t}$ is the classical SA algorithm, we introduce a simple and improved variant, which we call $M_{2,t}$, that provably converges faster. When $T(t) > c_{M_2}/\log(t+1)$ follows the logarithmic cooling schedule, our proposed algorithm is strongly ergodic both in total variation and in relative entropy, and converges to the set of global minima, where $c_{M_2}$ is a constant that we identify explicitly. If $c_{M_1}$ is the optimal hill-climbing constant that appears in the logarithmic cooling of $M_{1,t}$, we show that $c_{M_1} \geq c_{M_2}$ and give simple conditions under which $c_{M_1} > c_{M_2}$; in this regime our proposed $M_{2,t}$ converges under a faster logarithmic cooling. The other situation we investigate corresponds to $c_{M_1} > c_{M_2} = 0$, where we give a class of fast, non-logarithmic cooling schedules that work for $M_{2,t}$ (but not for $M_{1,t}$). In addition to these asymptotic convergence results, we also compare and analyze the finite-time behaviour of the two annealing algorithms. Finally, we present two algorithms to simulate $M_{2,t}$.
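For orientation, a minimal sketch of the classical SA chain $M_{1,t}$ (Metropolis-type acceptance driven by a proposal chain $Q$ under the logarithmic schedule $T(t) = c/\log(t+1)$) might look as follows; the target function, neighbourhood structure, and constant `c` are illustrative placeholders, and this is not the paper's improved $M_{2,t}$ or its simulation algorithms.

```python
import math
import random

def simulated_annealing(U, neighbors, x0, c=1.0, n_steps=100_000, seed=0):
    """Classical (Metropolis-type) simulated annealing on a finite state space
    with the logarithmic cooling schedule T(t) = c / log(t + 1)."""
    rng = random.Random(seed)
    x = x0
    best = x
    for t in range(1, n_steps + 1):
        T = c / math.log(t + 1)                 # logarithmic cooling
        y = rng.choice(neighbors(x))            # proposal from the base chain Q
        dU = U(y) - U(x)
        # Accept downhill moves always, uphill moves with probability exp(-dU/T)
        if dU <= 0 or rng.random() < math.exp(-dU / T):
            x = y
        if U(x) < U(best):
            best = x
    return best

# Illustrative target: minimize U over {0, ..., 20} with nearest-neighbour proposals
if __name__ == "__main__":
    U = lambda x: (x - 13) ** 2 + 3 * math.cos(2.5 * x)
    neighbors = lambda x: [max(0, x - 1), min(20, x + 1)]
    print(simulated_annealing(U, neighbors, x0=0))
```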
Using classical simulated annealing to maximise a function $\psi$ defined on a subset of $\mathbb{R}^d$, the probability $P(\psi(\theta_n) \leq \psi_{\max} - \epsilon)$ tends to zero at a logarithmic rate as $n$ increases; here $\theta_n$ is the state in the $n$-th stage.
We solve a problem of non-convex stochastic optimisation with the help of simulated annealing based on Lévy flights with a variable stability index. The search for the ground state of an unknown potential is non-local due to the large jumps of the Lévy flight process.
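A minimal one-dimensional sketch of such a Lévy-flight annealing step, assuming symmetric alpha-stable proposals via `scipy.stats.levy_stable`, is given below; the cooling schedule, the ramp of the stability index, and the potential are illustrative placeholders, not the schedules of the cited work.

```python
import numpy as np
from scipy.stats import levy_stable

def levy_annealing(U, x0, n_steps=5_000, seed=0):
    """Simulated annealing driven by symmetric alpha-stable (Levy flight) proposals.
    Both the temperature and the stability index alpha are varied over time."""
    rng = np.random.default_rng(seed)
    x, best = x0, x0
    for t in range(1, n_steps + 1):
        T = 1.0 / np.log(t + 1)            # illustrative cooling schedule
        alpha = 1.0 + t / n_steps          # stability index from ~1 (heavy tails) to 2 (Gaussian)
        step = levy_stable.rvs(alpha, beta=0.0, scale=T, random_state=rng)
        y = x + step                       # non-local move: heavy tails allow big jumps
        dU = U(y) - U(x)
        if dU <= 0 or rng.random() < np.exp(-dU / T):
            x = y
        if U(x) < U(best):
            best = x
    return best

# Illustrative double-well potential with a shallow local minimum
if __name__ == "__main__":
    U = lambda x: (x**2 - 4)**2 + 2 * x
    print(levy_annealing(U, x0=5.0))
```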
Finding a ground state of a given Hamiltonian on a graph $G=(V,E)$ is an important but hard problem. One potential method is to use Markov chain Monte Carlo to sample the Gibbs distribution, whose highest peaks correspond to the ground state.
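For instance, a minimal Metropolis sampler of the Gibbs distribution $\pi_\beta(\sigma) \propto e^{-\beta H(\sigma)}$ for an Ising-type Hamiltonian on a graph could be sketched as follows; the graph, Hamiltonian, and inverse temperature are illustrative choices, not those of the cited work.

```python
import math
import random

def gibbs_metropolis(edges, n_nodes, beta=2.0, n_sweeps=2_000, seed=0):
    """Metropolis sampling of pi(sigma) ~ exp(-beta * H(sigma)) for the Ising-type
    Hamiltonian H(sigma) = -sum_{(i,j) in E} sigma_i * sigma_j on a graph G=(V,E)."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_nodes)]
    adj = [[] for _ in range(n_nodes)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    for _ in range(n_sweeps * n_nodes):
        i = rng.randrange(n_nodes)
        # Energy change from flipping spin i
        dH = 2 * spins[i] * sum(spins[j] for j in adj[i])
        if dH <= 0 or rng.random() < math.exp(-beta * dH):
            spins[i] = -spins[i]
    return spins

# Illustrative graph: a 4-cycle; at large beta samples concentrate near ground states
if __name__ == "__main__":
    print(gibbs_metropolis(edges=[(0, 1), (1, 2), (2, 3), (3, 0)], n_nodes=4))
```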
We propose a new stochastic algorithm (generalized simulated annealing) for computationally finding the global minimum of a given (not necessarily convex) energy/cost function defined in a continuous D-dimensional space. This algorithm recovers, as p
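As a usage sketch (not the paper's own code), SciPy's `scipy.optimize.dual_annealing`, which draws on generalized simulated annealing ideas, can be applied to a non-convex function on a continuous $D$-dimensional box; the objective below is an illustrative placeholder.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Illustrative non-convex objective on a continuous D-dimensional box (D = 2 here)
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
result = dual_annealing(rastrigin, bounds, seed=42)
print(result.x, result.fun)   # expected to land near the global minimum at the origin
```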
We consider the problem of minimizing $\phi$-divergences between a given probability measure $P$ and subsets $\Omega$ of the vector space $\mathcal{M}_\mathcal{F}$ of all signed finite measures which integrate a given class $\mathcal{F}$ of bounded or unbounded functions.
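For reference (not part of the cited abstract), the $\phi$-divergence being minimized is usually defined, for a convex function $\phi$ with $\phi(1)=0$ and a measure $Q$ absolutely continuous with respect to $P$, by
$$D_\phi(Q,P) \;=\; \int \phi\!\left(\frac{dQ}{dP}\right) dP,$$
and is set to $+\infty$ otherwise.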