
Estimating errors reliably in Monte Carlo simulations of the Ehrenfest model

Published by Matthias Troyer
Publication date: 2009
Research field: Physics
Research language: English





Using the Ehrenfest urn model we illustrate the subtleties of error estimation in Monte Carlo simulations. We discuss how the smooth results of correlated sampling in Markov chains can fool one's perception of the accuracy of the data, and show, by both analytical and numerical methods, how to obtain reliable error estimates from correlated samples.
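To make the pitfall concrete, the following Python sketch simulates the Ehrenfest urn and performs a binning analysis, a standard way to extract reliable errors from correlated Markov-chain data. The parameters, function names, and bin depth are illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def ehrenfest_chain(n_balls=100, n_steps=2**16):
        """Ehrenfest urn: at each step a ball chosen uniformly at random
        switches urns. Returns the occupation fraction of urn A."""
        k = n_balls // 2                      # balls currently in urn A
        frac = np.empty(n_steps)
        for t in range(n_steps):
            if rng.random() < k / n_balls:    # picked a ball from urn A
                k -= 1
            else:
                k += 1
            frac[t] = k / n_balls
        return frac

    def binning_errors(samples, max_level=12):
        """Naive error of the mean at successive binning levels. The
        estimate grows with bin size until the bins decorrelate; the
        plateau value is the reliable error."""
        x = np.asarray(samples, dtype=float)
        errors = []
        for _ in range(max_level):
            errors.append(x.std(ddof=1) / np.sqrt(len(x)))
            x = x[: 2 * (len(x) // 2)]        # drop an odd trailing sample
            x = 0.5 * (x[0::2] + x[1::2])     # merge neighbouring bins
            if len(x) < 32:                   # too few bins left
                break
        return errors

    data = ehrenfest_chain()
    for level, err in enumerate(binning_errors(data)):
        print(f"bin size 2^{level:2d}: error estimate {err:.5f}")

The level-0 entry is the deceptively small naive error suggested by the smooth correlated data; the plateau at large bin sizes is the honest one.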




Read also

We introduce a variant of the Hybrid Monte Carlo (HMC) algorithm to address large-deviation statistics in stochastic hydrodynamics. Based on the path-integral approach to stochastic (partial) differential equations, our HMC algorithm samples space-time histories of the dynamical degrees of freedom under the influence of random noise. First, we validate and benchmark the HMC algorithm by reproducing multiscale properties of the one-dimensional Burgers equation driven by Gaussian, white-in-time noise. Second, we show how to implement an importance sampling protocol that enhances, by orders of magnitude, the probability of sampling extreme and rare events, making it possible to estimate moments of field variables of extremely high order (up to 30 and beyond). By employing reweighting techniques, we map the biased configurations back to the original probability measure in order to probe their statistical importance. Finally, we show that by biasing the system towards very intense negative gradients, the HMC algorithm is able to explore the statistical fluctuations around instanton configurations. Our results are also relevant to lattice gauge theory, since they provide insight into reweighting techniques.
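The reweighting step described above has a simple generic form: expectation values under the original measure P are recovered from samples drawn under a biased measure Q via the likelihood-ratio weights w = p/q. The one-dimensional Gaussian toy below only illustrates that mechanics; it is not the paper's path-integral HMC for hydrodynamics.

    import numpy as np

    rng = np.random.default_rng(0)

    # Target P: standard normal; rare event of interest: X > 4.
    # Biased sampler Q: a normal shifted into the tail, so the event is common.
    shift, n = 4.0, 100_000
    x = rng.normal(loc=shift, scale=1.0, size=n)      # draws from Q

    # Likelihood-ratio weights w = p(x)/q(x), computed in log space;
    # the Gaussian normalization constants cancel.
    log_w = -0.5 * x**2 + 0.5 * (x - shift)**2
    w = np.exp(log_w)

    reweighted = np.mean((x > 4.0) * w)               # unbiased estimate under P
    direct = np.mean(rng.normal(size=n) > 4.0)        # brute force, for contrast
    print(f"reweighted: {reweighted:.3e}   direct: {direct:.3e}   exact: 3.167e-05")

With 10^5 samples, direct sampling sees only a handful of events, while the reweighted estimator resolves the probability to a few percent; this is the leverage the abstract exploits for high-order moments.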
Ising Monte Carlo simulations of the random-field Ising system Fe(0.80)Zn(0.20)F2 are presented for H = 10 T. The specific-heat critical behavior is consistent with alpha ≈ 0, and the staggered magnetization with beta ≈ 0.25 ± 0.03.
Parallel tempering Monte Carlo has proven to be an efficient method in optimization and sampling applications. Having an optimized temperature set enhances the efficiency of the algorithm through more frequent replica visits to the temperature limits. The approaches for finding an optimal temperature set can be divided into two main categories. The methods of the first category distribute the replicas such that the swapping ratio between neighbouring replicas is constant and independent of the temperature values. The second category of techniques, including the feedback-optimized method, instead aims for a temperature distribution with higher density at simulation bottlenecks, resulting in temperature-dependent replica-exchange probabilities. In this paper, we compare the performance of various temperature-setting methods on both sparse and fully connected spin-glass problems as well as fully connected Wishart problems that have planted solutions. These include two classes of problems that have either continuous or discontinuous phase transitions in the order parameter. Our results demonstrate that there is no performance advantage for the methods that promote nonuniform swapping probabilities on spin-glass problems, where the order parameter has a smooth transition between phases at the critical temperature. However, on Wishart problems, which have a first-order phase transition at low temperatures, the feedback-optimized method exhibits a time-to-solution speedup of at least a factor of two over the other approaches.
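For the first category, a common concrete choice is a geometric temperature ladder, which yields roughly constant swap rates between neighbours when the specific heat varies slowly with temperature. The sketch below, with illustrative parameter names, builds such a ladder and shows the standard replica-exchange acceptance rule; it is a generic parallel-tempering ingredient, not code from the paper.

    import numpy as np

    def geometric_temperatures(t_min, t_max, n_replicas):
        """Geometric ladder T_k = t_min * r**k. Keeps neighbouring swap
        ratios roughly constant when the specific heat varies slowly."""
        r = (t_max / t_min) ** (1.0 / (n_replicas - 1))
        return t_min * r ** np.arange(n_replicas)

    def swap_accept(beta_a, e_a, beta_b, e_b, rng):
        """Metropolis rule for exchanging configurations between replicas
        at inverse temperatures beta_a, beta_b with energies e_a, e_b."""
        return rng.random() < np.exp(min(0.0, (beta_a - beta_b) * (e_a - e_b)))

    rng = np.random.default_rng(0)
    print(geometric_temperatures(0.1, 10.0, 8))
    print(swap_accept(1.0, -5.0, 0.5, -3.0, rng))

Feedback-optimized schedules instead iterate on measured replica flow to concentrate temperatures near bottlenecks, which is what pays off for the first-order Wishart transition.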
In this work we study the thermodynamic properties of ultrathin ferromagnetic dots using Monte Carlo simulations. We investigate the vortex density as a function of temperature and the vortex structure in monolayer dots with perpendicular anisotropy and long-range dipole interaction. The interplay between these two terms in the Hamiltonian leads to an interesting behavior of the thermodynamic quantities as well as the vortex density.
A modern graphics processing unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two-dimensional Ising model [T. Preis et al., J. Comp. Phys. 228, 4468 (2009)] to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we accelerate simulations on a single GPU by factors of up to 35 compared to an optimized single Central Processing Unit (CPU) core implementation that also employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) at the CPU level, a single Ising lattice can be updated in parallel by a cluster of GPUs. For large systems, the computation time scales nearly linearly with the number of GPUs used. As a proof of concept we reproduce the critical temperature of the 2D Ising model using finite-size scaling techniques.
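The checkerboard idea underlying the GPU implementation is easy to state in any array language: sites of one parity have all four neighbours on the opposite parity, so an entire sublattice can be updated simultaneously. Below is a NumPy sketch of one such Metropolis sweep; it illustrates the decomposition only and is not the authors' CUDA or multi-spin-coded kernel.

    import numpy as np

    rng = np.random.default_rng(0)

    def checkerboard_sweep(spins, beta):
        """One Metropolis sweep of the 2D Ising model: update the even
        and odd sublattices in turn; within a sublattice all sites are
        independent and can be updated in parallel."""
        ii, jj = np.indices(spins.shape)
        for parity in (0, 1):
            mask = (ii + jj) % 2 == parity
            # sum of the four nearest neighbours, periodic boundaries
            nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                  np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * spins * nn             # energy cost of flipping each spin
            flip = mask & (rng.random(spins.shape) < np.exp(-beta * dE))
            spins[flip] *= -1
        return spins

    L, beta = 64, 0.44                        # close to the critical coupling
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for _ in range(200):
        checkerboard_sweep(spins, beta)
    print("magnetization per spin:", spins.mean())

On a GPU the same parity split lets each thread own one site of the active sublattice; multi-spin coding additionally packs many binary spins into the bits of a single word.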