
A random batch Ewald method for particle systems with Coulomb interactions

Posted by Zhenli Xu
Publication date: 2020
Research field: Physics
Paper language: English





We develop a random batch Ewald (RBE) method for molecular dynamics simulations of particle systems with long-range Coulomb interactions, which achieves $O(N)$ complexity in each step of simulating the $N$-body system. The RBE method is based on the Ewald splitting of the Coulomb kernel, with a random mini-batch technique introduced to speed up the summation of the Fourier series for the long-range part of the splitting. Importance sampling is employed to reduce the induced force variance by taking advantage of the fast decay of the Fourier coefficients. The stochastic approximation is unbiased with controlled variance. An analysis for bounded force fields gives some theoretical support for the method. Simulations of two typical charged systems are presented to illustrate the accuracy and efficiency of the RBE method in comparison with results from Debye-Hückel theory and the classical Ewald summation, demonstrating that the proposed method is easy to implement, scales linearly, and is promising for many practical applications.
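As a rough illustration of the idea, the sketch below estimates only the Fourier-space (long-range) part of the Coulomb force in a cubic periodic box of side L: it draws a mini-batch of p frequencies from a distribution proportional to the Gaussian weight exp(-|k|^2/(4*alpha)) (the importance sampling mentioned in the abstract) and forms an unbiased force estimate using the precomputed normalization S = sum over k != 0 of exp(-|k|^2/(4*alpha)). The function names, the rounded-Gaussian frequency sampler, and the Ewald convention used are illustrative assumptions, not the authors' implementation; the real-space (erfc) part of the splitting and the self-energy correction are not shown.

```python
import numpy as np

def sample_frequencies(p, L, alpha, rng):
    """Draw p nonzero integer frequency vectors m with probability roughly
    proportional to exp(-pi^2 |m|^2 / (alpha L^2)), the Gaussian weight of the
    Ewald Fourier sum. Rounding i.i.d. Gaussians is an illustrative shortcut
    for a discrete sampler."""
    sigma = np.sqrt(alpha) * L / (np.sqrt(2.0) * np.pi)
    m = np.rint(rng.normal(0.0, sigma, size=(p, 3))).astype(int)
    while np.any(np.all(m == 0, axis=1)):          # k = 0 is excluded from the sum
        bad = np.all(m == 0, axis=1)
        m[bad] = np.rint(rng.normal(0.0, sigma, size=(int(bad.sum()), 3))).astype(int)
    return m

def rbe_fourier_force(q, r, L, alpha, p, S, rng):
    """Unbiased O(N p) estimate of the Fourier-space Coulomb forces, following the
    convention U_F = (2 pi / V) sum_{k != 0} exp(-|k|^2 / (4 alpha)) |rho(k)|^2 / |k|^2.
    S = sum_{k != 0} exp(-|k|^2 / (4 alpha)) depends only on alpha and L and is
    precomputed once."""
    k = 2.0 * np.pi / L * sample_frequencies(p, L, alpha, rng)    # (p, 3) sampled modes
    k2 = np.sum(k * k, axis=1)                                    # (p,)  |k|^2
    phase = k @ r.T                                               # (p, N) k . r_j
    rho_k = (q[None, :] * np.exp(1j * phase)).sum(axis=1)         # structure factors rho(k)
    # The Gaussian weight cancels against the sampling probability, leaving S / p.
    coeff = (4.0 * np.pi / L**3) * (S / p) / k2                   # (p,)
    im_part = np.imag(np.exp(-1j * phase) * rho_k[:, None])       # (p, N)
    f = -(coeff[:, None, None] * k[:, None, :] * im_part[:, :, None]).sum(axis=0)
    return q[:, None] * f                                         # (N, 3) forces
```

In a full MD step one would add the truncated real-space forces of the Ewald splitting; the $O(Np)$ estimate above is what replaces the full Fourier sum of the classical Ewald summation.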




Read also

Lei Li, Zhenli Xu, Yue Zhao (2020)
We propose a fast potential-splitting Markov chain Monte Carlo method which costs $O(1)$ time per step for sampling from equilibrium distributions (Gibbs measures) corresponding to particle systems with singular interaction kernels. We decompose the interaction potential into two parts: one is long range but smooth, and the other is short range but may be singular. To displace a particle, we first evolve a selected particle using the stochastic differential equation (SDE) under the smooth part with the idea of random batches, as commonly used in stochastic gradient Langevin dynamics. Then, we use the short-range part to perform a Metropolis rejection. Unlike classical Langevin dynamics, we run the SDE dynamics with random batches only for a short duration of time, so that the cost of the first step is $O(p)$, where $p$ is the batch size. The cost of the rejection step is $O(1)$ since the interaction used is short range. We justify the proposed random-batch Monte Carlo method, which combines the random batch and splitting strategies, both in theory and with numerical experiments. While giving results comparable to the classical Metropolis-Hastings algorithm for typical examples such as Dyson Brownian motion and Lennard-Jones fluids, our method saves considerable computing time.
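A minimal sketch of the two-step move described above is given below, under the simplifying assumption that the random-batch Langevin proposal for the smooth part needs no further correction, so only the short-range part enters the Metropolis test. The helpers grad_v1 (gradient of the smooth pair potential in its first argument) and v2_local (short-range energy of one particle against its near neighbours, e.g. via a cell list) are hypothetical names, and the precise acceptance rule analysed in the paper is glossed over here.

```python
import numpy as np

def rbmc_move(x, i, grad_v1, v2_local, beta, tau, p, n_sde, rng):
    """One move of particle i: random-batch Langevin under the smooth part,
    then a Metropolis test using only the short-range part."""
    N = len(x)
    xi_old = x[i].copy()
    xi = xi_old.copy()
    others = np.delete(np.arange(N), i)
    # Step 1: overdamped Langevin proposal under the smooth long-range part,
    # with the force estimated from a random batch of p other particles.
    for _ in range(n_sde):
        batch = rng.choice(others, size=p, replace=False)
        force = -(N - 1) * np.mean([grad_v1(xi, x[j]) for j in batch], axis=0)
        xi = xi + tau * force + np.sqrt(2.0 * tau / beta) * rng.normal(size=xi.shape)
    # Step 2: Metropolis rejection with the short-range (possibly singular) part;
    # only the O(1) near neighbours of particle i enter this energy difference.
    dE = v2_local(x, i, xi) - v2_local(x, i, xi_old)
    if rng.random() < min(1.0, np.exp(-beta * dE)):
        x[i] = xi    # accept the displacement
    return x
```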
Shi Jin, Lei Li, Yiqun Sun (2020)
We investigate several important issues regarding the Random Batch Method (RBM) for second-order interacting particle systems. First, we show the uniform-in-time strong convergence for second-order systems under suitable contraction conditions. Second, we propose the application of RBM to singular interaction kernels via a kernel splitting strategy and numerically investigate its application to molecular dynamics.
Shi Jin, Lei Li, Jian-Guo Liu (2018)
We develop Random Batch Methods for interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, so that the computational cost per time step is reduced from $O(N^2)$ to $O(N)$ for a system of $N$ particles with binary interactions. On one hand, these methods are efficient asymptotic-preserving schemes for the underlying particle systems, allowing $N$-independent time steps and capturing, in the $N \to \infty$ limit, the solution of the mean-field limit, which is a nonlinear Fokker-Planck equation; on the other hand, the stochastic processes generated by the algorithms can also be regarded as new models for the underlying problems. For one of the methods, we give a particle-number-independent error estimate under some special interactions. We then apply these methods to representative problems in mathematics, physics, and the social and data sciences, including the Dyson Brownian motion from random matrix theory, Thomson's problem, the distribution of wealth, opinion dynamics, and clustering. Numerical results show that the methods capture both the transient solutions and the global equilibrium in these problems.
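The sketch below shows one time step of the basic random batch idea for a first-order system $dX_i = \frac{1}{N-1}\sum_{j \neq i} K(X_i - X_j)\,dt + \sigma\,dB_i$: particles are reshuffled into batches of size p, and each particle interacts only with the other members of its own batch during that step. The Euler-Maruyama discretization and the vectorized kernel interface K are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def rbm_step(X, K, dt, sigma, p, rng):
    """One Euler-Maruyama step in which each particle interacts only with the
    other members of its randomly assigned batch of size p."""
    N, d = X.shape
    perm = rng.permutation(N)                        # random reshuffle into batches
    for start in range(0, N, p):
        batch = perm[start:start + p]
        xb = X[batch]                                # (b, d) positions in this batch
        diff = xb[:, None, :] - xb[None, :, :]       # (b, b, d) pairwise displacements
        kij = K(diff)                                # pairwise interaction forces
        idx = np.arange(len(batch))
        kij[idx, idx] = 0.0                          # remove self-interaction terms
        forces = kij.sum(axis=1) / max(len(batch) - 1, 1)
        X[batch] = xb + dt * forces + sigma * np.sqrt(dt) * rng.normal(size=xb.shape)
    return X
```

For instance, a smooth kernel such as K = lambda z: -z (linear attraction) can be passed directly; singular kernels like the Dyson interaction would need the batch diagonal handled more carefully than the simple zeroing used here.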
Shi Jin, Lei Li (2020)
The Random Batch Method proposed in our previous work [Jin et al., J. Comput. Phys., 400(1), 2020] is not only a numerical method for interacting particle systems and their mean-field limit, but can also be viewed as a model of a particle system in which particles interact, at discrete times, with a randomly selected mini-batch of particles. In this paper we investigate the mean-field limit of this model as the number of particles $N \to \infty$. Unlike the classical mean-field limit for interacting particle systems, where the law of large numbers plays the key role and chaos is propagated to later times, the mean-field limit here does not rely on the law of large numbers, and chaos is imposed at every discrete time. Despite this, we not only justify this mean-field limit (discrete in time) but also show that, as the discrete time interval $\tau \to 0$, the limit approaches the solution of a nonlinear Fokker-Planck equation arising as the mean-field limit of the original interacting particle system in the Wasserstein distance.
Suchuan Dong, Zongwei Li (2021)
In extreme learning machines (ELM), the hidden-layer coefficients are randomly set and fixed, while the output-layer coefficients of the neural network are computed by a least squares method. The randomly assigned coefficients in ELM are known to influence its performance and accuracy significantly. In this paper we present a modified batch intrinsic plasticity (modBIP) method for pre-training the random coefficients in ELM neural networks. The method is devised based on the same principle as the batch intrinsic plasticity (BIP) method, namely, enhancing the information transmission in every node of the neural network. It differs from BIP in two prominent respects. First, modBIP does not involve the activation function in its algorithm and can be applied with any activation function in the neural network. In contrast, BIP employs the inverse of the activation function in its construction and requires the activation function to be invertible (or monotonic). The modBIP method can work with the often-used non-monotonic activation functions (e.g. Gaussian, swish, Gaussian error linear unit, and radial-basis-type functions), with which BIP breaks down. Second, modBIP generates target samples on random intervals with a minimum size, which leads to highly accurate computation results when combined with ELM. The combined ELM/modBIP method is markedly more accurate than ELM/BIP in numerical simulations. Ample numerical experiments are presented with shallow and deep neural networks for function approximation and boundary/initial value problems with partial differential equations. They demonstrate that the combined ELM/modBIP method produces highly accurate simulation results, and that its accuracy is insensitive to the random-coefficient initializations in the neural network. This is in sharp contrast with ELM results without pre-training of the random coefficients.
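For context, the sketch below shows only the baseline ELM step that modBIP modifies: random, frozen hidden-layer coefficients and a least-squares solve for the output layer. The tanh activation, the ridge regularization, and the sampling ranges are illustrative assumptions; the modBIP pre-training of W and b described above is not reproduced here.

```python
import numpy as np

def elm_fit(X, Y, n_hidden, rng, reg=1e-10):
    """Fit a single-hidden-layer ELM: W and b are random and frozen; only the
    output weights beta are computed, by ridge-regularized least squares."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))      # random hidden weights, never trained
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random hidden biases, never trained
    H = np.tanh(X @ W + b)                           # hidden-layer feature matrix
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate the trained ELM on new inputs."""
    return np.tanh(X @ W + b) @ beta
```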