
A self-learning algorithm for biased molecular dynamics

Added by Gareth Tribello
Publication date: 2010
Fields: Physics
Language: English





A new self-learning algorithm for accelerated dynamics, reconnaissance metadynamics, is proposed that is able to work with a very large number of collective coordinates. Acceleration of the dynamics is achieved by constructing a bias potential in terms of a patchwork of one-dimensional, locally valid collective coordinates. These collective coordinates are obtained from trajectory analyses so that they adapt to any new features encountered during the simulation. We show how this methodology can be used to enhance sampling in real chemical systems, citing examples both from the physics of clusters and from the biological sciences.
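As a rough illustration of a bias built from locally valid one-dimensional coordinates, the hedged Python sketch below accumulates Gaussian hills along a 1D coordinate that is periodically re-fitted to the recent trajectory by PCA. The class name, parameters and the use of PCA are assumptions made for this sketch; the paper obtains its locally valid coordinates from an on-the-fly analysis of the trajectory, not from this simplified scheme.

```python
# Minimal sketch only: a metadynamics-style bias along adaptively re-fitted 1D
# coordinates. Not the published reconnaissance metadynamics implementation.
import numpy as np

class AdaptiveBias:
    def __init__(self, hill_height=0.1, hill_width=0.2):
        self.hills = []          # stored as (local_axis, origin, hill_center)
        self.w = hill_height     # Gaussian hill height
        self.sigma = hill_width  # Gaussian hill width
        self.axis = None         # current locally valid 1D coordinate
        self.origin = None

    def refit_axis(self, recent_frames):
        """Define a new local 1D coordinate as the leading PCA direction of
        the recently visited configurations (recent_frames: n x d array).
        Must be called before depositing hills."""
        self.origin = recent_frames.mean(axis=0)
        centered = recent_frames - self.origin
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.axis = vt[0]        # direction of largest recent fluctuation

    def deposit_hill(self, x):
        """Add a Gaussian hill centered on the current projection of x."""
        s = np.dot(x - self.origin, self.axis)
        self.hills.append((self.axis.copy(), self.origin.copy(), s))

    def bias_force(self, x):
        """Force from the accumulated patchwork of 1D Gaussian hills."""
        f = np.zeros_like(x)
        for axis, origin, s0 in self.hills:
            s = np.dot(x - origin, axis)
            dV_ds = -self.w * (s - s0) / self.sigma**2 \
                    * np.exp(-(s - s0)**2 / (2 * self.sigma**2))
            f -= dV_ds * axis    # chain rule: dV/dx = (dV/ds) * axis
        return f
```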



Related research

Jiuyang Liang, Pan Tan, Yue Zhao (2021)
The Coulomb interaction, which follows an inverse-square force law, quantifies the force between two stationary, electrically charged particles. Its long-range nature poses a major challenge to molecular dynamics simulations, which are among the main tools for problems at the nano- and micro-scale. Various algorithms speed up the pairwise Coulomb interactions to linear scaling, but their poor scalability limits the size of the systems that can be simulated. Here, we present an efficient molecular dynamics algorithm based on the random batch Ewald method for all-atom systems, in which the complete set of Fourier components in the Coulomb interaction is replaced by randomly selected mini batches. By simulating N-body systems of up to 100 million particles on 10 thousand CPU cores, we show that this algorithm achieves O(N) complexity, almost perfect scalability and an order-of-magnitude faster computational speed when compared with existing state-of-the-art algorithms. Further examination of our algorithm on distinct systems, including pure water, a micro-phase-separated electrolyte and a protein solution, demonstrates that the spatiotemporal information on all time and length scales investigated, as well as the thermodynamic quantities derived from our algorithm, are in perfect agreement with those obtained from existing algorithms. Our algorithm therefore provides a breakthrough in the scalability of computing the Coulomb interaction. It is particularly useful and cost-effective for simulating ultra-large systems, which were previously either impossible or very costly to study, and should thus benefit a broad range of scientific communities.
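The mini-batch idea can be pictured as replacing the full sum over Fourier modes by an importance-sampled batch. The sketch below is a hedged, simplified illustration: the function name, cutoff, prefactors and sampling scheme are assumptions made for this example and do not reproduce the published random batch Ewald algorithm or its variance-reduction details.

```python
# Hedged sketch: estimate the Ewald reciprocal-space forces from a random
# mini batch of k-vectors drawn with Gaussian importance weights.
import numpy as np

def reciprocal_force_batch(pos, q, box, alpha=1.0, kmax=8, batch_size=100, rng=None):
    """Unbiased mini-batch estimator of the reciprocal-space Coulomb forces."""
    rng = np.random.default_rng() if rng is None else rng
    L, V = box, box**3
    # Enumerate k-vectors inside the cutoff (excluding k = 0).
    grid = np.arange(-kmax, kmax + 1)
    kvecs = np.array([(i, j, l) for i in grid for j in grid for l in grid
                      if (i, j, l) != (0, 0, 0)]) * (2 * np.pi / L)
    k2 = np.sum(kvecs**2, axis=1)
    weights = np.exp(-k2 / (4 * alpha**2))        # Gaussian importance weights
    prob = weights / weights.sum()
    # Draw a mini batch of modes instead of summing over all of them.
    idx = rng.choice(len(kvecs), size=batch_size, p=prob)
    forces = np.zeros_like(pos)
    for i in idx:
        k = kvecs[i]
        kr = pos @ k                              # phases k . r_j
        rho_k = np.sum(q * np.exp(1j * kr))       # structure factor
        g = -(4 * np.pi / V) * weights[i] / k2[i] * q \
            * np.imag(np.exp(-1j * kr) * rho_k)
        # Importance-sampling correction keeps the estimator unbiased.
        forces += (g / (batch_size * prob[i]))[:, None] * k
    return forces
```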
We propose a fast method for the calculation of short-range interactions in molecular dynamics simulations. The so-called random-batch list method is a stochastic version of the classical neighbor-list method that avoids constructing a full Verlet list: it introduces two-level neighbor lists for each particle, with the neighboring particles located in a core region and a shell region, respectively. Interactions in the core region are computed directly. For the shell zone, we employ a random batch of interacting particles to reduce the number of interaction pairs. An error estimate for the algorithm is provided. We investigate the Lennard-Jones fluid by molecular dynamics simulations and show that this method can accelerate the simulations several-fold without loss of accuracy. The method is simple to implement, can be combined with linked-cell methods to further speed up and scale up the simulated systems, and can be straightforwardly extended to other interactions such as the short-range part of the Ewald sum; it is therefore promising for large-scale molecular dynamics simulations.
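The core/shell construction can be sketched as follows, with exact summation inside a core radius and an unbiased random-batch estimate for the shell. The radii, pair potential and batch size below are illustrative assumptions, not the paper's choices.

```python
# Hedged sketch of a two-level (core/shell) random-batch force evaluation.
import numpy as np

def lj_force(rij, eps=1.0, sigma=1.0):
    """Lennard-Jones force on particle i from particle j (rij = ri - rj)."""
    r2 = np.dot(rij, rij)
    sr6 = (sigma**2 / r2)**3
    return 24 * eps * (2 * sr6**2 - sr6) / r2 * rij

def force_on_particle(i, pos, r_core=2.0, r_cut=3.5, batch_size=8, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    rij = pos[i] - pos                       # displacements to all particles
    dist = np.linalg.norm(rij, axis=1)
    dist[i] = np.inf                         # exclude self-interaction
    core = np.where(dist < r_core)[0]
    shell = np.where((dist >= r_core) & (dist < r_cut))[0]
    f = np.zeros(3)
    for j in core:                           # core zone: exact pairwise sum
        f += lj_force(rij[j])
    if len(shell) > 0:                       # shell zone: random-batch estimate
        m = min(batch_size, len(shell))
        batch = rng.choice(shell, size=m, replace=False)
        # Rescaling by len(shell)/m keeps the force estimator unbiased.
        f += (len(shell) / m) * sum(lj_force(rij[j]) for j in batch)
    return f
```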
Molecular dynamics is one of the most commonly used approaches for studying the dynamics and statistical distributions of many physical, chemical, and biological systems using atomistic or coarse-grained models. It is often the case, however, that the interparticle forces drive motion on many time scales, and the efficiency of a calculation is limited by the choice of time step, which must be sufficiently small that the fastest force components are accurately integrated. Multiple time-stepping algorithms partially alleviate this inefficiency by assigning to each time scale an appropriately chosen step size. However, such approaches are limited by resonance phenomena, wherein motion on the fastest time scales limits the step sizes associated with slower time scales. In atomistic models of biomolecular systems, for example, resonances limit the largest time step to around 5-6 fs. In this paper, we introduce a set of stochastic isokinetic equations of motion that are shown to be rigorously ergodic and that can be integrated using a multiple time-stepping algorithm that can be easily implemented in existing molecular dynamics codes. The technique is applied to a simple, illustrative problem and then to a more realistic system, namely, a flexible water model. Using this approach, outer time steps as large as 100 fs are shown to be possible.
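For orientation, the sketch below shows only the plain multiple time-stepping (r-RESPA-style) structure of inner and outer steps that the resonance discussion refers to; it deliberately omits the stochastic isokinetic thermostat the paper introduces to defeat resonance, and the force-splitting callables are hypothetical.

```python
# Hedged sketch of plain multiple time stepping: slow forces kick at the outer
# step, fast forces are integrated with velocity Verlet at the inner step.
import numpy as np

def respa_step(x, v, mass, fast_force, slow_force, dt_outer, n_inner):
    """One outer step of an r-RESPA-style integrator (fast/slow force split)."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * slow_force(x) / mass      # half kick, slow forces
    for _ in range(n_inner):                           # inner velocity-Verlet loop
        v = v + 0.5 * dt_inner * fast_force(x) / mass
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * fast_force(x) / mass
    v = v + 0.5 * dt_outer * slow_force(x) / mass      # closing half kick
    return x, v
```

In practice the slow forces would typically be the long-range nonbonded terms and the fast forces the bonded terms; the resonance limit on the admissible outer step is exactly what the stochastic isokinetic scheme described above removes.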
J.-M. Caillol (2020)
We present a reversible and symplectic algorithm, called ROLL, for integrating the equations of motion in molecular dynamics simulations of simple fluids on a hypersphere $\mathcal{S}^d$ of arbitrary dimension $d$. It is derived in the framework of geometric algebra and shown to be mathematically equivalent to the RATTLE algorithm. An application to molecular dynamics simulation of the one-component plasma is briefly discussed.
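As a point of reference for the RATTLE equivalence mentioned above, here is a hedged sketch of a RATTLE-constrained velocity-Verlet step that keeps a single particle on a sphere of radius R. It is not the ROLL algorithm, which is formulated with geometric algebra for interacting particles on $\mathcal{S}^d$; the function and its arguments are assumptions for this illustration.

```python
# Hedged sketch: RATTLE-style step for the single holonomic constraint |r| = R.
import numpy as np

def rattle_sphere_step(r, v, force, mass, dt, R):
    """One velocity-Verlet step subject to |r| = R and r . v = 0."""
    a = force(r) / mass
    r_unc = r + dt * v + 0.5 * dt**2 * a              # unconstrained position
    # Position constraint: choose mu so that |r_unc + mu * r| = R; the constraint
    # force acts along the old radial direction, as in RATTLE.
    A = np.dot(r, r)
    B = np.dot(r, r_unc)
    C = np.dot(r_unc, r_unc) - R**2
    mu = (-B + np.sqrt(B**2 - A * C)) / A             # root closest to zero
    r_new = r_unc + mu * r
    v_half = (r_new - r) / dt                         # half-updated velocity
    # Velocity constraint: project the new velocity onto the tangent plane.
    v_new = v_half + 0.5 * dt * force(r_new) / mass
    v_new -= np.dot(r_new, v_new) / np.dot(r_new, r_new) * r_new
    return r_new, v_new
```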
Atomistic or ab-initio molecular dynamics simulations are widely used to predict thermodynamics and kinetics and relate them to molecular structure. A common approach to go beyond the time- and length-scales accessible with such computationally expensive simulations is the definition of coarse-grained molecular models. Existing coarse-graining approaches define an effective interaction potential to match defined properties of high-resolution models or experimental data. In this paper, we reformulate coarse-graining as a supervised machine learning problem. We use statistical learning theory to decompose the coarse-graining error and cross-validation to select and compare the performance of different models. We introduce CGnets, a deep learning approach, that learns coarse-grained free energy functions and can be trained by a force matching scheme. CGnets maintain all physically relevant invariances and allow one to incorporate prior physics knowledge to avoid sampling of unphysical structures. We show that CGnets can capture all-atom explicit-solvent free energy surfaces with models using only a few coarse-grained beads and no solvent, while classical coarse-graining methods fail to capture crucial features of the free energy surface. Thus, CGnets are able to capture multi-body terms that emerge from the dimensionality reduction.
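The force-matching scheme mentioned above can be illustrated with a deliberately trivial parametric model in place of the CGnets network: the loss is the mean squared deviation between the coarse-grained model's forces and the all-atom forces mapped onto the beads. Everything in the sketch (the harmonic-bond model, data shapes, function names) is an assumption for illustration; CGnets obtain their forces from the gradient of a learned free-energy network with physically motivated prior terms.

```python
# Hedged sketch of a force-matching loss with a toy two-bead harmonic-bond model.
import numpy as np

def cg_forces(pos, k_bond, r0):
    """Forces on two beads from a harmonic bond U = 0.5 * k * (|r12| - r0)^2."""
    r12 = pos[0] - pos[1]
    d = np.linalg.norm(r12)
    f = -k_bond * (d - r0) * r12 / d
    return np.stack([f, -f])

def force_matching_loss(params, cg_positions, mapped_forces):
    """Mean squared deviation between model forces and mapped all-atom forces."""
    k_bond, r0 = params
    err = 0.0
    for pos, f_ref in zip(cg_positions, mapped_forces):
        err += np.mean((cg_forces(pos, k_bond, r0) - f_ref) ** 2)
    return err / len(cg_positions)
```

Training then amounts to minimizing this loss over the model parameters; in the CGnets case the model is a neural network trained with automatic differentiation, and cross-validation is used to compare models as described above.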
