Quantized bounding volume hierarchies for neighbor search in molecular simulations on graphics processing units

Published by: Michael Howard
Publication date: 2019
Research field: Physics
Language: English

We present an algorithm for neighbor search in molecular simulations on graphics processing units (GPUs) based on bounding volume hierarchies (BVHs). The BVH is compressed into a low-precision, quantized representation to increase the BVH traversal speed compared to a previous implementation. We find that neighbor search using the quantized BVH is roughly two to four times faster than current state-of-the-art methods using uniform grids (cell lists) for a suite of benchmarks for common molecular simulation models. Based on the benchmark results, we recommend using the BVH instead of a single cell list for neighbor list generation in molecular simulations on GPUs.
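To illustrate the compression idea described in the abstract, the sketch below quantizes a child bounding box into 8-bit coordinates expressed relative to its parent's bounds. The struct layout and function names (QuantizedNode, quantizeAABB) are illustrative assumptions, not the authors' exact data structures.

#include <cuda_runtime.h>
#include <math.h>
#include <stdint.h>

struct AABB {
    float3 lo, hi;
};

// Hypothetical compressed node: child bounds stored as 8-bit fractions of the parent box.
struct QuantizedNode {
    uint8_t lo[3], hi[3];
    int left, right;  // child node indices
};

// Round lower bounds down and upper bounds up so the quantized box always encloses the original.
__host__ __device__ inline uint8_t quantizeDown(float x)
{
    return (uint8_t)floorf(fminf(255.f, fmaxf(0.f, 255.f * x)));
}
__host__ __device__ inline uint8_t quantizeUp(float x)
{
    return (uint8_t)fminf(255.f, fmaxf(0.f, ceilf(255.f * x)));
}

__host__ __device__ inline void quantizeAABB(const AABB& parent, const AABB& child,
                                             QuantizedNode& node)
{
    // Normalize the child bounds to [0,1] within the parent box before quantizing.
    const float ex = parent.hi.x - parent.lo.x;
    const float ey = parent.hi.y - parent.lo.y;
    const float ez = parent.hi.z - parent.lo.z;
    node.lo[0] = quantizeDown((child.lo.x - parent.lo.x) / ex);
    node.lo[1] = quantizeDown((child.lo.y - parent.lo.y) / ey);
    node.lo[2] = quantizeDown((child.lo.z - parent.lo.z) / ez);
    node.hi[0] = quantizeUp((child.hi.x - parent.lo.x) / ex);
    node.hi[1] = quantizeUp((child.hi.y - parent.lo.y) / ey);
    node.hi[2] = quantizeUp((child.hi.z - parent.lo.z) / ez);
}

Rounding outward keeps traversal conservative: a quantized box may admit a few extra candidate pairs, but it never misses a true neighbor, while the smaller nodes reduce the memory traffic per traversal step.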

Read also

The Kernel Polynomial Method (KPM) is one of the fast diagonalization methods used for simulations of quantum systems in condensed matter physics and chemistry. The algorithm is difficult to parallelize on a cluster computer or a supercomputer because of its fine-grained recursive calculations. This paper proposes an implementation of the KPM on recent graphics processing units (GPUs), where the recursive calculations can be parallelized in a massively parallel environment. The paper also presents performance evaluations with realistic simulation parameters: one case with increased computational intensity and one with increased memory usage. It concludes that the GPU implementation promises very high performance compared to the CPU implementation and reduces the overall simulation time.
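For context, the fine-grained recursion in KPM is the three-term Chebyshev recurrence |v_{n+1}> = 2H|v_n> - |v_{n-1}>: the loop over n is sequential, but each step is a matrix-vector product that maps naturally onto the GPU. A minimal sketch for a dense Hamiltonian is shown below; the kernel name and memory layout are illustrative assumptions, and a production code would use a sparse matrix and library routines.

// One Chebyshev recursion step: v_{n+1} = 2*H*v_n - v_{n-1} (dense H, one row per thread).
__global__ void chebyshevStep(const float* H, const float* vn, const float* vnm1,
                              float* vnp1, int N)
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;
    float acc = 0.f;
    for (int j = 0; j < N; ++j)
        acc += H[i * N + j] * vn[j];   // row i of H * v_n
    vnp1[i] = 2.f * acc - vnm1[i];     // three-term Chebyshev recurrence
}

The KPM moments mu_n = <v_0|v_n> then follow from a dot product after each step, which is also a standard parallel reduction on the GPU.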
In this paper, we develop a highly efficient molecular dynamics code fully implemented on graphics processing units for thermal conductivity calculations using the Green-Kubo formula. We compare two different schemes for force evaluation: a previously used thread-scheme, where a single thread is assigned to each particle and calculates the total force on that particle, and a new block-scheme, where a whole block is assigned to each particle and each thread in the block calculates one or several pair forces between the particle associated with the block and the neighbor particle(s) associated with the thread. For both schemes, two classical potentials, namely the Lennard-Jones potential and the rigid-ion potential, are implemented. While the thread-scheme performs slightly better for relatively large systems, the block-scheme performs much better for relatively small systems. The relative performance of the block-scheme over the thread-scheme also increases with increasing cutoff radius. We validate the implementation by calculating the lattice thermal conductivities of solid argon and lead telluride.
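A minimal sketch of the block-scheme described above, assuming a padded neighbor list and a Lennard-Jones pair force: one block handles particle i, each thread accumulates a strided subset of i's pair forces, and a shared-memory tree reduction produces the total. The function and array names (pairForceLJ, neighbors, numNeighbors) are illustrative, not the paper's actual code.

// Lennard-Jones pair force on particle i from particle j (no cutoff shown for brevity).
__device__ float3 pairForceLJ(float3 ri, float3 rj, float eps, float sigma)
{
    float3 dr = make_float3(ri.x - rj.x, ri.y - rj.y, ri.z - rj.z);
    float r2  = dr.x * dr.x + dr.y * dr.y + dr.z * dr.z;
    float sr2 = sigma * sigma / r2;
    float sr6 = sr2 * sr2 * sr2;
    float f   = 24.f * eps * sr6 * (2.f * sr6 - 1.f) / r2;  // |F|/r
    return make_float3(f * dr.x, f * dr.y, f * dr.z);
}

// Block-scheme: one block per particle, blockDim.x assumed to be a power of two.
// Launch example: blockSchemeForces<<<N, 128, 128 * sizeof(float3)>>>(...);
__global__ void blockSchemeForces(const float3* pos, const int* neighbors,
                                  const int* numNeighbors, float3* force,
                                  int maxNeighbors, float eps, float sigma)
{
    extern __shared__ float3 partial[];           // one partial force per thread
    const int i = blockIdx.x;                     // particle handled by this block
    const float3 ri = pos[i];
    float3 f = make_float3(0.f, 0.f, 0.f);
    for (int k = threadIdx.x; k < numNeighbors[i]; k += blockDim.x) {
        const int j = neighbors[i * maxNeighbors + k];
        const float3 fij = pairForceLJ(ri, pos[j], eps, sigma);
        f.x += fij.x; f.y += fij.y; f.z += fij.z;
    }
    partial[threadIdx.x] = f;
    __syncthreads();
    // Shared-memory tree reduction of the per-thread partial forces.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            partial[threadIdx.x].x += partial[threadIdx.x + s].x;
            partial[threadIdx.x].y += partial[threadIdx.x + s].y;
            partial[threadIdx.x].z += partial[threadIdx.x + s].z;
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) force[i] = partial[0];
}

The design trade-off matches the abstract: the block-scheme keeps all threads busy even when there are few particles, which is why it wins for small systems and large cutoffs.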
Ji Xu, Ying Ren, Wei Ge (2010)
Molecular dynamics (MD) simulation is a powerful computational tool for studying the behavior of macromolecular systems, but many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications, and many MD algorithms suit its multithreaded nature. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative to macromolecular simulations on traditional CPU clusters, and they can also be used as a basis for developing parallel GPU programs to further speed up the computations.
D. Wysocki (2019)
Gravitational wave Bayesian parameter inference involves repeated comparisons of GW data to generic candidate predictions. Even with algorithmically efficient methods like RIFT or reduced-order quadrature, the time needed to perform these calculations and the overall computational cost can be significant compared to the minutes to hours needed to achieve the goals of low-latency multimessenger astronomy. By translating some elements of the RIFT algorithm to operate on graphics processing units (GPUs), we demonstrate substantial performance improvements, enabling dramatically reduced overall cost and latency.
The investigation of samples with a spatial resolution in the nanometer range relies on the precise and stable positioning of the sample. Due to inherent mechanical instabilities of typical sample stages in optical microscopes, it is usually required to control and/or monitor the sample position during the acquisition. The tracking of sparsely distributed fiducial markers at high speed allows stabilizing the sample position at millisecond time scales. For this purpose, we present a scalable fitting algorithm with significantly improved performance for two-dimensional Gaussian fits as compared to Gpufit.
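For reference, the model fitted to each fiducial marker in this kind of localization is typically a symmetric two-dimensional Gaussian with a constant offset; a minimal device-side sketch is shown below. The parameter ordering and function name are illustrative assumptions, not the Gpufit interface.

// I(x, y) = A * exp(-((x - x0)^2 + (y - y0)^2) / (2 * sigma^2)) + b
__device__ float gaussian2D(float x, float y, const float p[5])
{
    const float A = p[0], x0 = p[1], y0 = p[2], sigma = p[3], b = p[4];
    const float dx = x - x0, dy = y - y0;
    return A * expf(-(dx * dx + dy * dy) / (2.f * sigma * sigma)) + b;
}

Fitting the five parameters per marker yields a sub-pixel center estimate (x0, y0), and fitting many markers in parallel on the GPU is what makes millisecond-scale position stabilization feasible.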