
GPU Accelerated Atomistic Energy Barrier Calculations of Skyrmion Annihilations

Added by: Paul Heistracher
Publication date: 2018
Field: Physics
Language: English





We present GPU-accelerated simulations that calculate the annihilation energy of magnetic skyrmions with the simplified string method, using an atomistic spin model that includes dipole-dipole, exchange, uniaxial-anisotropy, and Dzyaloshinskii-Moriya interactions. The skyrmion annihilation energy is directly related to the skyrmion's thermal stability and is a key measure of the applicability of magnetic skyrmions to storage and logic devices. We investigate annihilations mediated by Bloch points as well as annihilations via boundaries for various interaction energies. Both processes show similar behaviour, with boundary annihilations resulting in slightly smaller energy barriers than Bloch point annihilations.
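The simplified string method used here finds a minimum-energy path between two metastable spin configurations by alternating two steps: each image along the path is relaxed a small amount downhill in energy, and the images are then redistributed to equal arc length along the path; the energy barrier is read off the highest image of the converged path. Below is a minimal sketch of this iteration for a toy one-dimensional spin chain with only exchange and uniaxial anisotropy (no dipole-dipole or Dzyaloshinskii-Moriya terms and no GPU acceleration); all parameters and the initial path are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the simplified string method on a toy 1D spin chain.
# Only exchange and uniaxial anisotropy are included; the coupling constants,
# chain length, number of images, and step size are illustrative assumptions.
import numpy as np

J, K = 1.0, 0.1                 # exchange and anisotropy constants (assumed)
N_SPINS, N_IMAGES = 20, 16      # spins per image, images along the string
DT, N_ITER = 0.05, 2000         # relaxation step size and iteration count

def energy(spins):
    """E = -J * sum_i s_i . s_{i+1} - K * sum_i (s_i . z)^2"""
    return -J * np.sum(spins[:-1] * spins[1:]) - K * np.sum(spins[:, 2] ** 2)

def energy_gradient(spins):
    """dE/ds_i = -J (s_{i-1} + s_{i+1}) - 2 K s_{i,z} z_hat"""
    grad = np.zeros_like(spins)
    grad[:-1] += -J * spins[1:]
    grad[1:]  += -J * spins[:-1]
    grad[:, 2] += -2.0 * K * spins[:, 2]
    return grad

def normalize(spins):
    return spins / np.linalg.norm(spins, axis=-1, keepdims=True)

def reparametrize(images):
    """Redistribute the images so they are equally spaced in arc length."""
    flat = images.reshape(len(images), -1)
    seg = np.linalg.norm(np.diff(flat, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]
    s_new = np.linspace(0.0, 1.0, len(images))
    interp = np.array([np.interp(s_new, s, flat[:, k]) for k in range(flat.shape[1])]).T
    return normalize(interp.reshape(images.shape))

# Initial string: a uniform rotation from the "all spins up" minimum
# to the "all spins down" minimum.
thetas = np.pi * np.linspace(0.0, 1.0, N_IMAGES)
images = np.array([np.tile([np.sin(t), 0.0, np.cos(t)], (N_SPINS, 1)) for t in thetas])

for _ in range(N_ITER):
    # step 1: relax every interior image a little downhill in energy
    for i in range(1, N_IMAGES - 1):
        images[i] = normalize(images[i] - DT * energy_gradient(images[i]))
    # step 2: reparametrize the string to equal arc length
    images = reparametrize(images)

profile = [energy(img) for img in images]
print("estimated energy barrier:", max(profile) - profile[0])
```

The equal-arc-length reparametrization is what keeps the images from sliding into the two minima, and it distinguishes the string method from nudged-elastic-band approaches, which hold the images apart with spring forces instead.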

Related Research

WarpX is a general-purpose electromagnetic particle-in-cell code that was originally designed to run on many-core CPU architectures. We describe the strategy followed to allow WarpX to use the GPU-accelerated nodes on OLCF's Summit supercomputer, a strategy we believe will extend to the upcoming machines Frontier and Aurora. We summarize the challenges encountered and lessons learned, and give current performance results on a series of relevant benchmark problems.
A fundamentally novel approach to solving few-particle (many-dimensional) quantum scattering problems is described. The approach is based on a complete discretization of the few-particle continuum and the use of massively parallel GPU computations of the integral kernels of the scattering equations. The discretization of the continuous spectrum of a few-particle Hamiltonian is realized by projecting all scattering operators and wave functions onto a stationary wave-packet basis. This projection replaces singular multidimensional integral equations with linear matrix equations having finite matrix elements. Different aspects of employing multithreaded GPU computing for fast calculation of the matrix kernel of the equations are studied in detail. As a result, a fully realistic three-body scattering problem above the break-up threshold is solved on an ordinary desktop PC with a GPU in a rather short computational time.
A modern graphics processing unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two-dimensional Ising model [T. Preis et al., J. Comp. Phys. 228, 4468 (2009)] in order to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we are able to accelerate simulations on a single GPU by factors of up to 35 compared to an optimized single Central Processing Unit (CPU) core implementation that also employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) on the CPU level, a single Ising lattice can be updated by a cluster of GPUs in parallel. For large systems, the computation time scales nearly linearly with the number of GPUs used. As a proof of concept we reproduce the critical temperature of the 2D Ising model using finite-size scaling techniques.
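For context, the checkerboard decomposition mentioned above is what makes the Metropolis update parallel: sites of one sub-lattice colour have all their nearest neighbours on the other colour, so an entire colour can be updated simultaneously without data races. The following is a minimal single-core NumPy sketch of that update, with assumed lattice size and temperature and without the multi-spin coding, CUDA, or MPI layers described in the abstract.

```python
# Minimal single-core NumPy sketch of the checkerboard Metropolis update for
# the 2D Ising model (J = k_B = 1).  Lattice size, temperature, and sweep
# count are assumed values.
import numpy as np

rng = np.random.default_rng(1)
L, T, N_SWEEPS = 64, 2.269, 200          # lattice size, temperature ~ T_c, sweeps
spins = rng.choice([-1, 1], size=(L, L))

# Checkerboard masks: every site of one colour has all four nearest neighbours
# on the other colour, so a whole colour can be updated at once (the property
# that maps the update onto massively parallel GPU threads).
ii, jj = np.indices((L, L))
colours = [(ii + jj) % 2 == c for c in (0, 1)]

def sweep(spins):
    for mask in colours:
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                          # cost of flipping each spin
        accept = rng.random((L, L)) < np.exp(-dE / T)  # Metropolis criterion
        spins = np.where(mask & accept, -spins, spins)
    return spins

for _ in range(N_SWEEPS):
    spins = sweep(spins)
print("magnetization per spin:", abs(spins.mean()))
```

On a GPU, each site of the active colour maps naturally onto one thread, and multi-spin coding additionally packs several spins into one machine word to reduce memory traffic.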
Mesoscopic simulations of hydrocarbon flow in source shales are challenging, in part due to the heterogeneous shale pores with sizes ranging from a few nanometers to a few micrometers. Additionally, the sub-continuum fluid-fluid and fluid-solid interactions in nano- to micro-scale shale pores, which are physically and chemically sophisticated, must be captured. To address these challenges, we present a GPU-accelerated package for the simulation of flow in nano- to micro-pore networks with a many-body dissipative particle dynamics (mDPD) mesoscale model. Based on a fully distributed parallel paradigm, the code offloads all intensive workloads onto GPUs. Other advancements, such as smart particle packing and a no-slip boundary condition in complex pore geometries, are also implemented for the construction and simulation of realistic shale pores from 3D nanometer-resolution stack images. Our code is validated for accuracy and compared against the CPU counterpart for speedup. In our benchmark tests, the code delivers nearly perfect strong and weak scaling (with up to 512 million particles) on up to 512 K20X GPUs on Oak Ridge National Laboratory's (ORNL) Titan supercomputer. Moreover, a single-GPU benchmark on ORNL's SummitDev and IBM's AC922 suggests that the host-to-device NVLink can boost performance over PCIe by a remarkable 40%. Lastly, we demonstrate, through a flow simulation in realistic shale pores, that the CPU counterpart requires 840 Power9 cores to rival the performance delivered by our package with four V100 GPUs on ORNL's Summit architecture. This simulation package enables quick-turnaround and high-throughput mesoscopic numerical simulations for investigating complex flow phenomena in nano- to micro-porous rocks with realistic pore geometries.
We report the direct measurement of the topological skyrmion energy barrier through hysteresis of the skyrmion lattice in the chiral magnet MnSi. Measurements were made using small-angle neutron scattering with a custom-built resistive coil to allow for high-precision minor hysteresis loops. The experimental data were analyzed using an adapted Preisach model to quantify the energy barrier for skyrmion formation and were corroborated by minimum-energy-path analysis based on atomistic spin simulations. We reveal that the skyrmion lattice in MnSi forms from the conical phase progressively in small domains, each consisting of hundreds of skyrmions, with an activation barrier of several eV.
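For context, the classical Preisach construction underlying the adapted model represents the magnetization as a superposition of bistable hysterons, each switching up at a field alpha and down at a field beta <= alpha; minor hysteresis loops then probe the distribution of switching fields from which the formation barrier can be extracted. A minimal sketch with an arbitrary, assumed hysteron distribution:

```python
# Minimal sketch of the classical Preisach construction: the magnetization is
# a sum over bistable hysterons, each switching up at field alpha and down at
# field beta <= alpha.  The hysteron distribution below is an arbitrary
# assumption, not the distribution fitted in the paper.
import numpy as np

rng = np.random.default_rng(0)
N_HYSTERONS = 5000
offsets = rng.normal(0.0, 1.0, N_HYSTERONS)               # interaction/offset fields
half_widths = np.abs(rng.normal(1.0, 0.3, N_HYSTERONS))   # coercive half-widths
alpha = offsets + half_widths                              # up-switching fields
beta = offsets - half_widths                               # down-switching fields
state = -np.ones(N_HYSTERONS)                              # start fully "down"

def apply_field(H):
    """Update every hysteron for an applied field H and return the magnetization."""
    state[H >= alpha] = 1.0
    state[H <= beta] = -1.0
    return state.mean()

# A major loop followed by a partial reversal, as in a minor-hysteresis-loop protocol.
fields = np.concatenate([np.linspace(-4, 4, 200),   # up branch of the major loop
                         np.linspace(4, -1, 100),   # partial reversal
                         np.linspace(-1, 4, 100)])  # minor-loop closure
magnetization = [apply_field(H) for H in fields]
print("final magnetization:", magnetization[-1])
```

Fitting measured minor loops to such a superposition is what allows the switching-field distribution, and from it the activation barrier, to be quantified.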
