
Direct $N$-body code on low-power embedded ARM GPUs

Posted by Dr. David Goz
Publication date: 2019
Research field: Physics
Paper language: English





This work arises in the context of the ExaNeSt project, which aims at the design and development of an exascale-ready supercomputer with a low energy-consumption profile, yet able to support the most demanding scientific and technical applications. The ExaNeSt compute unit consists of densely packed low-power 64-bit ARM processors embedded within Xilinx FPGA SoCs. SoC boards are heterogeneous architectures in which computing power is supplied by both CPUs and GPUs, and they are emerging as a possible low-power and low-cost alternative to clusters based on traditional CPUs. A state-of-the-art direct $N$-body code suitable for astrophysical simulations has been re-engineered to exploit SoC heterogeneous platforms based on ARM CPUs and embedded GPUs. Performance tests show that embedded GPUs can be effectively used to accelerate real-life scientific calculations, and that they are also promising because of their energy efficiency, which is a crucial design constraint for future exascale platforms.
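For illustration, the computational core that a direct $N$-body code offloads to the GPU is the $O(N^2)$ evaluation of the softened pairwise gravitational accelerations. The sketch below shows what a minimal kernel of this kind looks like; it is written in CUDA purely as an example (embedded ARM GPUs are typically programmed through OpenCL instead), it is not the authors' implementation, and the softening parameter eps2 and the float4 particle layout (x, y, z, mass) are assumptions made for the sketch.

// Minimal illustrative sketch, not the authors' code: O(N^2) direct-summation
// gravitational force kernel with Plummer softening.
#include <cuda_runtime.h>

__global__ void direct_forces(const float4 *pos, float4 *acc, int n, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float ax = 0.0f, ay = 0.0f, az = 0.0f;

    // Each thread accumulates the softened acceleration exerted on particle i
    // by all particles j; the j == i term contributes zero because the
    // displacement vanishes while eps2 keeps the denominator finite.
    for (int j = 0; j < n; ++j) {
        float dx = pos[j].x - pi.x;
        float dy = pos[j].y - pi.y;
        float dz = pos[j].z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;
        float inv_r = rsqrtf(r2);
        float f = pos[j].w * inv_r * inv_r * inv_r;   // m_j / (r^2 + eps^2)^(3/2)
        ax += f * dx;
        ay += f * dy;
        az += f * dz;
    }
    acc[i] = make_float4(ax, ay, az, 0.0f);
}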



Read also

Commercial graphics processors (GPUs) have high compute capacity at very low cost, which makes them attractive for general purpose scientific computing. In this paper we show how graphics processors can be used for N-body simulations to obtain improvements in performance over current generation CPUs. We have developed a highly optimized algorithm for performing the O(N^2) force calculations that constitute the major part of stellar and molecular dynamics simulations. In some of the calculations, we achieve sustained performance of nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the cost. Furthermore, the wide availability of GPUs has significant implications for cluster computing and distributed computing efforts like Folding@Home.
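For reference, the $O(N^2)$ force calculation referred to in this and the preceding abstracts is the direct summation of softened pairwise gravitational accelerations; this is the standard formulation, with the softening length $\epsilon$ a generic parameter rather than a value taken from either paper:

$$\vec{a}_i = G \sum_{\substack{j=1 \\ j \neq i}}^{N} \frac{m_j\,(\vec{r}_j - \vec{r}_i)}{\left(|\vec{r}_j - \vec{r}_i|^2 + \epsilon^2\right)^{3/2}}$$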
The aim of this work is to quantitatively evaluate the impact of computation on the energy consumption of ARM MPSoC platforms, exploiting CPUs, embedded GPUs and FPGAs. One of them possibly represents the future of High Performance Computing systems: a prototype of an Exascale supercomputer. Performance and energy measurements are made using a state-of-the-art direct $N$-body code from the astrophysical domain. We provide a comparison of the time-to-solution and energy delay product metrics for different software configurations. We have shown that FPGA technologies can be used for application kernel acceleration and are emerging as a promising alternative to traditional HPC technologies, which focus purely on peak performance rather than on power efficiency.
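For clarity, the energy delay product (EDP) mentioned above is the standard figure of merit that combines the measured energy-to-solution $E$ with the time-to-solution $T$; this is the general definition, not a formula quoted from the paper:

$$\mathrm{EDP} = E \times T$$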
The Code O-SUKI-N 3D is an upgraded version of the 2D Code O-SUKI (Comput. Phys. Commun. 240, 83 (2019)). Code O-SUKI-N 3D is an integrated 3-dimensional (3D) simulation program system for fuel implosion, ignition and burning of a direct-drive nuclear-fusion pellet in heavy ion beam (HIB) inertial confinement fusion (HIF). The Code O-SUKI-N 3D consists of three programs: a Lagrangian fluid implosion program, a data conversion program, and an Euler fluid implosion, ignition and burning program. The Code O-SUKI-N 3D can also couple with the HIB illumination and energy deposition program OK3 (Comput. Phys. Commun. 181, 1332 (2010)). The spherical target implosion 3D behavior is computed by the 3D Lagrangian fluid code until the time just before the void closure of the fuel implosion. After that, all the data from the Lagrangian implosion code are converted to the data for the 3D Eulerian code. In the 3D Euler code, the DT fuel compression at stagnation, ignition and burning are computed. The Code O-SUKI-N 3D simulation system provides a capability to compute and to study the HIF target implosion dynamics.
We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N~10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within less than 0.04% throughout all simulations. We analyze the performance of the code and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N=10^5, 128 for N=10^6 and 256 for N=10^7. The runtime reaches saturation with the addition of more processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60x, 100x, and 220x, respectively.
The numerical simulations of massive collisional stellar systems, such as globular clusters (GCs), are very time-consuming. Until now, only a few realistic million-body simulations of GCs with a small fraction of binaries (5%) have been performed by using the NBODY6++GPU code. Such models took half a year of computational time on a GPU-based supercomputer. In this work, we develop a new N-body code, PeTar, by combining the methods of Barnes-Hut tree, Hermite integrator and slow-down algorithmic regularization (SDAR). The code can accurately handle an arbitrary fraction of multiple systems (e.g. binaries, triples) while keeping a high performance by using hybrid parallelization methods with MPI, OpenMP, SIMD instructions and GPU. A few benchmarks indicate that PeTar and NBODY6++GPU have a very good agreement on the long-term evolution of the global structure, binary orbits and escapers. On a highly configured GPU desktop computer, the performance of a million-body simulation with all stars in binaries using PeTar is 11 times faster than that of NBODY6++GPU. Moreover, on the Cray XC50 supercomputer, PeTar scales well as the number of cores increases. The ten million-body problem, which covers the region of ultra-compact dwarfs and nuclear star clusters, becomes solvable.