
Magnus integrators on multicore CPUs and GPUs

Posted by Lukas Einkemmer
Publication date: 2017
Research language: English





In the present paper we consider numerical methods to solve the discrete Schrödinger equation with a time-dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. These two settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) and the recently developed commutator-free Magnus integrators, and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of 10 in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes, and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior, but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
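To make the time-stepping concrete, the sketch below shows the exponential-midpoint rule (second-order Magnus) and the two-exponential fourth-order commutator-free scheme for i psi' = H(t) psi. It is a minimal illustration under stated assumptions, not the paper's implementation: H is assumed to be a callable returning the (sparse or dense) Hamiltonian matrix at a given time, and SciPy's expm_multiply stands in for the Leja-interpolation action of the matrix exponential used by the authors; the coefficients are one common choice for the fourth-order commutator-free scheme.

import numpy as np
from scipy.sparse.linalg import expm_multiply

def magnus2_step(H, psi, t, tau):
    """Exponential midpoint rule (second-order Magnus) for i*psi' = H(t)*psi:
    psi <- exp(-1j*tau*H(t + tau/2)) psi."""
    return expm_multiply(-1j * tau * H(t + 0.5 * tau), psi)

def cf4_step(H, psi, t, tau):
    """Fourth-order commutator-free Magnus step: a product of two
    exponentials of weighted Hamiltonians at the Gauss nodes, so no
    commutator ever has to be formed."""
    c1, c2 = 0.5 - np.sqrt(3.0) / 6.0, 0.5 + np.sqrt(3.0) / 6.0  # Gauss nodes
    x1, x2 = 0.25 - np.sqrt(3.0) / 6.0, 0.25 + np.sqrt(3.0) / 6.0
    H1, H2 = H(t + c1 * tau), H(t + c2 * tau)
    psi = expm_multiply(-1j * tau * (x2 * H1 + x1 * H2), psi)  # applied first
    return expm_multiply(-1j * tau * (x1 * H1 + x2 * H2), psi)

Because each step touches H only through matrix-vector products, the same routines apply to sparse and dense matrices alike, which is exactly where the sparse/dense distinction above changes the hardware trade-offs.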




Read also

In addition to hardware wall-time restrictions commonly seen in high-performance computing systems, it is likely that future systems will also be constrained by energy budgets. In the present work, finite difference algorithms of varying computational and memory intensity are evaluated with respect to both energy efficiency and runtime on an Intel Ivy Bridge CPU node, an Intel Xeon Phi Knights Landing processor, and an NVIDIA Tesla K40c GPU. The conventional way of storing the discretised derivatives to global arrays for solution advancement is found to be inefficient in terms of energy consumption and runtime. In contrast, a class of algorithms in which the discretised derivatives are evaluated on-the-fly or stored as thread-/process-local variables (yielding high compute intensity) is optimal with respect to both energy consumption and runtime. On all three hardware architectures considered, a speed-up of ~2 and an energy saving of ~2 are observed for the compute-intensive algorithms compared to the memory-intensive algorithm. The energy consumption is found to be proportional to runtime, irrespective of the power consumed, and the GPU yields an energy saving of ~5 compared to the same algorithm on a CPU node.
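As a rough illustration of the two algorithm classes compared above, the plain-Python sketch below advances a 1D advection-like update once with the derivative stored to a full temporary array and once with the derivative evaluated on the fly in a local variable. The function names and the toy update are hypothetical, and the real kernels would be compiled C/CUDA code, but the difference in global memory traffic is the same.

def advance_stored(u, c, dx, dt):
    """Memory-intensive variant: the discretised derivative is first
    written to a full temporary array, then re-read to advance u."""
    n = len(u)
    dudx = [0.0] * n
    for i in range(1, n - 1):                        # sweep 1: store derivatives
        dudx[i] = (u[i + 1] - u[i - 1]) / (2.0 * dx)
    return [u[i] - c * dt * dudx[i] for i in range(n)]  # sweep 2: re-read them

def advance_fused(u, c, dx, dt):
    """Compute-intensive variant: each derivative lives only in a
    thread-local scalar and is consumed immediately."""
    out = list(u)
    for i in range(1, len(u) - 1):
        d = (u[i + 1] - u[i - 1]) / (2.0 * dx)       # evaluated on the fly
        out[i] = u[i] - c * dt * d
    return out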
Physics event generators are essential components of the data analysis software chain of high energy physics experiments, and important consumers of their CPU resources. Improving the software performance of these packages on modern hardware architectures, such as those deployed at HPC centers, is essential in view of the upcoming HL-LHC physics programme. In this paper, we describe an ongoing activity to reengineer the Madgraph5_aMC@NLO physics event generator, primarily to port it and allow its efficient execution on GPUs, but also to modernize it and optimize its performance on vector CPUs. We describe the motivation, engineering process and software architecture design of our developments, as well as the current challenges and future directions for this project. This paper is based on our submission to vCHEP2021 in March 2021, complemented with a few preliminary results that we presented during the conference. Further details and updated results will be given in later publications.
Lucas Beyer, AICES, 2013
In the context of genome-wide association studies (GWAS), one has to solve long sequences of generalized least-squares problems; such a task has two limiting factors: execution time (often in the range of days or weeks) and data management (data sets on the order of terabytes). We present an algorithm that obviates both issues. By pipelining the computation, and thanks to a sophisticated transfer strategy, we stream data from hard disk to main memory to GPUs and achieve sustained peak performance; with respect to a highly-optimized CPU implementation, our algorithm shows a speedup of 2.6x. Moreover, the approach lends itself to multiple GPUs and attains almost perfect scalability. When using 4 GPUs, we observe speedups of 9x over the aforementioned implementation, and 488x over a widespread biology library.
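The pipelining idea is essentially double buffering: loading the next data chunk is overlapped with the least-squares solves on the current one. A minimal sketch of that pattern, assuming hypothetical load_chunk and solve_chunk helpers for the disk I/O and the (GPU) computation:

from concurrent.futures import ThreadPoolExecutor

def pipelined_gls(load_chunk, solve_chunk, n_chunks):
    """Stream chunks disk -> memory -> device: while chunk k is being
    solved, chunk k+1 is already being loaded in a background thread."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_chunk, 0)              # prefetch first chunk
        for k in range(n_chunks):
            data = pending.result()                     # wait for chunk k
            if k + 1 < n_chunks:
                pending = io.submit(load_chunk, k + 1)  # overlap the next load
            results.append(solve_chunk(data))           # compute meanwhile
    return results

If loading and solving take comparable time, this hides almost all of the I/O behind the computation, which is what makes sustained peak performance possible despite terabyte-scale inputs.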
We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values, offering freedom in adjusting the convergence parameters, and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV, and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk moduli and lattice constants reveals an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.
Energy is now a first-class design constraint, along with performance, in all computing settings. Energy predictive modelling based on performance monitoring counters (PMCs) is the leading method for predicting the energy consumed during an application's execution. We use a model-theoretic approach to formulate the assumed properties of existing models in a mathematical form. We extend the formalism by adding properties, heretofore unconsidered, that account for a limited form of the energy conservation law. The extended formalism defines our theory of the energy of computing. By applying the basic practical implications of the theory, we improve the prediction accuracy of state-of-the-art energy models from 31% to 18%. We also demonstrate that use of state-of-the-art measurement tools for energy optimisation may lead to significant losses of energy (ranging from 56% to 65% for the applications used in our experiments) since they do not take into account the energy conservation properties.
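As a toy version of such a PMC-based model, the sketch below fits dynamic energy as a linear function of event counts with a zero intercept and non-negative weights, one simple way to encode a conservation-style constraint (no event may contribute negative energy). The counts and energies are synthetic numbers purely for illustration, not data from the paper.

import numpy as np
from scipy.optimize import nnls

# Rows: application runs; columns: PMC event counts (synthetic values).
X = np.array([[1.2e9, 3.4e8, 5.0e7],
              [2.3e9, 6.1e8, 9.2e7],
              [0.8e9, 2.0e8, 3.1e7]])
y = np.array([52.0, 101.0, 34.0])  # measured dynamic energy in joules

coef, _ = nnls(X, y)               # non-negative least squares, no intercept
print("energy per event type:", coef)
print("predicted energies:", X @ coef)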
