This lecture aims at providing a user's perspective on the main concepts used nowadays for the implementation of numerical algorithms on common computing architectures. In particular, the concepts and applications of Central Processing Units (CPUs), vectorisation, multithreading, hyperthreading and Graphics Processing Units (GPUs), as well as computer clusters and grid computing, will be discussed. A few examples of source code illustrating the usage of these technologies are provided.
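To give a concrete flavour of the vectorisation concept mentioned above, the following minimal Python/NumPy sketch (illustrative only, not one of the lecture's own examples; the array size is an arbitrary choice) contrasts an explicit element-by-element loop with a vectorised array operation that compiled code can map onto the CPU's SIMD units:

```python
# Minimal sketch (not from the lecture): scalar loop vs. vectorised operation.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Explicit loop: one element per iteration, executed by the interpreter.
t0 = time.perf_counter()
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = a[i] + b[i]
t_loop = time.perf_counter() - t0

# Vectorised form: NumPy hands the whole array to compiled code,
# which the CPU can execute with SIMD (vector) instructions.
t0 = time.perf_counter()
c_vec = a + b
t_vec = time.perf_counter() - t0

assert np.allclose(c_loop, c_vec)
print(f"loop: {t_loop:.3f} s, vectorised: {t_vec:.5f} s")
```

On typical hardware the vectorised line runs orders of magnitude faster than the interpreted loop, which is the effect the vectorisation discussion is about.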
Applications that exploit the architectural details of high-performance computing (HPC) systems have become increasingly invaluable in academia and industry over the past two decades. The most important hardware development of the last decade in HPC has been the General Purpose Graphics Processing Unit (GPGPU), a class of massively parallel devices that now contributes the majority of computational power in the top 500 supercomputers. As these systems grow, small costs such as latency, due to the fixed cost of memory accesses and communication, accumulate in a large simulation and become a significant barrier to performance. The swept time-space decomposition rule is a communication-avoiding technique for time-stepping stencil update formulas that attempts to reduce latency costs. This work extends the swept rule by targeting heterogeneous CPU/GPU architectures representing current and future HPC systems. We compare our approach to a naive decomposition scheme with two test equations using an MPI+CUDA pattern on 40 processes over two nodes containing one GPU. The swept rule produces speedup factors of 1.9 to 23 for the heat equation and 1.1 to 2.0 for the Euler equations, using the same processors and work distribution, and with the best possible configurations. These results show the potential effectiveness of the swept rule for different equations and numerical schemes on massively parallel computing systems that incur substantial latency costs.
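For orientation, the sketch below shows the kind of stencil update the swept rule targets: a minimal serial 1-D heat-equation time step in Python/NumPy (an assumed illustration, not the paper's MPI+CUDA implementation; grid size and diffusion number are arbitrary). In a domain-decomposed run, every such step would normally require a halo exchange between ranks, which is exactly the per-step latency cost the swept rule tries to avoid:

```python
# Minimal sketch of the naive baseline: an explicit 1-D heat-equation
# stencil update. Serial, single-rank version; parameters are illustrative,
# not the paper's test configuration.
import numpy as np

nx, nt = 100, 500
alpha = 0.25            # diffusion number alpha = k*dt/dx^2 <= 0.5 for stability

u = np.zeros(nx)
u[nx // 2] = 1.0        # initial heat spike in the middle

for _ in range(nt):
    # Three-point stencil: each interior point reads its two neighbours.
    # Under MPI domain decomposition, the subdomain edge values are the
    # halo data that must be communicated at every step.
    u[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u.max())
```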
A multitude of scientific computing tools is available today. This article gives an overview of the available tools and explains their main fields of application. In addition, the basic principles of number representation in computing and the resulting truncation errors are treated. The selection of tools is aimed at students working in the field of accelerator beam dynamics.
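As a minimal illustration of the truncation errors arising from finite number representations (a generic example, not taken from the article), the following Python snippet shows both the inexact binary representation of simple decimal fractions and catastrophic cancellation when subtracting nearly equal numbers:

```python
# Minimal sketch of floating-point truncation effects. IEEE-754 double
# precision cannot represent 0.1 exactly, and subtracting nearly equal
# numbers destroys significant digits.
import numpy as np

print(0.1 + 0.2 == 0.3)            # False: both sides carry representation error
print(f"{0.1 + 0.2:.17f}")         # 0.30000000000000004

# Catastrophic cancellation: (1 - cos x)/x^2 -> 0.5 analytically as x -> 0,
# but the naive form loses all digits for small x.
x = 1e-8
naive = (1.0 - np.cos(x)) / x**2          # rounding noise dominates (prints 0.0)
stable = 2.0 * np.sin(x / 2.0)**2 / x**2  # algebraically equivalent, stable form
print(naive, stable)
```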
The nonlinear space-charge effects play an important role in high-intensity/high-brightness accelerators. These effects can be studied self-consistently using multi-particle simulations. In this lecture, we will discuss the particle-in-cell method and the symplectic tracking model for self-consistent multi-particle simulations.
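As a rough sketch of one building block of the particle-in-cell method (an assumed illustration, not the lecture's code), the following Python/NumPy snippet deposits macro-particle charge onto a 1-D grid with linear, cloud-in-cell weighting; grid size and particle count are arbitrary:

```python
# Minimal sketch of PIC charge deposition: each macro-particle shares its
# charge between the two nearest grid points (cloud-in-cell weighting).
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_part = 64, 10_000
dx = 1.0 / n_grid
x = rng.random(n_part)            # particle positions in [0, 1)
q = 1.0 / n_part                  # equal macro-particle charge

rho = np.zeros(n_grid)
cell = np.floor(x / dx).astype(int)
w = x / dx - cell                 # fractional distance within the cell

# Deposit with periodic wrap-around; np.add.at handles repeated indices.
np.add.at(rho, cell % n_grid, q * (1.0 - w) / dx)
np.add.at(rho, (cell + 1) % n_grid, q * w / dx)

print(rho.sum() * dx)             # total charge is conserved (= 1.0)
```

In a full PIC cycle this density would feed a Poisson solve for the self-field, which is then interpolated back to the particles to advance them.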
Xgrid is the first distributed computing architecture built into a desktop operating system. It allows you to run a single job across multiple computers at once. All you need is at least one Macintosh computer running Mac OS X v10.4 or later. (Mac OS X Server is not required.) We provide explicit instructions and example code to get you started, including examples of how to distribute your computing jobs, even if your initial cluster consists of just two old laptops in your basement.
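A hedged sketch of how such a job submission might be driven from a script is shown below, wrapping the xgrid command-line tool from Python. The controller hostname, password, and the exact form of the tool's output are assumptions for illustration; consult man xgrid (or the article's own examples) for the authoritative usage:

```python
# Hedged sketch: submitting a job to an Xgrid controller by wrapping the
# `xgrid` CLI. Hostname, password, and the output format are assumptions.
import re
import subprocess

controller = "controller.example.com"   # hypothetical controller hostname
password = "secret"                     # hypothetical client password

# Submit a command; xgrid prints a property list containing the job ID.
out = subprocess.run(
    ["xgrid", "-h", controller, "-p", password,
     "-job", "submit", "/usr/bin/cal", "2024"],
    capture_output=True, text=True, check=True,
).stdout

match = re.search(r"jobIdentifier\s*=\s*(\d+)", out)   # assumed output format
job_id = match.group(1)

# Later, fetch the job's stdout/stderr once it has finished.
subprocess.run(
    ["xgrid", "-h", controller, "-p", password,
     "-job", "results", "-id", job_id],
    check=True,
)
```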
One of the major challenges in using distributed learning to train complicated models with large data sets is dealing with the straggler effect. As a solution, coded computation has recently been proposed to efficiently add redundancy to the computation tasks. In this technique, coding is applied across data sets, and computation is done over the coded data, such that the results of an arbitrary subset of worker nodes of a certain size are enough to recover the final results. The major challenges with these approaches are that (1) they are limited to polynomial function computations, (2) the size of the subset of servers that must be waited for grows with the product of the data set size and the model complexity (the degree of the polynomial), which can be prohibitively large, and (3) they are not numerically stable for computation over real numbers. In this paper, we propose Berrut Approximated Coded Computing (BACC) as an alternative approach that is not limited to polynomial function computation. In addition, the master node can approximately calculate the final results using the outcomes of any arbitrary subset of available worker nodes. The approximation approach is proven to be numerically stable with low computational complexity. Furthermore, the accuracy of the approximation is established theoretically and verified by simulation results in different settings, such as distributed learning problems. In particular, BACC is used to train a deep neural network on a cluster of servers, where it outperforms repetitive computation (repetition coding) in terms of the rate of convergence.
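To illustrate the numerical core that the name suggests (a sketch under our own assumptions, not the paper's code), the snippet below implements Berrut's barycentric rational interpolant in Python and uses it to approximate a function from the results of an arbitrary surviving subset of worker nodes placed at Chebyshev points; the function f and all sizes are illustrative:

```python
# Hedged sketch of Berrut rational interpolation, the decoding idea behind
# BACC: the master approximates the desired values from whichever workers
# respond. The function f and all sizes are illustrative.
import numpy as np

def berrut_interpolate(x_nodes, f_nodes, x_eval):
    """Berrut's first rational interpolant with alternating weights (-1)^i."""
    w = (-1.0) ** np.arange(len(x_nodes))
    diff = x_eval[:, None] - x_nodes[None, :]
    terms = w / diff                       # shape: (eval points) x (nodes)
    return (terms @ f_nodes) / terms.sum(axis=1)

# Chebyshev points keep the interpolant well-conditioned (numerically stable).
n = 16
nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
f = lambda x: np.tanh(4.0 * x)             # stand-in for a worker computation

# Suppose only 12 of 16 workers respond (stragglers dropped).
alive = np.sort(np.random.default_rng(1).choice(n, size=12, replace=False))
x_eval = np.linspace(-0.9, 0.9, 5)
approx = berrut_interpolate(nodes[alive], f(nodes[alive]), x_eval)
print(np.max(np.abs(approx - f(x_eval))))  # maximum approximation error
```

Because the interpolant is rational with fixed alternating weights, it exists for any subset of nodes and avoids the ill-conditioning of high-degree polynomial decoding, which is the property the approach exploits.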