A large number of scientific computing tools is available today. This article gives an overview of the available tools and explains their main fields of application. In addition, the basic principles of number representation in computing and the resulting truncation errors are treated. The selection of tools is aimed at students who work in the field of accelerator beam dynamics.
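As a brief, self-contained illustration of the representation and truncation effects mentioned above, the following Python snippet shows two standard consequences of finite (IEEE-754 double) precision; it is ours and not taken from any of the surveyed tools.

```python
import math

# (a) Decimal fractions are generally not exactly representable in binary.
a = 0.1 + 0.2
print(a == 0.3)            # False: neither 0.1 nor 0.2 is stored exactly
print(f"{a:.17g}")         # 0.30000000000000004

# (b) Cancellation: subtracting nearly equal numbers destroys significant digits.
x = 1e-8
naive  = (1.0 - math.cos(x)) / x**2         # cos(x) rounds to exactly 1.0 -> result 0.0
stable = 2.0 * math.sin(0.5 * x)**2 / x**2  # algebraically identical, evaluates to ~0.5
print(naive, stable)
```

In beam-dynamics codes, which track particles over many turns, such rounding and cancellation effects accumulate, which is why the choice of number representation matters.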
Turning the current experimental state of the art in plasma accelerators from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the code, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. The code structure, status, early examples of applications, and plans are discussed.
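To make the particle-in-cell idea behind PICSAR/WarpX concrete, the sketch below shows a minimal one-dimensional electrostatic PIC loop in Python. It is purely illustrative: WarpX is an electromagnetic, mesh-refined, massively parallel code, and none of the names, units, or parameters below come from its sources.

```python
import numpy as np

# Minimal 1D electrostatic PIC loop (normalized units, periodic domain).
L, nx, n_part, dt = 1.0, 64, 10_000, 0.05
dx = L / nx
q, m = -1.0, 1.0                          # macro-particle charge and mass

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, n_part)           # particle positions
v = rng.normal(0.0, 0.01, n_part)         # particle velocities

for step in range(200):
    # 1) deposit charge on the grid (nearest-grid-point weighting)
    idx = (x / dx).astype(int) % nx
    rho = q * np.bincount(idx, minlength=nx) / dx
    rho -= rho.mean()                     # neutralizing ion background

    # 2) field solve: dE/dx = rho with periodic boundaries, via FFT
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    k[0] = 1.0                            # avoid 0/0; the mean mode is zeroed below
    E_hat = -1j * np.fft.fft(rho) / k
    E_hat[0] = 0.0
    E = np.fft.ifft(E_hat).real

    # 3) gather E at the particles and push them (leapfrog)
    v += (q / m) * E[idx] * dt
    x = (x + v * dt) % L
```

Production codes replace each of these three steps with higher-order shape functions, electromagnetic field solvers, and domain-decomposed, adaptively refined grids, which is where libraries such as PICSAR and AMReX enter.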
The increasing interest in the phenomenology of the Standard Model Effective Field Theory (SMEFT) has led to the development of a wide spectrum of public codes which automatically implement different aspects of the SMEFT for phenomenological applications. In order to discuss the present and future of such efforts, the SMEFT-Tools 2019 Workshop was held at the IPPP Durham on 12-14 June 2019. Here we collect and summarize the contents of this workshop.
The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multidisciplinary networks of collaborating researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies the attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, the most advanced algorithms, and leadership-class computational resources.
In a region free of currents, magnetostatics can be described by the Laplace equation for a scalar magnetic potential, and one can apply the same methods commonly used in electrostatics. Here we show how to calculate the full vector field inside a real (finite) solenoid using only the magnitude of the field along the symmetry axis. Our method does not require integration or knowledge of the current distribution, and is presented through practical examples, including a non-uniform finite solenoid used to produce cold atomic beams via laser cooling. These examples allow educators to discuss the non-trivial calculation of off-axis fields using concepts familiar to most students, while offering the opportunity to introduce important advances in current research.
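In our notation (which need not match the paper's), write the on-axis field magnitude as $B_0(z)$. A standard construction consistent with the description above obtains the off-axis components from the series solution of Laplace's equation for an axially symmetric, current-free field:

$$B_z(r,z) \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n}{(n!)^2}\left(\frac{r}{2}\right)^{2n} B_0^{(2n)}(z) \;\approx\; B_0(z) - \frac{r^2}{4}\,B_0''(z),$$

$$B_r(r,z) \;=\; \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{n!\,(n+1)!}\left(\frac{r}{2}\right)^{2n+1} B_0^{(2n+1)}(z) \;\approx\; -\frac{r}{2}\,B_0'(z) + \frac{r^3}{16}\,B_0'''(z),$$

where the coefficients follow from imposing $\nabla\cdot\mathbf{B}=0$ and $\nabla\times\mathbf{B}=0$, and the first one or two terms are typically sufficient near the axis.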
The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been growing interest in utilizing alternative architectures, such as low-power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been used successfully in mobile applications with low I/O and memory demands, it is unclear whether they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compared their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption for scientific workloads.
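As a sketch of what a software-based measurement can look like on the Intel side, the snippet below reads the Linux powercap (RAPL) energy counter before and after a workload. The sysfs path, helper names, and toy workload are our illustrative assumptions and are not the tools described in the paper; ARM platforms generally lack an equivalent counter, which is one reason physical power meters are also needed.

```python
import time

# Package-domain RAPL energy counter exposed by the Linux powercap framework
# on many Intel systems (path may differ per machine; counter is in microjoules).
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def measure(workload, *args):
    e0, t0 = read_energy_uj(), time.time()
    result = workload(*args)
    e1, t1 = read_energy_uj(), time.time()
    joules = (e1 - e0) / 1e6              # counter wrap-around is ignored in this sketch
    print(f"energy: {joules:.2f} J over {t1 - t0:.2f} s "
          f"(avg power {joules / (t1 - t0):.1f} W)")
    return result

# Example: a CPU-bound loop standing in for a HEP kernel.
measure(lambda n: sum(i * i for i in range(n)), 10_000_000)
```

Comparable readings on ARM, or whole-node figures on either architecture, require hardware instrumentation, which is why both physical and software-based measurements are combined in this work.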