
Using TOP-C for Commodity Parallel Computing in Cosmic Ray Physics Simulations

Posted by: Luis Anchordoqui
Publication date: 2000
Research field: Physics
Paper language: English





TOP-C (Task Oriented Parallel C) is a freely available package for parallel computing. It is designed to be easy to learn and to tolerate the high latencies that are common in commodity networks of computers. It has been used successfully in a wide range of applications, providing linear speedup with the number of computers. A brief overview of TOP-C is provided, along with recent experience with cosmic ray physics simulations.
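To make the programming model concrete, the sketch below shows the master/worker callback structure that TOP-C is built around. The TOPC_* identifiers follow the published TOP-C API as best recalled here; treat the exact names and signatures as assumptions to be checked against the distribution's manual.

    /* Minimal TOP-C-style application sketch: the master generates task
     * inputs, workers run do_task() in parallel, and the master inspects
     * results. API names (TOPC_init, TOPC_master_slave, TOPC_MSG, NOTASK,
     * NO_ACTION) follow the TOP-C manual from memory; verify before use. */
    #include <stdio.h>
    #include "topc.h"

    #define NUM_TASKS 100          /* e.g. number of showers to simulate */
    static int next_task = 0;      /* master-side task counter */
    static double total = 0.0;     /* master-side accumulated result */

    /* Master callback: produce the next task input, or NOTASK when done. */
    static TOPC_BUF generate_task_input(void) {
      static int task_id;
      if (next_task >= NUM_TASKS) return NOTASK;
      task_id = next_task++;
      return TOPC_MSG(&task_id, sizeof(task_id));
    }

    /* Worker callback: runs in parallel on each slave process. */
    static TOPC_BUF do_task(void *input) {
      int id = *(int *)input;
      static double result;
      result = 1.0 * id;           /* stand-in for an expensive simulation */
      return TOPC_MSG(&result, sizeof(result));
    }

    /* Master callback: inspect each result as it arrives. */
    static TOPC_ACTION check_task_result(void *input, void *output) {
      total += *(double *)output;
      return NO_ACTION;            /* no shared-data update needed here */
    }

    int main(int argc, char **argv) {
      TOPC_init(&argc, &argv);
      TOPC_master_slave(generate_task_input, do_task, check_task_result, NULL);
      if (TOPC_is_master()) printf("total = %f\n", total);
      TOPC_finalize();
      return 0;
    }

Because the master only ships small task descriptions and receives results asynchronously, this structure is naturally tolerant of the high network latencies mentioned above.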


Read also

We present an evaluation of a simulated cosmic ray shower, based on GEANT4 and TOP-C, which tracks all the particles in the shower. TOP-C (Task Oriented Parallel C) provides a framework for parallel algorithm development which makes tractable the problem of following each particle. This method is compared with a simulation program which employs the Hillas thinning algorithm.
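For context, the Hillas thinning algorithm mentioned above keeps only a statistical sample of low-energy secondaries: below a thinning energy, a secondary is followed with probability proportional to its energy and carries a compensating weight. The sketch below is a generic illustration of that idea, not code from either simulation; the names (thin_particle, E_th) are hypothetical.

    #include <stdlib.h>

    /* Generic Hillas-thinning sketch: below the thinning energy E_th, keep
     * a secondary of energy E with probability p = E / E_th and multiply
     * its statistical weight by 1/p so ensemble averages are preserved. */
    typedef struct {
      double energy;   /* particle energy */
      double weight;   /* statistical weight */
      int    alive;    /* 0 if discarded by thinning */
    } particle;

    static double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

    void thin_particle(particle *p, double E_th) {
      if (p->energy >= E_th) return;        /* above threshold: always follow */
      double keep_prob = p->energy / E_th;  /* survival probability */
      if (uniform01() < keep_prob)
        p->weight /= keep_prob;             /* compensate for dropped siblings */
      else
        p->alive = 0;                       /* discard this particle */
    }

    int main(void) {
      particle p = { 0.5, 1.0, 1 };  /* energy 0.5 (arbitrary units) */
      thin_particle(&p, 10.0);       /* if kept, weight becomes 20 */
      return p.alive;
    }

Tracking every particle, as the GEANT4/TOP-C approach does, avoids the artificial weight fluctuations this sampling introduces, at the cost of far more computation, which is what the parallel framework makes tractable.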
The search for the origin of cosmic rays is as active as ever, driven mainly by new insights from recent observations. Much effort is being channelled into putting the so-called supernova paradigm for the origin of galactic cosmic rays on firmer ground, while at the highest energies we are trying to understand the observed cosmic ray spectra and mass composition and to relate them to potential sources of extragalactic cosmic rays. Interestingly, a topic that has become a subject in its own right is the investigation of the transition region between the galactic and extragalactic components, once associated with the ankle and now increasingly thought to take place at somewhat lower energies. Here we summarize recent developments in the observation and understanding of galactic and extragalactic cosmic rays and discuss the implications of these findings for the modelling of the transition between the two.
Real-time data processing is one of the central tasks of particle physics experiments, which require large computing resources. The LHCb (Large Hadron Collider beauty) experiment will be upgraded to cope with a particle bunch collision rate of 30 million times per second, producing $10^9$ particles/s. 40 Tbit/s need to be processed in real time to make the filtering decisions that determine which data to store. This poses a computing challenge that requires exploration of modern hardware and software solutions. We present Compass, a particle tracking algorithm and parallel raw input decoding optimised for GPUs. It is designed for highly parallel architectures and is data-oriented and optimised for fast, localised data access. Our algorithm is configurable, and we explore the trade-off in computing and physics performance of various configurations. A CPU implementation that delivers the same physics performance as our GPU implementation is presented. We discuss the achieved physics performance and validate it with Monte Carlo simulated data. We show a computing performance analysis comparing consumer- and server-grade GPUs and a CPU. We show the feasibility of using full GPU decoding and particle tracking for high-throughput particle trajectory reconstruction, where our algorithm improves throughput by up to 7.4$\times$ compared to the LHCb baseline.
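As an aside, "data-oriented and optimised for fast and localised data access" typically points to a structure-of-arrays layout, so that parallel threads reading the same field touch contiguous memory. The fragment below is a generic illustration of that layout choice in C, not code from Compass; all names are hypothetical.

    #include <stddef.h>

    /* Array-of-structures: the fields of one hit are adjacent, so threads
     * that each read only x make strided, cache-unfriendly accesses. */
    struct hit_aos { float x, y, z; int sensor_id; };

    /* Structure-of-arrays: each field is contiguous, so thread i reading
     * x[i] sits next to thread i+1 reading x[i+1] -- coalesced access on
     * GPUs and good cache behaviour on CPUs. Names are illustrative. */
    struct hits_soa {
      float *x;
      float *y;
      float *z;
      int   *sensor_id;
      size_t n;        /* number of hits */
    };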
We review some of the recent progress in our knowledge about high-energy cosmic rays, with an emphasis on the interpretation of the different observational results. We discuss the effects that are relevant to shaping the cosmic ray spectrum and the explanations proposed to account for its features and for the observed changes in composition. The physics of air showers is summarized, and we also present the results obtained on the proton-air cross section and on the muon content of the showers. We discuss cosmic ray propagation through magnetic fields, the effects of diffusion and of magnetic lensing, cosmic ray interactions with background radiation fields, and the production of secondary neutrinos and photons. We also consider cosmic ray anisotropies, both at large and small angular scales, presenting the results obtained from the TeV range up to the highest energies, and discuss the models proposed to explain their origin.
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language (OpenCL) framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable, vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units (CPUs) and GPUs.
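Device-level load balancing of the kind mentioned above often amounts to a proportional split: each device receives a share of the photon count proportional to its measured throughput, so all devices finish at roughly the same time. The sketch below illustrates that idea in C; it is a generic illustration, not the platform's actual strategy, and all names are hypothetical.

    #include <stdio.h>

    /* Split total_photons across devices in proportion to each device's
     * measured throughput (photons/s), so all devices finish together.
     * A generic sketch; names (throughput, share) are illustrative. */
    void balance_load(const double *throughput, long *share,
                      int n_devices, long total_photons) {
      double total_rate = 0.0;
      long assigned = 0;
      for (int i = 0; i < n_devices; i++) total_rate += throughput[i];
      for (int i = 0; i < n_devices; i++) {
        share[i] = (long)(total_photons * (throughput[i] / total_rate));
        assigned += share[i];
      }
      share[0] += total_photons - assigned;  /* rounding remainder */
    }

    int main(void) {
      double rate[3] = { 9e6, 1e6, 2e6 };  /* e.g. GPU, CPU, integrated GPU */
      long share[3];
      balance_load(rate, share, 3, 100000000L);
      for (int i = 0; i < 3; i++)
        printf("device %d: %ld photons\n", i, share[i]);
      return 0;
    }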