
Efficient Discrete-Event Based Particle Tracking Simulation for High Energy Physics

Posted by Lucio Santi
Publication date: 2020
Research field: Physics
Paper language: English

This work presents novel discrete event-based simulation algorithms built on the Quantized State System (QSS) family of numerical methods. QSS offers attractive features for particle transportation processes, in particular a very efficient handling of discontinuities in the simulation of continuous systems. We focus on High Energy Physics (HEP) particle tracking applications, which typically rely on discrete time-based methods, and study the advantages of adopting a discrete event-based numerical approach that efficiently resolves the crossing of geometry boundaries by a traveling particle. For this purpose we follow two complementary strategies. First, a new co-simulation technique connects the Geant4 simulation toolkit with a standalone QSS solver. Second, a new native QSS numerical stepper is embedded into Geant4. We compare both approaches against the latest Geant4 default steppers in different HEP setups, including a complex real scenario (the CMS particle detector at CERN). Our techniques achieve relevant simulation speedups in a wide range of scenarios, particularly when discrete-event handling, rather than the solving of the continuous laws of particle motion, dominates performance.
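To make the boundary-crossing argument concrete, here is a minimal, self-contained QSS1 sketch in Python. It illustrates the general QSS idea only; it is not the paper's Geant4 stepper or co-simulation code, and the function names and toy field are assumptions.

```python
def qss1_track(f, x0, quantum, x_boundary, t_end):
    """Event-driven QSS1 integration of dx/dt = f(q) in one dimension."""
    t, x = 0.0, x0
    while t < t_end:
        q = x                        # requantize: q is held piecewise constant
        dx = f(q)                    # slope, constant until the next event
        if dx == 0.0:
            break                    # equilibrium: no further events
        dt_quantum = quantum / abs(dx)        # time to drift one quantum
        dt_boundary = (x_boundary - x) / dx   # time to hit the boundary
        if 0.0 < dt_boundary <= dt_quantum:
            # the crossing is resolved exactly on the local linear segment,
            # with no step rejection or iteration
            return t + dt_boundary, x_boundary, "boundary"
        t += dt_quantum
        x += dx * dt_quantum
    return t, x, "end"

# toy field (assumed): the particle decelerates as it approaches x = 1
t_hit, x_hit, why = qss1_track(lambda q: 1.0 - 0.5 * q,
                               x0=0.0, quantum=0.01,
                               x_boundary=0.9, t_end=10.0)
print(why, t_hit, x_hit)
```

Because each QSS segment is a local polynomial of the state, the crossing time falls out of a cheap root-find on that polynomial (here a linear solve), which is the source of the efficiency the abstract claims over time-stepping integrators that must reject and shrink steps near a boundary.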


Read also

Monte Carlo event generators (MCEGs) are the indispensable workhorses of particle physics, bridging the gap between theoretical ideas and first-principles calculations on the one hand, and the complex detector signatures and data of the experimental community on the other hand. All collider physics experiments are dependent on simulated events produced by MCEG codes such as Herwig, Pythia, Sherpa, POWHEG, and MG5_aMC@NLO to design and tune their detectors and analysis strategies. The development of MCEGs is overwhelmingly driven by a vibrant community of academics at European universities, who also train the next generations of particle phenomenologists. The new challenges posed by possible future collider-based experiments, and the fact that the first analyses at Run II of the LHC are now frequently limited by theory uncertainties, urge the community to invest in further theoretical and technical improvements of these essential tools. In this short contribution to the European Strategy Update, we briefly review the state of the art, and the further developments that will be needed to meet the challenges of the next generation.
Tsunehiko N. Kato (2013)
When a charged particle moves through a plasma at a speed much higher than the thermal velocity of the plasma, it is subject to the force of the electrostatic field it induces in the plasma and loses energy. This process is well known as the stopping power of a plasma. In this paper we show that the same process operates in particle-in-cell (PIC) simulations as well, and that the energy loss rate of fast particles due to this process is mainly determined by the number of plasma electrons contained in the electron skin depth volume. However, since PIC simulations generally place very few particles in that volume compared with real plasmas, the energy loss effect can be exaggerated significantly and can affect the results. Therefore, especially for simulations that investigate particle acceleration processes, the number of particles should be chosen large enough to avoid this artificial energy loss.
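A back-of-the-envelope comparison makes the point. The sketch below counts electrons per skin-depth volume in a real plasma versus macro-particles per skin-depth volume in a PIC run; all numbers are assumed, typical values, not taken from the paper.

```python
# Compare N(electrons) and N(macro-particles) in one skin-depth volume;
# their ratio controls how strongly the artificial stopping power is
# exaggerated relative to the physical one.
import math

# real plasma (assumed): electron density n_e in m^-3
n_e = 1e20
eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency [rad/s]
d_e = c / omega_pe                                # electron skin depth [m]
N_real = n_e * d_e**3                             # electrons per d_e^3

# PIC run (assumed setup): 10 cells per skin depth, 64 particles per cell
cells_per_de, ppc = 10, 64
N_pic = ppc * cells_per_de**3                     # macro-particles per d_e^3

print(f"skin depth              : {d_e:.3e} m")
print(f"electrons per d_e^3     : {N_real:.3e}")
print(f"macro-particles per d_e^3: {N_pic:.3e}")
```

With these assumed numbers the real plasma holds about 1e10 electrons per skin-depth volume while the PIC run holds only 64,000 macro-particles, which is exactly the disparity the abstract warns about.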
Our predictions for particle physics processes are realized in a chain of complex simulators. They allow us to generate high-fidelity simulated data, but they are not well-suited for inference on the theory parameters with observed data. We explain why the likelihood function of high-dimensional LHC data cannot be explicitly evaluated, why this matters for data analysis, and reframe what the field has traditionally done to circumvent this problem. We then review new simulation-based inference methods that let us directly analyze high-dimensional data by combining machine learning techniques and information from the simulator. Initial studies indicate that these techniques have the potential to substantially improve the precision of LHC measurements. Finally, we discuss probabilistic programming, an emerging paradigm that lets us extend inference to the latent process of the simulator.
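The core trick behind such methods can be shown in a few lines. The sketch below is a hypothetical 1-D toy (real applications use high-dimensional detector data and neural networks): train a classifier between samples simulated at two parameter points, then convert its output into a likelihood-ratio estimate.

```python
# Classifier-based likelihood-ratio trick: for balanced classes,
# s(x) = p(x|theta1) / (p(x|theta0) + p(x|theta1)), so s/(1-s) estimates
# the intractable ratio p(x|theta1)/p(x|theta0).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Toy stand-in for the full simulation chain (assumed Gaussian)."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta0, theta1, n = 0.0, 1.0, 50_000
x = np.vstack([simulate(theta0, n), simulate(theta1, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])     # label = which theta

clf = LogisticRegression().fit(x, y)
s = clf.predict_proba(np.array([[0.5]]))[0, 1]    # P(theta1 | x = 0.5)
r_hat = s / (1.0 - s)                             # estimated ratio at x = 0.5

# exact ratio for the Gaussian toy, for comparison (equals 1.0 at x = 0.5)
x_test = 0.5
r_true = np.exp(-(x_test - theta1)**2 / 2) / np.exp(-(x_test - theta0)**2 / 2)
print(f"estimated ratio {r_hat:.3f} vs exact {r_true:.3f}")
```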
We present MadFlow, a first general multi-purpose framework for Monte Carlo (MC) event simulation of particle physics processes designed to take full advantage of hardware accelerators, in particular graphics processing units (GPUs). Automating the generation of all required components for MC simulation of a generic physics process, and deploying them on hardware accelerators, remains a major challenge. To address it, we design a workflow and code library that lets the user simulate custom processes through the MadGraph5_aMC@NLO framework, together with a plugin for generating and exporting specialized code in a GPU-friendly format. The exported code includes analytic expressions for matrix elements and phase space. The simulation is performed using the VegasFlow and PDFFlow libraries, which automatically deploy the full simulation on systems with different hardware acceleration capabilities, such as multi-threaded CPU, single-GPU, and multi-GPU setups. The package also provides an asynchronous procedure to store unweighted events. Crucially, although only leading order is automated, the library provides all ingredients necessary to build full complex Monte Carlo simulators in a modern, extensible, and maintainable way. We show leading-order simulation results for multiple processes on different hardware configurations.
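The batched design can be illustrated generically. The sketch below is a plain NumPy stand-in for the pattern, not MadFlow code (MadFlow's actual stack is VegasFlow/PDFFlow on TensorFlow, and the matrix element here is a made-up placeholder): evaluate weights for a whole batch of phase-space points at once, then produce unweighted events by hit-or-miss.

```python
# Batched MC integration plus hit-or-miss unweighting. Evaluating the
# weight for the whole batch in one vectorized call is what maps well
# onto GPUs and multi-threaded CPUs alike.
import numpy as np

rng = np.random.default_rng(1)

def matrix_element(x):
    """Hypothetical stand-in for an exported |M|^2 over unit phase space."""
    return 1.0 + x[:, 0] * x[:, 1]        # vectorized over the whole batch

def mc_integrate(n_events, dim=2):
    x = rng.random((n_events, dim))       # one big batch of phase-space points
    w = matrix_element(x)                 # per-event weights
    integral = w.mean()                   # flat phase-space estimate
    err = w.std() / np.sqrt(n_events)
    keep = rng.random(n_events) < w / w.max()   # unweighting: accept w/w_max
    return integral, err, x[keep]

sigma, dsigma, events = mc_integrate(1_000_000)
print(f"integral = {sigma:.4f} +- {dsigma:.4f}, {len(events)} unweighted events")
# exact value of the toy integral over the unit square is 1.25
```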
We present a shock capturing method for large-eddy simulation of turbulent flows. The proposed method relies on physical mechanisms to resolve and smooth sharp unresolved flow features that may otherwise lead to numerical instability, such as shock waves and under-resolved thermal and shear layers. To that end, we devise various sensors to detect when and where the shear viscosity, bulk viscosity and thermal conductivity of the fluid do not suffice to stabilize the numerical solution. In such cases, the fluid viscosities are selectively increased to ensure the cell Peclet number is of order one so that these flow features can be well represented with the grid resolution. Although the shock capturing method is devised in the context of discontinuous Galerkin methods, it can be used with other discretization schemes. The performance of the method is illustrated through numerical simulation of external and internal flows in transonic, supersonic, and hypersonic regimes. For the problems considered, the shock capturing method performs robustly, provides sharp shock profiles, and has a small impact on the resolved turbulent structures. These three features are critical to enable robust and accurate large-eddy simulations of shock flows.
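The cell-Peclet idea can be sketched directly. The code below is an illustrative simplification, not the paper's sensors or scalings: wherever a sensor flags an under-resolved feature, the viscosity is raised just enough that Pe = |u| h / nu is of order one.

```python
# Sensor-driven artificial viscosity targeting a cell Peclet number of one.
import numpy as np

def artificial_viscosity(u, nu_mol, h, sensor, pe_target=1.0):
    """Per-cell viscosity augmentation (assumed, simplified model).

    u       : cell velocity magnitudes
    nu_mol  : molecular (physical) kinematic viscosity
    h       : local cell size
    sensor  : in [0, 1], 1 where a sharp unresolved feature is detected
    """
    nu_needed = np.abs(u) * h / pe_target        # viscosity giving Pe = target
    nu_extra = np.maximum(nu_needed - nu_mol, 0.0)
    return nu_mol + sensor * nu_extra            # add viscosity only where flagged

# example: a shock flagged in the middle cell of a tiny 1-D grid
u = np.array([300.0, 600.0, 300.0])              # m/s
sensor = np.array([0.0, 1.0, 0.0])               # dilatation-style flag (assumed)
nu = artificial_viscosity(u, nu_mol=1.5e-5, h=1e-3, sensor=sensor)
print(np.abs(u) * 1e-3 / nu)                     # Pe per cell after augmentation
```

In the flagged cell the Peclet number drops to one while the unflagged cells keep the physical viscosity, which is how the method stabilizes the shock without smearing the resolved turbulence.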