
A survey of machine learning-based physics event generation

 Added by Wally Melnitchouk
Publication date: 2021
Language: English





Event generators in high-energy nuclear and particle physics play an important role in facilitating studies of particle reactions. We survey the state-of-the-art of machine learning (ML) efforts at building physics event generators. We review ML generative models used in ML-based event generators and their specific challenges, and discuss various approaches of incorporating physics into the ML model designs to overcome these challenges. Finally, we explore some open questions related to super-resolution, fidelity, and extrapolation for physics event generation based on ML technology.
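As a rough illustration of the kind of ML generative model the survey reviews, the sketch below trains a simple GAN to map latent noise to event-level kinematic variables. It is not taken from the paper; the feature layout, network sizes, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (not from the survey): a GAN-style event generator that maps
# latent noise to a small set of event-level kinematic variables.
import torch
import torch.nn as nn

LATENT_DIM = 16
EVENT_DIM = 4   # assumed feature set, e.g. four kinematic variables per event

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, EVENT_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EVENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_events):
    """One adversarial update on a batch of (pre-scaled) training events."""
    batch = real_events.size(0)
    fake = gen(torch.randn(batch, LATENT_DIM))

    # Discriminator: separate real events from generated ones.
    opt_d.zero_grad()
    loss_d = bce(disc(real_events), torch.ones(batch, 1)) + \
             bce(disc(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(disc(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Placeholder data standing in for real generator or experimental events.
real = torch.randn(256, EVENT_DIM)
print(train_step(real))
```

Once trained, sampling new events reduces to drawing latent vectors and running a single forward pass of the generator, which is where the speed-up over conventional event generation comes from.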



Related research

We propose to replace the exact amplitudes used in MC event generators with trained machine learning regressors, with the aim of speeding up the evaluation of {\it slow} amplitudes. As a proof of concept, we study the process $gg \to ZZ$, whose LO amplitude is loop induced. We show that gradient boosting machines like $\texttt{XGBoost}$ can predict the fully differential distributions with errors below $0.1\%$, and with prediction times $\mathcal{O}(10^3)$ faster than the evaluation of the exact function. This is achieved with training times of $\sim 7$ minutes and regressors of size $\lesssim 30$ Mb. These results suggest a possible new avenue to speed up MC event generators.
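A minimal sketch of this idea, not the authors' code: a gradient-boosted regressor is fit to an expensive amplitude and then queried in its place. The toy_amplitude function below is a placeholder for a real loop-induced matrix-element call, and the phase-space parameterization and hyperparameters are assumptions.

```python
# Minimal sketch: approximate a "slow" amplitude with a gradient-boosted
# regressor, in the spirit of the gg -> ZZ study above.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

def toy_amplitude(x):
    """Stand-in for an expensive exact |M|^2 evaluation at phase-space points x."""
    s, t = x[:, 0], x[:, 1]
    return (1.0 + np.log1p(s)) / (1.0 + t**2)

# Sample phase-space points (here: two invariants drawn uniformly).
X_train = rng.uniform(0.0, 1.0, size=(50_000, 2))
y_train = toy_amplitude(X_train)

reg = XGBRegressor(n_estimators=300, max_depth=8, learning_rate=0.1, tree_method="hist")
reg.fit(X_train, y_train)

# The trained regressor replaces the exact function inside the event-generator loop.
X_test = rng.uniform(0.0, 1.0, size=(10_000, 2))
rel_err = np.abs(reg.predict(X_test) - toy_amplitude(X_test)) / toy_amplitude(X_test)
print(f"median relative error: {np.median(rel_err):.2e}")
```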
Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.
Satish Karra, Bulbul Ahmmed, 2021
Physics-informed machine learning has recently become attractive for learning physical parameters and features from simulation and observation data. However, most existing methods do not ensure that the physics, such as balance laws (e.g., mass, momentum, and energy conservation), are constrained. Some recent works (e.g., physics-informed neural networks) softly enforce physics constraints by including partial differential equation (PDE)-based loss functions, but require re-discretization of the PDEs using auto-differentiation. Training these neural networks on observational data has shown that one can solve forward and inverse problems in one shot, evaluating both the state variables and the parameters of a PDE. This re-discretization of PDEs is not necessarily an attractive option for domain scientists who work with physics-based codes that have been developed over decades, with sophisticated discretization techniques, to solve complex process models and advanced equations of state. This paper proposes a physics-constrained machine learning framework, AdjointNet, that allows domain scientists to embed their physics code in neural network training workflows. This embedding ensures that physics is constrained everywhere in the domain. Additionally, mathematical properties such as consistency, stability, and convergence, which are vital to the numerical solution of a PDE, are still satisfied. We show that the proposed AdjointNet framework can be used for parameter estimation (and, by extension, uncertainty quantification) and for experimental design using active learning. The applicability of our framework is demonstrated for four flow cases. Results show that AdjointNet-based inversion can estimate process model parameters with reasonable accuracy. These examples demonstrate the applicability of using existing software, with no changes in source code, to perform accurate and reliable inversion of model parameters.
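For contrast with AdjointNet's embedding of existing physics codes, the following is a minimal sketch of the "soft" PDE-loss approach the abstract describes (a physics-informed neural network), applied to a 1D steady diffusion problem chosen purely for illustration; the network size, collocation scheme, and loss weights are assumptions.

```python
# Minimal sketch of a soft physics constraint: a neural network whose loss
# penalises the residual of u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1,
# computed via auto-differentiation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual of u'' = 0 (constant conductivity), via autograd."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u

x_col = torch.rand(256, 1)                   # collocation points in the domain
x_bc = torch.tensor([[0.0], [1.0]])          # boundary points
u_bc = torch.tensor([[0.0], [1.0]])          # Dirichlet boundary values

for step in range(2000):
    opt.zero_grad()
    loss_pde = pde_residual(x_col).pow(2).mean()   # softly enforce the balance law
    loss_bc = (net(x_bc) - u_bc).pow(2).mean()     # fit the boundary data
    loss = loss_pde + 10.0 * loss_bc
    loss.backward()
    opt.step()

print(net(torch.tensor([[0.5]])))  # should approach the exact solution u(0.5) = 0.5
```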
We investigate a new structure for machine learning classifiers applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
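A minimal sketch of a parameterized classifier under toy assumptions: the theory parameter (here a hypothesised particle mass) is appended to the measured features, so a single network can be evaluated at, and interpolate between, mass points. The feature count, architecture, and mass range are illustrative.

```python
# Minimal sketch: a classifier whose inputs are measured features plus a
# physics parameter (a particle-mass hypothesis).
import torch
import torch.nn as nn

N_FEATURES = 5   # illustrative number of measured features per event

clf = nn.Sequential(
    nn.Linear(N_FEATURES + 1, 64), nn.ReLU(),   # +1 input for the mass parameter
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(features, mass, labels):
    """Signal-vs-background update; the mass hypothesis is an extra input."""
    opt.zero_grad()
    logits = clf(torch.cat([features, mass], dim=1))
    loss = bce(logits, labels)
    loss.backward()
    opt.step()
    return loss.item()

# Placeholder batch: events labelled signal/background at assorted mass hypotheses.
feats = torch.randn(512, N_FEATURES)
mass = torch.rand(512, 1) * 500.0 + 250.0     # e.g. masses in [250, 750] GeV
labels = torch.randint(0, 2, (512, 1)).float()
print(train_step(feats, mass, labels))

# At evaluation time the same network is queried at any intermediate mass value.
```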
Classical and exceptional Lie algebras and their representations are among the most important tools in the analysis of symmetry in physical systems. In this letter we show how the computation of tensor products and branching rules of irreducible representations are machine-learnable, and can achieve relative speed-ups of orders of magnitude in comparison to the non-ML algorithms.
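As an illustration of casting a Lie-algebra computation as supervised learning (not the paper's actual setup or algebras), the sketch below fits a gradient-boosted regressor to the number of irreducible components in the tensor product of two su(2) irreps, for which the exact answer $2\min(j_1, j_2) + 1$ provides training labels.

```python
# Minimal sketch: learn an su(2) tensor-product count from labelled examples.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
spins = rng.integers(0, 40, size=(50_000, 2)) / 2.0    # j1, j2 in half-integer steps
n_components = 2 * spins.min(axis=1) + 1                # exact decomposition count

reg = XGBRegressor(n_estimators=300, max_depth=6)
reg.fit(spins, n_components)

test = np.array([[3.5, 1.0], [10.0, 10.0]])
print(reg.predict(test))   # exact answers: 3 and 21
```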
