
(Machine) Learning amplitudes for faster event generation

Added by Fady Bishara
Publication date: 2019
Language: English





We propose to replace the exact amplitudes used in MC event generators with trained Machine Learning regressors, with the aim of speeding up the evaluation of {\it slow} amplitudes. As a proof of concept, we study the process $gg \to ZZ$, whose LO amplitude is loop induced. We show that gradient boosting machines like $\texttt{XGBoost}$ can predict the fully differential distributions with errors below $0.1\%$, and with prediction times $\mathcal{O}(10^3)$ faster than the evaluation of the exact function. This is achieved with training times of $\sim 7$ minutes and regressors of size $\lesssim 30$~MB. These results suggest a possible new avenue to speed up MC event generators.
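The workflow the abstract describes can be sketched as follows: evaluate the exact (slow) amplitude once on a training sample of phase-space points, fit a gradient-boosted regressor, and then call the cheap surrogate inside the generator loop. The snippet below is a minimal illustration of that idea, assuming a placeholder slow_amplitude function and illustrative hyperparameters; it is not the configuration used in the paper.

# Minimal sketch: train an XGBoost surrogate for a slow amplitude.
# `slow_amplitude` is a stand-in for the exact loop-induced |M|^2;
# all hyperparameters are illustrative, not those tuned in the paper.
import numpy as np
import xgboost as xgb

def slow_amplitude(x):
    # Placeholder for the expensive exact evaluation at phase-space points x.
    return np.exp(-np.sum(x**2, axis=1)) * (1.0 + 0.5 * np.cos(3.0 * x[:, 0]))

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(100_000, 4))   # phase-space variables
y_train = slow_amplitude(X_train)                      # slow step, done once offline

reg = xgb.XGBRegressor(n_estimators=500, max_depth=8, learning_rate=0.1,
                       tree_method="hist", objective="reg:squarederror")
reg.fit(X_train, y_train)

# Fast evaluation, as it would be called inside the event-generator loop:
X_test = rng.uniform(-1.0, 1.0, size=(10_000, 4))
y_exact = slow_amplitude(X_test)
rel_err = np.abs(reg.predict(X_test) - y_exact) / y_exact
print(f"median relative error: {np.median(rel_err):.2e}")

The trained model can be persisted with reg.save_model and reloaded by the generator, which is where the small regressor sizes quoted in the abstract matter.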



Related research

Event generators in high-energy nuclear and particle physics play an important role in facilitating studies of particle reactions. We survey the state of the art of machine learning (ML) efforts at building physics event generators. We review ML generative models used in ML-based event generators and their specific challenges, and discuss various approaches to incorporating physics into the ML model designs to overcome these challenges. Finally, we explore some open questions related to super-resolution, fidelity, and extrapolation for physics event generation based on ML technology.
A wealth of new physics models, motivated by questions such as the nature of dark matter, the origin of the neutrino masses, and the baryon asymmetry of the universe, predict the existence of hidden sectors featuring new particles. Among the possibilities are heavy neutral leptons, vectors and scalars that feebly interact with the Standard Model (SM) sector and are typically light and long lived. Such new states could be produced in high-intensity facilities, the so-called beam dump experiments, either directly in the hard interaction or as a decay product of heavier mesons. They could then decay back to the SM or to hidden sector particles, giving rise to peculiar decay or interaction signatures in a far-placed detector. Simulating such events presents a challenge, as not only short-distance new physics (hard production, hadron decays, and interaction with the detector) and the usual SM phenomena need to be described, but also the propagation of the long-lived particles, as determined by the geometry of the detector. In this work, we describe a new plugin to the {\sc MadGraph5\_aMC@NLO} platform, which allows the complete simulation of new physics processes relevant for beam dump experiments, including the various mechanisms for the production of hidden particles, their subsequent decays or scattering off SM particles, as well as their far detection, taking into account spatial correlations and the geometry of the experiment.
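The geometric acceptance mentioned above ultimately reduces to the probability that a long-lived particle decays inside the instrumented decay volume. The sketch below shows the standard exponential-decay weight for a straight-line trajectory; it is a generic illustration, not the plugin's actual interface, and all names and numbers are hypothetical.

# Generic sketch of the decay-in-volume weight for a long-lived particle.
# This illustrates the geometry dependence described above; it is not the
# MadGraph5_aMC@NLO plugin's API, just the standard exponential decay law.
import numpy as np

def decay_in_detector_probability(p, m, ctau, L_entry, L_exit):
    """Probability that a particle of momentum p [GeV], mass m [GeV] and
    proper decay length ctau [m] decays between path lengths L_entry and
    L_exit [m] along its line of flight."""
    lab_decay_length = (p / m) * ctau   # beta*gamma times c*tau
    return np.exp(-L_entry / lab_decay_length) - np.exp(-L_exit / lab_decay_length)

# Hypothetical example: a 1 GeV hidden scalar with ctau = 10 m produced with
# p = 20 GeV, behind 50 m of shielding followed by a 10 m long decay volume.
print(decay_in_detector_probability(p=20.0, m=1.0, ctau=10.0, L_entry=50.0, L_exit=60.0))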
Fluorescence lifetime imaging microscopy (FLIM) is a powerful technique in biomedical research that uses the fluorophore decay rate to provide additional contrast in fluorescence microscopy. However, at present, the calculation, analysis, and interpretation of FLIM data is a complex, slow, and computationally expensive process. Machine learning (ML) techniques are well suited to extract and interpret measurements from multi-dimensional FLIM data sets with substantial improvement in speed over conventional methods. In this topical review, we first discuss the basics of FLIM and ML. Second, we provide a summary of lifetime extraction strategies using ML and its applications in classifying and segmenting FLIM images with higher accuracy compared to conventional methods. Finally, we discuss two potential directions to improve FLIM with ML, with proof-of-concept demonstrations.
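For context, the conventional lifetime extraction that such ML strategies aim to accelerate is typically a per-pixel fit of an exponential decay model. The toy fit below illustrates this baseline with made-up numbers (assumed time range, lifetime, and noise model), not data from the review.

# Conventional per-pixel lifetime extraction: fit a mono-exponential decay
# I(t) = A*exp(-t/tau) + offset to a photon-arrival histogram.
# All numbers here are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

t = np.linspace(0.0, 12.5, 256)                          # time bins [ns]
true_counts = 1000.0 * np.exp(-t / 2.5) + 20.0           # true lifetime: 2.5 ns
counts = np.random.default_rng(1).poisson(true_counts)   # photon-counting noise

popt, _ = curve_fit(decay, t, counts, p0=(counts.max(), 1.0, 0.0))
print(f"fitted lifetime tau = {popt[1]:.2f} ns")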
We present a simple method to automatically evaluate arbitrary tree-level amplitudes involving the production or decay of a heavy quark pair $Q\bar{Q}$ in a generic $^{2S+1}L_J^{[1,8]}$ state, i.e., the short distance coefficients appearing in the NRQCD factorization formalism. Our approach is based on extracting the relevant contributions from the open heavy quark-antiquark amplitudes through an expansion with respect to the quark-antiquark relative momentum and the application of suitable color and spin projectors. To illustrate the capabilities of the method and its implementation in MadGraph, a few applications to quarkonium collider phenomenology are presented.
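As a reminder of what such projectors look like, the commonly used leading-order color and spin projectors (written here in a standard NRQCD normalisation, not necessarily the exact conventions adopted by these authors) are
$$
\mathcal{C}_1 = \frac{\delta_{ij}}{\sqrt{N_c}}, \qquad
\mathcal{C}_8 = \sqrt{2}\, T^a_{ij}, \qquad
\Pi_{0} = \frac{1}{\sqrt{8m^3}}\left(\frac{\slashed{P}}{2}-m\right)\gamma_5\left(\frac{\slashed{P}}{2}+m\right), \qquad
\Pi_{1}^{\alpha} = \frac{1}{\sqrt{8m^3}}\left(\frac{\slashed{P}}{2}-m\right)\gamma^{\alpha}\left(\frac{\slashed{P}}{2}+m\right),
$$
where $P$ is the total momentum of the $Q\bar{Q}$ pair and $m$ the heavy-quark mass; contracting the open-quark amplitude with these objects and expanding in the relative momentum isolates the $^{2S+1}L_J^{[1,8]}$ short-distance coefficients.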
In this paper the current release of the Monte Carlo event generator Sherpa, version 1.1, is presented. Sherpa is a general-purpose tool for the simulation of particle collisions at high-energy colliders. It contains a very flexible tree-level matrix-element generator for the calculation of hard scattering processes within the Standard Model and various new physics models. The emission of additional QCD partons off the initial and final states is described through a parton-shower model. To consistently combine multi-parton matrix elements with the QCD parton cascades, the approach of Catani, Krauss, Kuhn and Webber is employed. A simple model of multiple interactions is used to account for underlying events in hadron-hadron collisions. The fragmentation of partons into primary hadrons is described using a phenomenological cluster-hadronisation model. A comprehensive library for simulating tau-lepton and hadron decays is provided. Where available, form-factor models and matrix elements are used, allowing for the inclusion of spin correlations; effects of virtual and real QED corrections are included using the approach of Yennie, Frautschi and Suura.