
Pile-Up Mitigation using Attention

Added by Benedikt Maier
Publication date: 2021
Field: Physics
Language: English





Particle production from secondary proton-proton collisions, commonly referred to as pile-up, impairs the sensitivity of both new physics searches and precision measurements at LHC experiments. We propose a novel algorithm, PUMA, for identifying pile-up objects with the help of deep neural networks based on sparse transformers. These attention mechanisms were developed for natural language processing but have become popular in other applications. In a realistic detector simulation, our method outperforms classical benchmark algorithms for pile-up mitigation in key observables. It provides a perspective for mitigating the effects of pile-up in the high luminosity era of the LHC, where up to 200 proton-proton collisions are expected to occur simultaneously.
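
To make the attention-based approach concrete, the sketch below shows a per-particle pile-up classifier built around a standard (dense) transformer encoder. It is only an illustration of the general idea, not the PUMA implementation, which uses sparse attention; the input features, layer sizes, and hyperparameters are assumptions.

```python
# Illustrative sketch only: a per-particle pile-up classifier using a standard
# transformer encoder. PUMA itself uses sparse attention; all feature counts
# and layer sizes here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class PileupTagger(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # per-particle embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # attention across the event
        self.head = nn.Linear(d_model, 1)                      # pile-up score per particle

    def forward(self, x, pad_mask=None):
        # x: (batch, n_particles, n_features), e.g. pT, eta, phi, charge, ...
        h = self.encoder(self.embed(x), src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.head(h)).squeeze(-1)         # 1 = pile-up-like

# toy usage: 2 events with up to 100 particle candidates each
model = PileupTagger()
scores = model(torch.randn(2, 100, 8))
print(scores.shape)  # torch.Size([2, 100])
```

In this kind of setup each particle attends to every other particle in the event, which is what lets the network use event-wide context rather than per-particle cuts; a sparse attention pattern, as in the paper, reduces the cost of that step for large particle multiplicities.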



Related research

With Skipper-CCD detectors it is possible to take multiple samples of the charge packet collected on each pixel. After averaging the samples, the noise can be greatly reduced, allowing the exact counting of electrons per pixel. In this work we present an analog circuit that, with a minimum number of components, applies a double slope integration (DSI) and, at the same time, averages the multiple samples, producing at its output the pixel value with sub-electron noise. For this purpose, we introduce the technique of using the DSI integrator capacitor to add the skipper samples. An experimental verification using discrete components is presented, together with an analysis of its noise sources and limitations. After averaging 400 samples it was possible to reach a readout noise of 0.2 $e^-_{RMS}/pix$, comparable to other available readout systems. Due to its simplicity and significant reduction of the sampling requirements, this circuit technique is of particular interest in particle experiments and cameras with a high density of Skipper-CCDs.
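
As a quick sanity check of the averaging step, the snippet below applies the expected $1/\sqrt{N}$ scaling of uncorrelated readout noise over the skipper samples; the single-sample noise value is assumed purely for illustration.

```python
# Back-of-the-envelope check of the 1/sqrt(N) noise reduction from averaging
# N non-destructive skipper samples. The single-sample noise below is an
# assumed illustrative value, not a measured figure from the paper.
import numpy as np

single_sample_noise = 4.0   # e- RMS per sample (assumed for illustration)
for n_samples in (1, 100, 400):
    averaged = single_sample_noise / np.sqrt(n_samples)
    print(f"N = {n_samples:4d}: ~{averaged:.2f} e- RMS")
# With N = 400 the noise drops by a factor of 20, consistent in order of
# magnitude with the ~0.2 e- RMS/pix quoted above.
```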
Precise characterization of detector time resolution is of crucial importance for next-generation cryogenic-bolometer experiments searching for neutrinoless double-beta decay, such as CUPID, in order to reject background due to pile-up of two-neutrino double-beta decay events. In this paper, we describe a technique developed to study the pile-up rejection capability of cryogenic bolometers. Our approach, which consists of producing controlled pile-up events with a programmable waveform generator, has the benefit that we can reliably and reproducibly control the time separation and relative energy of the individual components of the generated pile-up events. The resulting data allow us to optimize and benchmark analysis strategies to discriminate between individual and pile-up pulses. We describe a test of this technique performed with a small array of detectors at the Laboratori Nazionali del Gran Sasso, in Italy; we obtain a 90% rejection efficiency against pulser-generated pile-up events with rise time of ~15 ms down to time separation between the individual events of about 2 ms.
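
A software analogue of the pulser-based procedure can be sketched as follows: superimpose two template pulses with a programmable time separation and amplitude ratio, then feed the result to the same analysis used for single pulses. The pulse shape, time constants, and sampling rate below are illustrative assumptions, not those of the CUPID detectors.

```python
# Sketch of controlled pile-up generation: superimpose two template pulses
# with a chosen time separation and amplitude ratio. The pulse shape, time
# constants and sampling rate are illustrative assumptions.
import numpy as np

def template_pulse(t, rise=15e-3, decay=80e-3):
    """Simple two-exponential bolometer-like pulse (times in seconds)."""
    s = np.clip(t, 0, None)
    p = (1 - np.exp(-s / rise)) * np.exp(-s / decay)
    return p / p.max()

fs = 1000.0                              # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)            # 1 s acquisition window
dt_sep = 2e-3                            # time separation between the two pulses
ratio = 0.5                              # relative energy of the second pulse
pileup = template_pulse(t) + ratio * template_pulse(t - dt_sep)
# 'pileup' can now be passed to the single-pulse analysis to benchmark how
# well the two components are discriminated as dt_sep and ratio are varied.
```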
$^{136}$Xe is used as the target medium for many experiments searching for neutrinoless double beta decay ($0\nu\beta\beta$). Despite underground operation, cosmic muons that reach the laboratory can produce spallation neutrons causing activation of detector materials. A potential background that is difficult to veto using muon tagging comes in the form of $^{137}$Xe created by the capture of neutrons on $^{136}$Xe. This isotope decays via beta decay with a half-life of 3.8 minutes and a $Q_\beta$ of $\sim$4.16 MeV. This work proposes and explores the concept of adding a small percentage of $^{3}$He to xenon as a means to capture thermal neutrons and reduce the number of activations in the detector volume. When using this technique we find the contamination from $^{137}$Xe activation can be reduced to negligible levels in tonne and multi-tonne scale high pressure gas xenon neutrinoless double beta decay experiments running at any depth in an underground laboratory.
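
The effectiveness of a small $^{3}$He admixture can be illustrated with a simple capture-fraction estimate; the cross sections below are approximate thermal-neutron literature values used only for illustration, and the 1% admixture is an assumed example rather than a value from this work.

```python
# Rough illustration of why a small 3He admixture is effective: the thermal
# neutron capture cross section of 3He is vastly larger than that of 136Xe.
# Cross sections are approximate literature values; the 1% fraction is an
# assumed example.
sigma_he3 = 5330.0    # barn, 3He(n,p) thermal capture (approximate)
sigma_xe136 = 0.26    # barn, 136Xe(n,gamma) thermal capture (approximate)

f_he3 = 0.01          # assumed 1% 3He admixture by number
f_xe = 1.0 - f_he3

p_capture_on_he3 = f_he3 * sigma_he3 / (f_he3 * sigma_he3 + f_xe * sigma_xe136)
print(f"Fraction of thermal captures on 3He: {p_capture_on_he3:.4f}")
# Even at the 1% level essentially all thermal captures occur on 3He,
# suppressing 137Xe production.
```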
Steven Lantz (2020)
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that are now prevalent in high-performance hardware. In this paper, we discuss the design and performance of the improved tracking algorithm, referred to as mkFit. A key piece of the algorithm is the Matriplex library, containing dedicated code to optimally vectorize operations on small matrices. The physics performance of the mkFit algorithm is comparable to the nominal CMS tracking algorithm when reconstructing tracks from simulated proton-proton collisions within the CMS detector. We study the scaling of the algorithm as a function of the parallel resources utilized and find large speedups both from vectorization and multi-threading. mkFit achieves a speedup of a factor of 6 compared to the nominal algorithm when run in a single-threaded application within the CMS software framework.
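
The Matriplex idea of processing many small matrices in lockstep can be illustrated with a batched Kalman update; the sketch below uses NumPy broadcasting as a stand-in for the dedicated SIMD layout, and the state/measurement dimensions and noise values are assumptions rather than mkFit's actual configuration.

```python
# Sketch of the "many small matrices at once" idea behind Matriplex-style
# vectorization: a batched Kalman filter update applied to N track candidates
# simultaneously via NumPy broadcasting. Dimensions and noise values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, S, M = 1024, 6, 3                      # tracks, state dim, measurement dim

x = rng.normal(size=(N, S))               # track states
P = np.tile(np.eye(S), (N, 1, 1))         # state covariances
H = np.tile(np.eye(M, S), (N, 1, 1))      # measurement model (first 3 components)
R = 0.01 * np.tile(np.eye(M), (N, 1, 1))  # measurement noise
z = rng.normal(size=(N, M))               # hit measurements

# Batched Kalman gain: K = P H^T (H P H^T + R)^-1, computed for all N tracks
PHt = P @ H.transpose(0, 2, 1)
S_innov = H @ PHt + R
K = PHt @ np.linalg.inv(S_innov)

# Batched state and covariance update
residual = z - np.einsum('nij,nj->ni', H, x)
x = x + np.einsum('nij,nj->ni', K, residual)
P = (np.tile(np.eye(S), (N, 1, 1)) - K @ H) @ P
print(x.shape, P.shape)   # (1024, 6) (1024, 6, 6)
```

The point of the layout is that every arithmetic step above touches the same matrix element across all tracks at once, which maps naturally onto SIMD lanes.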
Silicon drift detectors (SDDs) revolutionized spectroscopy in fields as diverse as geology and dentistry. For a subset of experiments at ultra-fast, x-ray free-electron lasers (FELs), SDDs can make substantial contributions. Often the unknown spectrum is interesting, carrying science data, or the background measurement is useful to identify unexpected signals. Many measurements involve only several discrete photon energies known a priori, allowing single event decomposition of pile-up and spectroscopic photon counting. We designed a pulse function and demonstrated that the signal amplitude and rise time are obtained for each pulse by fitting, thus removing the need for pulse shaping. By avoiding pulse shaping, rise times of tens of nanoseconds resulted in reduced pulse pile-up and allowed decomposition of remaining pulse pile-up at photon separation times down to hundreds of nanoseconds while yielding time-of-arrival information with precision of 10 nanoseconds. Waveform fitting yields simultaneously high energy resolution and high counting rates (2 orders of magnitude higher than current digital pulse processors). We showed that pile-up spectrum fitting is relatively simple and preferable to pile-up spectrum deconvolution. We developed a photon pile-up statistical model for constant intensity sources, extended it to variable intensity sources (typical for FELs) and used it to fit a complex pile-up spectrum. We subsequently developed a Bayesian pile-up decomposition method that allows decomposing pile-up of single events with up to 6 photons from 6 monochromatic lines with 99% accuracy. The usefulness of SDDs will continue into the x-ray FEL era of science. Their successors, the ePixS hybrid pixel detectors, already offer hundreds of pixels, each with similar performance to an SDD, in a compact, robust and affordable package.
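
The waveform-fitting approach can be sketched as a least-squares fit of a parametric pulse model to a digitized trace, returning amplitude, arrival time, and rise time directly; the pulse model, sampling, and noise level below are illustrative assumptions rather than the detector's measured response.

```python
# Sketch of waveform fitting in place of pulse shaping: fit a simple pulse
# model to a digitized trace to extract amplitude, arrival time and rise time.
# The pulse model and noise level are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amp, t0, rise, decay):
    s = np.clip(t - t0, 0, None)
    return amp * (1 - np.exp(-s / rise)) * np.exp(-s / decay)

t = np.arange(0, 2000, 10.0)              # time axis [ns], 10 ns sampling (assumed)
truth = (1.0, 500.0, 50.0, 400.0)         # amplitude, arrival, rise, decay
trace = pulse(t, *truth) + np.random.default_rng(1).normal(0, 0.02, t.size)

popt, _ = curve_fit(pulse, t, trace, p0=(0.8, 450.0, 40.0, 300.0))
print("fitted amplitude, t0, rise, decay:", np.round(popt, 1))
# The same model, summed over several arrival times, can be fitted to
# decompose piled-up pulses into their individual photon components.
```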