
Accelerated Event-by-Event Neutrino Oscillation Reweighting with Matter Effects on a GPU

Submitted by: Richard Calland
Publication date: 2013
Research field: Physics
Paper language: English





Oscillation probability calculations are becoming increasingly CPU intensive in modern neutrino oscillation analyses. Because individual events in a Monte Carlo sample can be reweighted independently of one another, the calculation lends itself to parallel implementation on a Graphics Processing Unit. The Prob3++ library was ported to the GPU using the CUDA C API, allowing large-scale parallelized calculation of neutrino oscillation probabilities through matter of constant density and reducing the execution time by a factor of 75 compared with a single CPU.
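
To make the parallelization pattern concrete, the following is a minimal CUDA sketch of event-by-event reweighting: one thread per Monte Carlo event, each evaluating an oscillation probability from that event's energy. The physics shown is the simple two-flavour vacuum formula standing in for the full three-flavour constant-density matter calculation performed by Prob3++, and the kernel and parameter names are illustrative, not the actual Prob3++ interface.

// Minimal sketch of event-by-event oscillation reweighting on a GPU.
// One thread handles one Monte Carlo event; the physics here is the
// two-flavour vacuum approximation, standing in for the full
// three-flavour constant-density matter calculation done by Prob3++.
// Kernel and parameter names are illustrative, not the Prob3++ API.
#include <cuda_runtime.h>

__global__ void oscillation_weights(const float* energy_GeV,   // per-event neutrino energy
                                    float* weight,             // output oscillation weight
                                    int n_events,
                                    float sin2_2theta,         // mixing amplitude
                                    float dm2_eV2,             // mass splitting [eV^2]
                                    float baseline_km)         // fixed baseline
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_events) return;

    // Two-flavour vacuum probability:
    // P = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E)
    float phase = 1.267f * dm2_eV2 * baseline_km / energy_GeV[i];
    float s = sinf(phase);
    weight[i] = sin2_2theta * s * s;
}

int main()
{
    const int n = 1 << 20;                       // one million MC events
    float *d_energy, *d_weight;
    cudaMalloc(&d_energy, n * sizeof(float));
    cudaMalloc(&d_weight, n * sizeof(float));
    // ... fill d_energy with per-event energies (omitted) ...

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    oscillation_weights<<<blocks, threads>>>(d_energy, d_weight, n,
                                             0.95f, 2.4e-3f, 295.0f);
    cudaDeviceSynchronize();

    cudaFree(d_energy);
    cudaFree(d_weight);
    return 0;
}

Because each event's weight depends only on that event's kinematics and the shared oscillation parameters, there is no inter-thread communication, which is what makes the reweighting embarrassingly parallel.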




Read also

Signal estimation in the presence of background noise is a common problem in several scientific disciplines. An On/Off measurement is performed when the background itself is not known and is instead estimated from a background control sample. The frequentist and Bayesian approaches for signal estimation in On/Off measurements are reviewed and compared, focusing on the weaknesses of the former and on the advantages of the latter in correctly addressing the Poissonian nature of the problem. In this work, we devise a novel reconstruction method, dubbed BASiL (Bayesian Analysis including Single-event Likelihoods), for estimating the signal rate based on the Bayesian formalism. It uses information on individual event-by-event parameters and their distributions for the signal and background populations. Events are thereby weighted according to their likelihood of being a signal or a background event, and background suppression can be achieved without performing fixed fiducial cuts. Throughout the work, we maintain a general notation that allows the method to be applied generically, and provide a performance test using real data and simulations of observations with the MAGIC telescopes as a demonstration for Cherenkov telescopes. BASiL allows the signal to be estimated more precisely, avoiding the loss of exposure due to signal-extraction cuts. We expect its applicability to be straightforward in similar cases.
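As a rough illustration of the event-by-event weighting idea (a generic form, not necessarily the exact construction used in BASiL): an event with measured parameters $x_i$ can be weighted by its probability of being signal, $w_i = s\,f_s(x_i) / \left( s\,f_s(x_i) + b\,f_b(x_i) \right)$, where $f_s$ and $f_b$ are the distributions of the individual event parameters for the signal and background populations and $s$, $b$ are the expected signal and background rates. Background-like events then carry little weight in the signal estimate, which is how suppression is obtained without fixed fiducial cuts.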
This paper discusses a parallelized event reconstruction for the COMET Phase-I experiment. The experiment aims to discover charged lepton flavor violation by observing 104.97 MeV electrons from neutrinoless muon-to-electron conversion in muonic atoms. The event reconstruction of electrons with multiple helix turns is a challenging problem because hit-to-turn classification is computationally expensive. The introduced algorithm finds an optimal seed of position and momentum for each turn partition by investigating the residual sum of squares of the distances of closest approach (DCA) between the hits and a track extrapolated from the seed. Hits with a DCA below a cutoff value are assigned to the turn represented by the seed. The classification performance was optimized by tuning the cutoff value and refining the set of classified hits. The workload was parallelized over the seeds and the hits by defining two GPU kernels, which record the track parameters extrapolated from the seeds and find the DCAs of the hits, respectively. A reasonable efficiency and momentum resolution were obtained over a wide momentum region covering both signal and background electrons. The event reconstruction results from the CPU and GPU were identical. The benchmarked GPUs showed an order-of-magnitude speedup over a 16-core CPU, with the exact gains varying across architectures.
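The kernel below is a minimal CUDA sketch of the hit-classification step under simplifying assumptions: each (seed, hit) pair is handled by one thread, the track is approximated by a circle in the transverse plane, and a hit is assigned to the seed's turn if its DCA falls below the cutoff. The actual COMET reconstruction extrapolates full helix parameters; the names here are illustrative, not the experiment's software.

// Sketch of GPU hit-to-turn classification: one thread per (seed, hit)
// pair computes a transverse-plane DCA to a circular track approximation
// and flags the hit if it lies within the cutoff.  Illustrative only,
// not the COMET reconstruction code.
#include <cuda_runtime.h>

struct CircleSeed { float cx, cy, r; };   // transverse circle from a seed

__global__ void classify_hits(const CircleSeed* seeds, int n_seeds,
                              const float2* hits, int n_hits,
                              float dca_cut,
                              unsigned char* assigned /* n_seeds * n_hits */)
{
    int s = blockIdx.y;                                 // seed index
    int h = blockIdx.x * blockDim.x + threadIdx.x;      // hit index
    if (s >= n_seeds || h >= n_hits) return;

    float dx  = hits[h].x - seeds[s].cx;
    float dy  = hits[h].y - seeds[s].cy;
    float dca = fabsf(sqrtf(dx * dx + dy * dy) - seeds[s].r);

    assigned[s * n_hits + h] = (dca < dca_cut) ? 1 : 0;
}

A launch such as dim3 grid((n_hits + 255) / 256, n_seeds) with 256 threads per block covers all seed-hit pairs, mirroring the parallelization over seeds and hits described above.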
There is a growing use of neural network classifiers as unbinned, high-dimensional (and variable-dimensional) reweighting functions. To date, the focus has been on marginal reweighting, where a subset of features are used for reweighting while all other features are integrated over. There are some situations, though, where it is preferable to condition on auxiliary features instead of marginalizing over them. In this paper, we introduce neural conditional reweighting, which extends neural marginal reweighting to the conditional case. This approach is particularly relevant in high-energy physics experiments for reweighting detector effects conditioned on particle-level truth information. We leverage a custom loss function that not only allows us to achieve neural conditional reweighting through a single training procedure, but also yields sensible interpolation even in the presence of phase space holes. As a specific example, we apply neural conditional reweighting to the energy response of high-energy jets, which could be used to improve the modeling of physics objects in parametrized fast simulation packages.
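For context, the marginal case can be summarized by the well-known classifier trick, which the paper extends to the conditional setting with a custom loss not reproduced here: a classifier $f(x)$ trained with binary cross-entropy to separate samples drawn from $p_1(x)$ and $p_0(x)$ converges towards $f(x) \approx p_1(x)/(p_1(x)+p_0(x))$, so the per-event weight $w(x) = f(x)/(1-f(x)) \approx p_1(x)/p_0(x)$ reweights the second sample onto the first. The conditional version instead targets ratios of the form $p_1(x \mid z)/p_0(x \mid z)$, with $z$ the particle-level truth information on which the detector response is conditioned.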
We have studied the distribution of traffic flow $q$ for the Nagel-Schreckenberg model by computer simulations. We applied a large-deviation approach, which allowed us to obtain the distribution $P(q)$ over more than one hundred decades in probability, down to probabilities like $10^{-140}$. This allowed us to characterize the flow distribution over a large range of the support and identify the characteristics of rare and even very rare traffic situations. We observe a change of the distribution shape when increasing the density of cars from the free flow to the congestion phase. Furthermore, we characterize typical and rare traffic situations by measuring correlations of $q$ to other quantities like density of standing cars or number and size of traffic jams.
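Probabilities as small as $10^{-140}$ are inaccessible to direct sampling. A generic tilted-ensemble scheme (stated here as an assumption about this class of large-deviation methods, not the paper's exact implementation) samples configurations $C$ with Metropolis weight proportional to $e^{-q(C)/\Theta}$ for several values of a temperature-like parameter $\Theta$, and recovers the target distribution through $P(q) = e^{q/\Theta} Z(\Theta) P_{\Theta}(q)$, where the unknown constants $Z(\Theta)$ are fixed by matching overlapping histograms from neighbouring $\Theta$ values.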
Kenan Šehić, 2020
In offshore engineering design, nonlinear wave models are often used to propagate stochastic waves from an input boundary to the location of an offshore structure. Each wave realization is typically characterized by a high-dimensional input time series, and a reliable determination of the extreme events is associated with substantial computational effort. As the sea depth decreases, extreme events become more difficult to evaluate. We here construct a low-dimensional characterization of the candidate input time series to circumvent the search for extreme wave events in a high-dimensional input probability space. Each wave input is represented by a unique low-dimensional set of parameters for which standard surrogate approximations, such as Gaussian processes, can estimate the short-term exceedance probability efficiently and accurately. We demonstrate the advantages of the new approach with a simple shallow-water wave model based on the Korteweg-de Vries equation, for which we can provide an accurate reference solution based on the simple Monte Carlo method. We furthermore apply the method to a fully nonlinear wave model for wave propagation over a sloping seabed. The results demonstrate that the Gaussian process can accurately learn the tail of the heavy-tailed distribution of the maximum wave crest elevation based on only $1.7\%$ of the required Monte Carlo evaluations.
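Schematically (with notation introduced here, not taken from the paper): each candidate wave input is reduced to a low-dimensional parameter vector $\theta$, a Gaussian-process surrogate $\hat{g}(\theta)$ is trained on a small number of full nonlinear simulations to predict the short-term response such as the maximum crest elevation, and the exceedance probability is then estimated by Monte Carlo over the surrogate, $P(\eta_{\max} > \eta) \approx \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}[\hat{g}(\theta_i) > \eta]$, with the $\theta_i$ drawn from the input distribution, so that only a small fraction of the expensive wave-model evaluations is required.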