
Recovery of Saturated $\gamma$ Signal Waveforms by Artificial Neural Networks

Posted by: Yu Liu
Publication date: 2018
Research field: Physics
Paper language: English





Particles may sometimes have energies outside the range of the radiation detection hardware, so that the signal saturates and useful information is lost. We have therefore investigated the possibility of using an Artificial Neural Network (ANN) to restore the saturated waveforms of $\gamma$ signals. Several ANNs were tested, namely the Back Propagation (BP), Simple Recurrent (Elman), Radial Basis Function (RBF) and Generalized Radial Basis Function (GRBF) neural networks (NNs), and compared with the fitting method based on the Marrone model. The GRBF NN was found to perform best.
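As a minimal sketch of the waveform-restoration idea (not the paper's code), the snippet below trains a back-propagation-style MLP to map clipped, saturated pulses back to their full shapes. The exponential pulse model, saturation level, and network sizes are illustrative assumptions.

```python
# Minimal sketch: restore clipped gamma-pulse samples with a BP-style network,
# trained on synthetic exponential pulses (all values below are assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_pulses, sat_level = 64, 2000, 0.8

t = np.arange(n_samples)
amp = rng.uniform(0.5, 1.5, n_pulses)            # some pulses exceed saturation
tau = rng.uniform(8.0, 12.0, n_pulses)
full = amp[:, None] * np.exp(-t[None, :] / tau[:, None])   # true waveforms
clipped = np.minimum(full, sat_level)                        # saturated readout

# Train an MLP to map the clipped waveform to the true waveform.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(clipped[:1500], full[:1500])

restored = net.predict(clipped[1500:])
rmse = np.sqrt(np.mean((restored - full[1500:]) ** 2))
print(f"RMSE on held-out pulses: {rmse:.4f}")
```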


Read also

Extracting relevant properties of empirical signals generated by nonlinear, stochastic, and high-dimensional systems is a challenge of complex systems research. Open questions are how to differentiate chaotic signals from stochastic ones, and how to quantify nonlinear and/or high-order temporal correlations. Here we propose a new technique to reliably address both problems. Our approach follows two steps: first, we train an artificial neural network (ANN) with flicker (colored) noise to predict the value of the parameter, $\alpha$, that determines the strength of the correlation of the noise. To predict $\alpha$, the ANN input features are a set of probabilities that are extracted from the time series by using symbolic ordinal analysis. Then, we input to the trained ANN the probabilities extracted from the time series of interest, and analyze the ANN output. We find that the $\alpha$ value returned by the ANN is informative of the temporal correlations present in the time series. To distinguish between stochastic and chaotic signals, we exploit the fact that the difference between the permutation entropy (PE) of a given time series and the PE of flicker noise with the same $\alpha$ parameter is small when the time series is stochastic, but it is large when the time series is chaotic. We validate our technique by analysing synthetic and empirical time series whose nature is well established. We also demonstrate the robustness of our approach with respect to the length of the time series and to the level of noise. We expect that our algorithm, which is freely available, will be very useful to the community.
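The feature-extraction step described above (ordinal-pattern probabilities and permutation entropy) can be sketched as follows; the embedding dimension D=4 is an assumption, not taken from the paper, and the ANN regression of $\alpha$ is left out.

```python
# Sketch of Bandt-Pompe symbolic ordinal analysis: ordinal-pattern
# probabilities (the ANN input features) and normalized permutation entropy.
import numpy as np
from itertools import permutations

def ordinal_probabilities(x, D=4):
    """Probabilities of the D! ordinal patterns found in time series x."""
    patterns = {p: 0 for p in permutations(range(D))}
    for i in range(len(x) - D + 1):
        patterns[tuple(np.argsort(x[i:i + D]))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def permutation_entropy(probs):
    """Normalized Shannon entropy of the ordinal-pattern distribution."""
    nz = probs[probs > 0]
    return -np.sum(nz * np.log(nz)) / np.log(len(probs))

rng = np.random.default_rng(1)
white_noise = rng.normal(size=10_000)
p = ordinal_probabilities(white_noise)
print("PE of white noise:", round(permutation_entropy(p), 3))  # close to 1
# p would then be fed to the trained ANN to estimate alpha, as described above.
```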
The identification of jets and their constituents is one of the key problems and a challenging task in heavy-ion experiments such as those at RHIC and the LHC. The huge background of soft particles poses a serious difficulty for jet-finding techniques, and the lack of efficient methods to filter out this background leads to fake or combinatorial jets that may be interpreted erroneously. In this article, we present the Graph Reduction technique (GraphRed), a novel class of physics-aware, topology-based attention graph neural networks built upon jet physics in heavy-ion collisions. The approach works directly with the physical observables of the variable-length set of final-state particles on an event-by-event basis to find the most likely jet-induced particles in an event. We demonstrate the robustness and applicability of this method for finding jet-induced particles and show that graph architectures are more efficient than previous frameworks. This is the first time a classifier operating at the particle level in each heavy-ion event produced at the LHC has been demonstrated. We present the applicability and integration of the model with current jet-finding algorithms such as FastJet.
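For intuition only, here is a toy numpy sketch of a single attention step over a variable-length set of particle features; it illustrates the general attention-over-particles idea, not the GraphRed architecture, and all features and dimensions are assumptions.

```python
# Toy self-attention over a variable-length set of final-state particles.
# Feature columns stand in for (pt, eta, phi); sizes are illustrative.
import numpy as np

def attention_layer(X, Wq, Wk, Wv):
    """Self-attention over particle feature rows of X (n_particles x n_feat)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per particle
    return weights @ V                              # aggregated messages

rng = np.random.default_rng(2)
n_particles, n_feat, d = 50, 3, 8
X = rng.normal(size=(n_particles, n_feat))
Wq, Wk, Wv = (rng.normal(size=(n_feat, d)) for _ in range(3))
H = attention_layer(X, Wq, Wk, Wv)
print(H.shape)  # (50, 8): per-particle embeddings, later scored jet vs background
```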
The geometric-mean method is often used to estimate the spatial resolution of a position-sensitive detector probed by tracks. It calculates the resolution solely from measured track data without using a detailed tracking simulation and without considering multiple Coulomb scattering effects. Two separate linear track fits are performed on the same data, one excluding and the other including the hit from the probed detector. The geometric mean of the widths of the corresponding exclusive and inclusive residual distributions for the probed detector is then taken as a measure of the intrinsic spatial resolution of the probed detector: $\sigma = \sqrt{\sigma_{\mathrm{ex}} \cdot \sigma_{\mathrm{in}}}$. The validity of this method is examined for a range of resolutions with a stand-alone Geant4 Monte Carlo simulation that specifically takes multiple Coulomb scattering in the tracking detector materials into account. Using simulated as well as actual tracking data from a representative beam test scenario, we find that the geometric-mean method gives systematically inaccurate spatial resolution results. Good resolutions are estimated as poor and vice versa. The more the resolutions of reference detectors and probed detector differ, the larger the systematic bias. An attempt to correct this inaccuracy by statistically subtracting multiple-scattering effects from geometric-mean results leads to resolutions that are typically too optimistic by 10-50%. This supports an earlier critique of this method based on simulation studies that did not take multiple scattering into account.
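A minimal numerical sketch of the estimator in the idealized, scattering-free case (where it is exact) may help; the abstract's point is precisely that multiple Coulomb scattering in real trackers breaks this relation. The resolution and track-error values below are illustrative assumptions.

```python
# Toy illustration of the geometric-mean estimator sigma = sqrt(sigma_ex * sigma_in),
# using synthetic residuals and no multiple scattering.
import numpy as np

rng = np.random.default_rng(3)
true_res = 0.050      # mm, assumed intrinsic resolution of the probed detector
track_err = 0.030     # mm, assumed track extrapolation error at the probed plane

# Idealized weighted-mean picture (no scattering):
#   sigma_ex^2 = sigma^2 + sigma_track^2
#   sigma_in^2 = sigma^4 / (sigma^2 + sigma_track^2)
sigma_ex_true = np.hypot(true_res, track_err)
sigma_in_true = true_res**2 / sigma_ex_true

res_ex = rng.normal(0.0, sigma_ex_true, 100_000)   # exclusive residuals
res_in = rng.normal(0.0, sigma_in_true, 100_000)   # inclusive residuals

sigma_gm = np.sqrt(res_ex.std() * res_in.std())
print(f"geometric-mean estimate: {sigma_gm*1e3:.1f} um, "
      f"true resolution: {true_res*1e3:.1f} um")
```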
GELATIO is a new software framework for advanced data analysis and digital signal processing developed for the GERDA neutrinoless double beta decay experiment. The framework is tailored to handle the full analysis flow of signals recorded by high purity Ge detectors and photo-multipliers from the veto counters. It is designed to support a multi-channel modular and flexible analysis, widely customizable by the user either via human-readable initialization files or via a graphical interface. The framework organizes the data into a multi-level structure, from the raw data up to the condensed analysis parameters, and includes tools and utilities to handle the data stream between the different levels. GELATIO is implemented in C++. It relies upon ROOT and its extension TAM, which provides compatibility with PROOF, enabling the software to run in parallel on clusters of computers or many-core machines. It was tested on different platforms and benchmarked in several GERDA-related applications. A stable version is presently available for the GERDA Collaboration and it is used to provide the reference analysis of the experiment data.
I. Narsky, 2005
An algorithm for optimization of signal significance or any other classification figure of merit suited for analysis of high energy physics (HEP) data is described. This algorithm trains decision trees on many bootstrap replicas of training data with each tree required to optimize the signal significance or any other chosen figure of merit. New data are then classified by a simple majority vote of the built trees. The performance of this algorithm has been studied using a search for the radiative leptonic decay $B \to \gamma \ell \nu$ at BaBar and shown to be superior to that of all other attempted classifiers including such powerful methods as boosted decision trees. In the $B \to \gamma e \nu$ channel, the described algorithm increases the expected signal significance from the 2.4$\sigma$ obtained by the original method designed for the $B \to \gamma \ell \nu$ analysis to 3.0$\sigma$.
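The bootstrap-and-vote structure of the algorithm can be sketched as below on synthetic data; standard impurity-based trees are used here as a stand-in for trees that split directly on the figure of merit, and the $S/\sqrt{S+B}$ selection is only illustrative, not the BaBar analysis.

```python
# Sketch: decision trees trained on bootstrap replicas, combined by majority
# vote, with a simple S/sqrt(S+B) figure of merit. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 5000
X_sig = rng.normal(loc=1.0, size=(n, 4))     # toy "signal" events
X_bkg = rng.normal(loc=0.0, size=(n, 4))     # toy "background" events
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Train each tree on a bootstrap replica of the training sample.
trees = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    trees.append(DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]))

# Classify events by simple majority vote of the built trees
# (evaluated on the training sample here, for brevity).
votes = np.mean([t.predict(X) for t in trees], axis=0)
selected = votes > 0.5
S, B = np.sum(selected & (y == 1)), np.sum(selected & (y == 0))
print(f"signal significance S/sqrt(S+B) = {S / np.sqrt(S + B):.1f}")
```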