
More efficient formulas for efficiency correction of cumulants and effect of using averaged efficiency

Posted by: Toshihiro Nonaka
Publication date: 2017
Research field: Physics
Paper language: English





We derive formulas for the efficiency correction of cumulants with many efficiency bins. The derivation of these formulas is simpler than in the previously suggested method, while the numerical cost is drastically reduced compared with the naive method. From analytical and numerical analyses in simple toy models, we show that using the averaged efficiency in the efficiency correction can lead to incorrect corrected values, with deviations that grow for higher-order cumulants. These analyses demonstrate the importance of carrying out the efficiency correction without averaging the efficiency.
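
As a rough, self-contained illustration of the averaged-efficiency issue (a toy of our own construction, not the formulas derived in the paper), the Python sketch below generates multiplicities in two phase-space bins with different detection efficiencies and compares the second cumulant corrected bin by bin with the one corrected using a single averaged efficiency. The binomial sources, the efficiencies, and the correction formulas used here are assumptions chosen so that the bias is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 1_000_000

# Toy "true" multiplicities in two phase-space bins. Non-Poisson sources are
# chosen on purpose: for Poisson sources this particular bias vanishes.
N1 = rng.binomial(30, 0.7, n_events)
N2 = rng.binomial(40, 0.4, n_events)
eps1, eps2 = 0.8, 0.5                      # bin-by-bin detection efficiencies

# Binomial detection: each produced particle is seen with its bin's efficiency
n1 = rng.binomial(N1, eps1)
n2 = rng.binomial(N2, eps2)
n = n1 + n2

# True second cumulant of the total multiplicity
C2_true = np.var(N1 + N2)

# Bin-by-bin correction of the second cumulant (binomial-efficiency model):
#   Var(N_i)   = [Var(n_i) - (1 - eps_i) <n_i>] / eps_i^2
#   Cov(N1,N2) =  Cov(n1, n2) / (eps1 * eps2)
V1 = (np.var(n1) - (1 - eps1) * n1.mean()) / eps1**2
V2 = (np.var(n2) - (1 - eps2) * n2.mean()) / eps2**2
C12 = np.cov(n1, n2)[0, 1] / (eps1 * eps2)
C2_binwise = V1 + V2 + 2 * C12

# "Averaged efficiency" correction: a single effective efficiency for the
# whole acceptance (taken here from the toy's truth, as an MC average would be)
eps_avg = (eps1 * N1.mean() + eps2 * N2.mean()) / (N1.mean() + N2.mean())
C2_avg = (np.var(n) - (1 - eps_avg) * n.mean()) / eps_avg**2

print(f"true C2               : {C2_true:.2f}")
print(f"bin-by-bin corrected  : {C2_binwise:.2f}")
print(f"averaged-eff corrected: {C2_avg:.2f}  <- biased for non-Poisson sources")
```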




Read also

We propose a method to remove the contributions of pileup events from higher-order cumulants and moments of event-by-event particle distributions. Assuming that the pileup events are given by the superposition of two independent single-collision events, we show that the true moments in each multiplicity bin can be obtained recursively from lower-multiplicity events. In the correction procedure, the only necessary external input is the probability of pileup events; all other terms are extracted from the experimental data. We demonstrate that the true cumulants can be reconstructed successfully by this method in simple models. Systematic effects from trigger inefficiencies and correction parameters are discussed.
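
The recursion itself is specific to the paper, but the superposition picture it starts from can be sketched at the level of the full multiplicity distribution. In the toy below (our own construction: the Poisson source, the pileup probability alpha, and the distribution-level unfolding relation are all assumptions), measured events are a mixture of single collisions and superpositions of two singles, and the true distribution is recovered bin by bin from lower-multiplicity bins.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.05                     # assumed pileup probability
n_events = 2_000_000

# Toy single-collision multiplicity distribution (the truth we want back)
single = rng.poisson(8.0, n_events)

# Measured sample: with probability alpha an event is the superposition of
# two independent single collisions
is_pileup = rng.random(n_events) < alpha
extra = rng.poisson(8.0, n_events)
measured = np.where(is_pileup, single + extra, single)

nmax = measured.max()
P_meas = np.bincount(measured, minlength=nmax + 1) / n_events

# Unfold P(n) bin by bin from
#   P_meas(n) = (1 - alpha) P(n) + alpha (P*P)(n),  (P*P) = self-convolution,
# which only involves already-unfolded lower-multiplicity bins.
P = np.zeros(nmax + 1)
P[0] = (-(1 - alpha) + np.sqrt((1 - alpha) ** 2 + 4 * alpha * P_meas[0])) / (2 * alpha)
for m in range(1, nmax + 1):
    conv = sum(P[k] * P[m - k] for k in range(1, m))     # lower bins only
    P[m] = (P_meas[m] - alpha * conv) / (1 - alpha + 2 * alpha * P[0])

# Compare moments of the unfolded distribution with the truth
ns = np.arange(nmax + 1)
mean_corr = (ns * P).sum()
var_corr = ((ns - mean_corr) ** 2 * P).sum()
print(f"true      mean / var : {single.mean():.3f} / {single.var():.3f}")
print(f"measured  mean / var : {measured.mean():.3f} / {measured.var():.3f}")
print(f"unfolded  mean / var : {mean_corr:.3f} / {var_corr:.3f}")
```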
Formulae for the correlators K2 and K3 of a given particle observable (e.g. energy or transverse momentum) that account for the track reconstruction efficiency are presented. As in the case of an ideal detector, the correlators can be expressed through the event-by-event fluctuation of the single-event mean of the observable and its pseudo-central moments. In contrast to the ideal case, however, this splitting does not allow for a substantial reduction of the computation time.
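
The abstract does not spell out the definitions of K2 and K3, so the sketch below assumes a common convention: the q-particle correlator of deviations of the observable (here transverse momentum) from its inclusive mean, averaged over distinct pairs and triplets. It illustrates the ideal-detector splitting mentioned above, where the sums over distinct tuples are rewritten through event-wise power sums instead of explicit nested loops; the toy event generator is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy events: per-track transverse momenta with a fluctuating event-wise
# "temperature", so that genuine event-by-event correlations exist.
events = []
for _ in range(20_000):
    temp = rng.normal(0.30, 0.02)
    events.append(rng.gamma(2.0, temp, size=int(rng.integers(5, 60))))

mean_pt = np.mean(np.concatenate(events))   # inclusive mean defining the deviations

k2_num = k2_den = k3_num = k3_den = 0.0
for pts in events:
    d = pts - mean_pt
    m = len(d)
    if m < 3:
        continue
    s1, s2, s3 = d.sum(), (d**2).sum(), (d**3).sum()
    # Sums over distinct pairs/triplets rewritten through event-wise power
    # sums -- the "splitting" that is straightforward for an ideal detector
    k2_num += s1**2 - s2
    k2_den += m * (m - 1)
    k3_num += s1**3 - 3.0 * s1 * s2 + 2.0 * s3
    k3_den += m * (m - 1) * (m - 2)

print(f"K2 = {k2_num / k2_den:.4e}")
print(f"K3 = {k3_num / k3_den:.4e}")
```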
The Indian Scintillator Matrix for Reactor Anti-Neutrino detection (ISMRAN) experiment aims to detect electron antineutrinos ($\bar{\nu}_e$) emitted from a reactor via the inverse beta decay (IBD) reaction. The setup, consisting of a 1 ton segmented array of Gadolinium-foil-wrapped plastic scintillator bars, is planned for remote reactor monitoring and sterile neutrino searches. The detection of the prompt positron and the delayed neutron from IBD provides the signature of a $\bar{\nu}_e$ event in ISMRAN. The number of segments with an energy deposit ($\mathrm{N_{bars}}$) and the sum of these deposited energies are used as discriminants for identifying the prompt positron event and the delayed neutron capture event. However, a simple cut-based selection on these variables leads to a low $\bar{\nu}_e$ signal detection efficiency because the regions of $\mathrm{N_{bars}}$ and sum energy for the prompt and delayed events overlap. Multivariate analysis (MVA) tools, employing variables suitably tuned for discrimination, can be useful in such scenarios. In this work we report the results of applying an artificial neural network, the multilayer perceptron (MLP), and in particular its Bayesian extension (MLPBNN), to simulated signal and background events in ISMRAN. Results are reported for the classification of prompt positron events against delayed neutron capture events on Hydrogen and Gadolinium nuclei, as well as against typical reactor $\gamma$-ray and fast neutron backgrounds. Using the MLPBNN classifier, an enhanced efficiency of $\sim$91$\%$ with a background rejection of $\sim$73$\%$ is achieved for the prompt selection, and an efficiency of $\sim$89$\%$ with a background rejection of $\sim$71$\%$ for the delayed capture event.
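
As an illustration of this kind of MVA selection (not the paper's TMVA MLPBNN setup; the toy feature distributions, network size, and score cut below are invented for the example), a small multilayer perceptron can be trained on stand-ins for the $\mathrm{N_{bars}}$ and sum-energy discriminants:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 20_000

# Invented stand-ins for the two discriminants: N_bars and sum energy (MeV).
# Overlapping distributions mimic the difficulty of a simple cut-based selection.
prompt  = np.column_stack([rng.poisson(3, n), rng.normal(4.0, 1.5, n)])
delayed = np.column_stack([rng.poisson(6, n), rng.normal(6.0, 2.0, n)])

X = np.vstack([prompt, delayed]).astype(float)
y = np.concatenate([np.ones(n), np.zeros(n)])          # 1 = prompt, 0 = delayed
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A small multilayer perceptron (scikit-learn's MLPClassifier, standing in
# for the TMVA MLP/MLPBNN discriminant used in the paper)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

# Signal efficiency and background rejection at an example score cut of 0.5
score = clf.predict_proba(X_te)[:, 1]
selected = score > 0.5
efficiency = (selected & (y_te == 1)).sum() / (y_te == 1).sum()
rejection = (~selected & (y_te == 0)).sum() / (y_te == 0).sum()
print(f"prompt selection efficiency: {efficiency:.2f}")
print(f"background rejection       : {rejection:.2f}")
```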
Bayesian modeling techniques enable sensitivity analyses that incorporate detailed expectations regarding future experiments. A model-based approach also allows one to evaluate inferences and predicted outcomes by calibrating (or measuring) the consequences incurred when certain results are reported. We present procedures for calibrating predictions of an experiment's sensitivity to both continuous and discrete parameters. Using these procedures and a new Bayesian model of the $\beta$-decay spectrum, we assess a high-precision $\beta$-decay experiment's sensitivity to the neutrino mass scale and ordering for one assumed design scenario. We find that such an experiment could measure the electron-weighted neutrino mass within $\sim40\,$meV after 1 year (90$\%$ credibility). Neutrino masses $>500\,$meV could be measured to within $\approx5\,$meV. Using only $\beta$-decay and external reactor neutrino data, we find that next-generation $\beta$-decay experiments could potentially constrain the mass ordering using a two-neutrino spectral model analysis. By calibrating mass-ordering results, we identify reporting criteria that can be tuned to suppress false ordering claims. In some cases, a two-neutrino analysis can reveal that the mass ordering is inverted, a result unobtainable with the traditional one-neutrino analysis approach.
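
The calibration idea for a discrete outcome can be sketched generically. The toy below is a deliberately simplified stand-in, not the paper's Bayesian spectral analysis: the Gaussian test statistic, its separation between hypotheses, and the 1% target rate are all assumptions. It simulates pseudo-experiments under each ordering hypothesis and tunes the reporting threshold so that false ordering claims stay below the chosen rate.

```python
import numpy as np

rng = np.random.default_rng(5)
n_exp = 100_000

# Each pseudo-experiment is summarized by a single Gaussian test statistic
# (a crude stand-in for a spectral-fit Bayes factor or posterior odds).
def ordering_statistic(inverted_is_true, n, separation=1.0, sigma=1.0):
    mu = separation if inverted_is_true else 0.0
    return rng.normal(mu, sigma, n)

stat_if_normal   = ordering_statistic(False, n_exp)   # normal ordering true
stat_if_inverted = ordering_statistic(True, n_exp)    # inverted ordering true

# Calibrate the reporting criterion: choose the threshold so that false
# "inverted ordering" claims occur in at most 1% of pseudo-experiments.
target_false_rate = 0.01
threshold = np.quantile(stat_if_normal, 1.0 - target_false_rate)

false_claim_rate = (stat_if_normal > threshold).mean()
claim_rate_if_inverted = (stat_if_inverted > threshold).mean()
print(f"calibrated threshold          : {threshold:.2f}")
print(f"false claim rate (normal true): {false_claim_rate:.3f}")
print(f"claim rate (inverted true)    : {claim_rate_if_inverted:.3f}")
```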
The geometric-mean method is often used to estimate the spatial resolution of a position-sensitive detector probed by tracks. It calculates the resolution solely from measured track data, without using a detailed tracking simulation and without considering multiple-Coulomb-scattering effects. Two separate linear track fits are performed on the same data, one excluding and the other including the hit from the probed detector. The geometric mean of the widths of the corresponding exclusive and inclusive residual distributions for the probed detector is then taken as a measure of its intrinsic spatial resolution: $\sigma=\sqrt{\sigma_{ex}\cdot\sigma_{in}}$. The validity of this method is examined for a range of resolutions with a stand-alone Geant4 Monte Carlo simulation that specifically takes multiple Coulomb scattering in the tracking-detector materials into account. Using simulated as well as actual tracking data from a representative beam-test scenario, we find that the geometric-mean method gives systematically inaccurate spatial-resolution results: good resolutions are estimated as poor and vice versa. The more the resolutions of the reference detectors and the probed detector differ, the larger the systematic bias. An attempt to correct this inaccuracy by statistically subtracting multiple-scattering effects from the geometric-mean results leads to resolutions that are typically too optimistic by 10-50%. This supports an earlier critique of this method based on simulation studies that did not take multiple scattering into account.
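
The mechanics of the geometric-mean estimate can be sketched with toy straight tracks and no multiple scattering; the telescope geometry, resolutions, and unweighted straight-line fit below are assumptions, and this idealized setting is exactly what the abstract argues is insufficient in practice.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed telescope: six reference planes and the probed detector (DUT) at z_dut
z_ref = np.array([0.0, 20.0, 40.0, 80.0, 100.0, 120.0])   # mm
z_dut = 60.0
sigma_ref, sigma_dut = 0.010, 0.030                        # mm (true resolutions)

n_tracks = 100_000
x0 = rng.normal(0.0, 1.0, n_tracks)          # toy straight tracks,
slope = rng.normal(0.0, 1e-3, n_tracks)      # no multiple scattering

hits_ref = x0[:, None] + slope[:, None] * z_ref + rng.normal(0, sigma_ref, (n_tracks, z_ref.size))
hits_dut = x0 + slope * z_dut + rng.normal(0, sigma_dut, n_tracks)

def predict_at(z, hits, z_eval):
    """Unweighted straight-line fit per track, evaluated at z_eval."""
    coeffs = np.polynomial.polynomial.polyfit(z, hits.T, 1)   # shape (2, n_tracks)
    return coeffs[0] + coeffs[1] * z_eval

# Exclusive residuals: DUT hit left out of the fit
res_ex = hits_dut - predict_at(z_ref, hits_ref, z_dut)
# Inclusive residuals: DUT hit included in the fit
res_in = hits_dut - predict_at(np.append(z_ref, z_dut),
                               np.column_stack([hits_ref, hits_dut]), z_dut)

sigma_ex, sigma_in = res_ex.std(), res_in.std()
print(f"sigma_ex = {sigma_ex*1e3:.1f} um, sigma_in = {sigma_in*1e3:.1f} um")
print(f"geometric mean = {np.sqrt(sigma_ex * sigma_in)*1e3:.1f} um "
      f"(true DUT resolution: {sigma_dut*1e3:.1f} um)")
```

In this particular toy the estimate comes out a little below the true DUT value because the unweighted fit has no knowledge of the DUT's poorer resolution; the multiple-scattering effects examined in the paper are an additional, separate complication.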