
Reconstructing particle number distributions with convoluting volume fluctuations

Added by Toshihiro Nonaka
Publication date: 2020
Fields: Physics
Language: English





We propose methods to reconstruct particle distributions with and without considering initial volume fluctuations. This approach enables us to correct for detector efficiencies and initial volume fluctuations simultaneously. Our study suggests that such a tool could be used to investigate the possible bimodal structure of the net-proton distribution in Au+Au collisions at $\sqrt{s_{\rm NN}}=7.7$ GeV, a signature of a first-order phase transition and critical point [arXiv:1804.04463, arXiv:1811.04456].
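The abstract does not spell out the reconstruction formulas, but the forward problem it addresses is easy to illustrate. Below is a minimal toy Monte Carlo in Python showing how initial volume fluctuations and a binomial detector efficiency distort the cumulants of an otherwise Poissonian multiplicity; all parameter values (source number, particles per source, efficiency) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model (illustrative assumptions): each event has a
# fluctuating number of particle sources ("volume"), each source emits
# Poisson particles, and the detector keeps each particle with
# probability eps (binomial efficiency).
n_events = 200_000
mean_sources = 20      # mean number of sources per event
lam = 1.5              # mean particles per source
eps = 0.65             # detection efficiency

sources = rng.poisson(mean_sources, n_events)     # volume fluctuations
produced = rng.poisson(lam * sources)             # true multiplicity
measured = rng.binomial(produced, eps)            # after efficiency loss

# Fixed-volume reference: same mean multiplicity, no volume fluctuations.
reference = rng.binomial(rng.poisson(lam * mean_sources, n_events), eps)

def cumulants(x):
    m = x.mean()
    return m, x.var(), ((x - m) ** 3).mean()      # C1, C2, C3

for label, data in [("fixed volume", reference), ("fluctuating volume", measured)]:
    c1, c2, c3 = cumulants(data)
    print(f"{label:>18}: C2/C1 = {c2 / c1:.3f}   C3/C2 = {c3 / c2:.3f}")
```

In the fixed-volume baseline both ratios stay at the Poisson value of 1, while the fluctuating volume inflates them; it is this kind of distortion, together with the efficiency loss, that the proposed reconstruction aims to undo.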



Related research

The appearance of large, non-Gaussian cumulants of the baryon number distribution is commonly discussed as a signal for the QCD critical point. We review the status of the Taylor expansion of cumulant ratios of baryon number fluctuations along the freeze-out line and also compare QCD results with the corresponding proton number fluctuations as measured by the STAR Collaboration at RHIC. To further constrain the location of a possible QCD critical point, we discuss poles of the baryon number fluctuations in the complex chemical potential plane. Here we use not only the Taylor coefficients obtained at zero chemical potential but also calculations of Taylor expansion coefficients of the pressure at purely imaginary chemical potentials.
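The pole analysis sketched in this abstract amounts to locating singularities of a function known only through finitely many Taylor coefficients. A standard tool for that is the Padé approximant, whose denominator roots estimate the poles. The sketch below demonstrates the idea on a simple test function with a known pole; the function and the orders are illustrative choices, not the lattice QCD data.

```python
import numpy as np
from math import factorial

def pade_poles(c, L, M):
    """Estimate singularities of a function from its Taylor coefficients
    c[k] via the [L/M] Pade approximant: the poles of the approximant are
    the roots of its denominator Q(x) = 1 + q_1 x + ... + q_M x^M."""
    # Linear system: sum_{j=1..M} q_j c[k-j] = -c[k] for k = L+1 .. L+M.
    A = np.array([[c[k - j] for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    b = -np.asarray(c[L + 1:L + M + 1])
    q = np.linalg.solve(A, b)
    # np.roots expects the highest-degree coefficient first.
    return np.roots(np.concatenate(([1.0], q))[::-1])

# Test function f(x) = exp(x)/(2 - x): a single true pole at x = 2.
K = 12
a = np.array([1.0 / factorial(j) for j in range(K)])      # exp(x) series
b = np.array([2.0 ** -(k + 1) for k in range(K)])         # 1/(2 - x) series
c = np.array([np.dot(a[:k + 1], b[k::-1]) for k in range(K)])  # Cauchy product

print(pade_poles(c, 3, 4))   # one root should sit close to the true pole at x = 2
```

The same machinery, applied to Taylor coefficients measured at zero or imaginary chemical potential, yields estimates for the nearest singularity of the QCD pressure.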
Formulae for the correlators K2 and K3 of a given particle observable (e.g. energy or transverse momentum) that account for the track reconstruction efficiency are presented. As in the case of an ideal detector, the correlators can be expressed through the event-by-event fluctuations of the single-event mean of the observable and through its pseudo-central moments. In contrast to the ideal case, however, this splitting does not allow for a substantial reduction of the computation time.
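The exact efficiency-corrected estimators are given in the paper itself. As an illustration of the general approach, the sketch below computes a two-particle transverse-momentum correlator of the K2 type with per-track 1/efficiency weights, a common weighting scheme; the toy event model and the uniform efficiency are assumptions, and the paper's estimator may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def k2_weighted(events, eps):
    """Two-particle pT correlator with per-track 1/efficiency weights.
    Illustrative estimator, not necessarily the paper's exact formula."""
    all_pt = np.concatenate(events)
    mean_pt = all_pt.mean()                      # inclusive mean pT

    num, den = 0.0, 0.0
    for pts in events:
        w = np.full(pts.size, 1.0 / eps)         # efficiency weights
        d = pts - mean_pt
        # sum over ordered pairs i != j, via squared sums minus diagonal
        num += (w @ d) ** 2 - (w ** 2) @ (d ** 2)
        den += w.sum() ** 2 - (w ** 2).sum()
    return num / den

# Toy events: Poisson multiplicity, Gamma-distributed pT, 70% tracking efficiency.
events = []
for _ in range(20_000):
    n = rng.poisson(15)
    pt = rng.gamma(2.0, 0.3, n)                  # "true" tracks
    events.append(pt[rng.random(n) < 0.7])       # binomial tracking loss
print(k2_weighted(events, eps=0.7))              # ~0 for independent tracks
```

Because the toy tracks are uncorrelated, the corrected K2 should be consistent with zero; any genuine correlation would shift it away from that baseline.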
We derive formulas for the efficiency correction of cumulants with many efficiency bins. The derivation is simpler than in the previously suggested method, and the numerical cost is drastically reduced compared with the naive method. From analytical and numerical analyses in simple toy models, we show that using an averaged efficiency in the efficiency correction can lead to wrong corrected values, with deviations that grow for higher-order cumulants. These analyses show the importance of carrying out the efficiency correction without taking the average.
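The bias from a single averaged efficiency can be reproduced in a few lines. The toy below assumes correlated pair production feeding two bins with very different efficiencies and compares a bin-by-bin factorial-moment correction against an averaged-efficiency correction for the second cumulant; the event model and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (illustrative): each event emits M ~ Poisson(lam) correlated
# pairs, one particle into efficiency bin 1 and one into bin 2.
lam, eps1, eps2 = 10.0, 0.9, 0.3
n_events = 500_000
M = rng.poisson(lam, n_events)
n1 = rng.binomial(M, eps1)           # measured counts, bin 1
n2 = rng.binomial(M, eps2)           # measured counts, bin 2
n = n1 + n2

# (a) Bin-by-bin correction via factorial moments: <n_i(n_i-1)> = eps_i^2 <N_i(N_i-1)>,
#     <n1 n2> = eps1*eps2*<N1 N2>, so the corrected moments of N = N1 + N2 are:
F1 = n1.mean() / eps1 + n2.mean() / eps2
F2 = ((n1 * (n1 - 1)).mean() / eps1**2
      + (n2 * (n2 - 1)).mean() / eps2**2
      + 2 * (n1 * n2).mean() / (eps1 * eps2))
C2_binwise = F2 + F1 - F1**2

# (b) One averaged efficiency applied to the summed count
eps_avg = (eps1 + eps2) / 2
F1a = n.mean() / eps_avg
F2a = (n * (n - 1)).mean() / eps_avg**2
C2_avg = F2a + F1a - F1a**2

print(f"true C2 = {4 * lam}, bin-by-bin = {C2_binwise:.2f}, averaged = {C2_avg:.2f}")
```

With these numbers the bin-by-bin correction recovers the true C2 = 40, while the averaged efficiency lands near 35; as the abstract notes, the discrepancy grows for higher-order cumulants.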
Bayesian modeling techniques enable sensitivity analyses that incorporate detailed expectations regarding future experiments. A model-based approach also allows one to evaluate inferences and predicted outcomes by calibrating (or measuring) the consequences incurred when certain results are reported. We present procedures for calibrating predictions of an experiment's sensitivity to both continuous and discrete parameters. Using these procedures and a new Bayesian model of the $\beta$-decay spectrum, we assess a high-precision $\beta$-decay experiment's sensitivity to the neutrino mass scale and ordering for one assumed design scenario. We find that such an experiment could measure the electron-weighted neutrino mass within $\sim 40$ meV after 1 year (90% credibility). Neutrino masses $>500$ meV could be measured within $\approx 5$ meV. Using only $\beta$-decay and external reactor neutrino data, we find that next-generation $\beta$-decay experiments could potentially constrain the mass ordering using a two-neutrino spectral model analysis. By calibrating mass-ordering results, we identify reporting criteria that can be tuned to suppress false ordering claims. In some cases, a two-neutrino analysis can reveal that the mass ordering is inverted, a result unobtainable with the traditional one-neutrino analysis approach.
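The calibration idea, choosing reporting criteria by simulating the consequences of a reporting rule, can be illustrated independently of the $\beta$-decay spectral model. The toy below assumes the analysis reduces to a single Gaussian summary statistic whose mean depends on the true ordering, and tunes a posterior-odds threshold so that false "inverted" claims are rare; the means, width, and threshold grid are hypothetical stand-ins for the paper's far more detailed machinery.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical summary statistic x: Gaussian with ordering-dependent mean.
mu_no, mu_io, sigma = 0.0, 1.0, 0.8
n_sim = 200_000

def log_odds_io(x):
    # Log posterior odds for inverted vs normal ordering, equal priors.
    return (-(x - mu_io) ** 2 + (x - mu_no) ** 2) / (2 * sigma**2)

x_if_no = rng.normal(mu_no, sigma, n_sim)    # pseudo-experiments, NO true
x_if_io = rng.normal(mu_io, sigma, n_sim)    # pseudo-experiments, IO true

# Calibration: pick the smallest odds threshold whose false-claim rate
# (reporting "inverted" when the ordering is really normal) is below 1%.
for thr in np.linspace(0.0, 5.0, 51):
    false_rate = np.mean(log_odds_io(x_if_no) > thr)
    if false_rate < 0.01:
        power = np.mean(log_odds_io(x_if_io) > thr)
        print(f"threshold={thr:.1f}  false claims={false_rate:.3%}  "
              f"correct claims={power:.3%}")
        break
```

The same loop, run over full posterior analyses instead of a one-number statistic, is what turns a sensitivity forecast into a calibrated reporting criterion.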
We propose a method to remove the contributions of pileup events from higher-order cumulants and moments of event-by-event particle distributions. Assuming that pileup events are given by the superposition of two independent single-collision events, we show that the true moments in each multiplicity bin can be obtained recursively from lower-multiplicity events. The only external input required by the correction procedure is the probability of pileup; all other terms are extracted from the experimental data. We demonstrate in simple models that the true cumulants can be reconstructed successfully by this method. Systematic effects from trigger inefficiencies and correction parameters are discussed.
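A simplified version of the superposition assumption can be inverted directly at the level of the multiplicity distribution (the paper works with moments in each multiplicity bin, which is more general). Assuming a known pileup probability alpha and pileup events formed by summing two independent single collisions, the measured distribution is a self-convolution mixture that can be unfolded recursively, as sketched below with an assumed Poisson truth.

```python
import numpy as np

def deconvolve_pileup(p_meas, alpha):
    """Recursively recover the single-collision multiplicity distribution
    from a measured one containing a fraction alpha of pileup events,
    each modeled as the sum of two independent single collisions."""
    p = np.zeros_like(p_meas)
    # N = 0 bin: alpha*p0^2 + (1 - alpha)*p0 = p_meas[0]; take the positive root.
    p[0] = (-(1 - alpha) + np.sqrt((1 - alpha) ** 2 + 4 * alpha * p_meas[0])) / (2 * alpha)
    for N in range(1, len(p)):
        # Pileup contribution from already-solved lower bins:
        # sum_{m=1}^{N-1} p[m] * p[N-m]
        conv = np.dot(p[1:N], p[N - 1:0:-1])
        p[N] = (p_meas[N] - alpha * conv) / (1 - alpha + 2 * alpha * p[0])
    return p

# Toy check: Poisson single collisions with 5% pileup.
lam, alpha, nmax = 8.0, 0.05, 60
p_true = np.empty(nmax)
p_true[0] = np.exp(-lam)
for i in range(1, nmax):
    p_true[i] = p_true[i - 1] * lam / i          # Poisson pmf recursion
p_meas = (1 - alpha) * p_true + alpha * np.convolve(p_true, p_true)[:nmax]
p_rec = deconvolve_pileup(p_meas, alpha)
print(np.max(np.abs(p_rec - p_true)))            # tiny: truth recovered
```

The recursion works bin by bin from low to high multiplicity, which mirrors the abstract's statement that true moments are obtained recursively from lower-multiplicity events.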