
Tentative sensitivity of future $0\nu\beta\beta$-decay experiments to neutrino masses and Majorana CP phases

 Added by Shun Zhou
 Publication date 2020
 Language: English





In the near future, neutrinoless double-beta ($0\nu\beta\beta$) decay experiments will hopefully reach a sensitivity of a few ${\rm meV}$ to the effective neutrino mass $|m^{}_{\beta\beta}|$. In this paper, we tentatively examine the sensitivity of future $0\nu\beta\beta$-decay experiments to neutrino masses and Majorana CP phases by following the Bayesian statistical approach. Provided experimental setups corresponding to the sensitivity of $|m^{}_{\beta\beta}| \simeq 1~{\rm meV}$, the null observation of $0\nu\beta\beta$ decays in the case of normal neutrino mass ordering leads to a very competitive bound on the lightest neutrino mass $m^{}_1$. Namely, the $95\%$ credible interval turns out to be $1.6~{\rm meV} \lesssim m^{}_1 \lesssim 7.3~{\rm meV}$ or $0.3~{\rm meV} \lesssim m^{}_1 \lesssim 5.6~{\rm meV}$ when the uniform prior on $m^{}_1/{\rm eV}$ or on $\log^{}_{10}(m^{}_1/{\rm eV})$ is adopted. Moreover, one of the two Majorana CP phases is strictly constrained, i.e., $140^\circ \lesssim \rho \lesssim 220^\circ$ for both priors of $m^{}_1$. In contrast, if a relatively worse sensitivity of $|m^{}_{\beta\beta}| \simeq 10~{\rm meV}$ is assumed, the constraint becomes accordingly $0.6~{\rm meV} \lesssim m^{}_1 \lesssim 26~{\rm meV}$ or $0 \lesssim m^{}_1 \lesssim 6.1~{\rm meV}$, while the two Majorana CP phases will be essentially unconstrained. In the same statistical framework, the prospects for determining the neutrino mass ordering and discriminating between the Majorana and Dirac nature of massive neutrinos in $0\nu\beta\beta$-decay experiments are also discussed. Given the experimental sensitivity of $|m^{}_{\beta\beta}| \simeq 10~{\rm meV}$ (or $1~{\rm meV}$), the strength of evidence to exclude the Majorana nature under the null observation of $0\nu\beta\beta$ decays is found to be inconclusive (or strong), no matter which of the two priors on $m^{}_1$ is taken.
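The constraint on the phase $\rho$ can be understood from how $|m_{\beta\beta}|$ depends on the Majorana phases. As a minimal sketch (not the paper's code), the snippet below evaluates $|m_{\beta\beta}| = |m_1 c_{12}^2 c_{13}^2 + m_2 s_{12}^2 c_{13}^2 e^{i\rho} + m_3 s_{13}^2 e^{i\sigma}|$ for normal ordering in one common parametrization; the oscillation parameters are assumed illustrative values, not taken from the abstract.

```python
import numpy as np

# Assumed illustrative oscillation parameters (normal ordering),
# roughly NuFIT-like; not quoted from the paper.
S12SQ = 0.307          # sin^2(theta_12)
S13SQ = 0.022          # sin^2(theta_13)
DM21SQ = 7.4e-5        # Delta m^2_21 in eV^2
DM31SQ = 2.5e-3        # Delta m^2_31 in eV^2

def m_betabeta(m1, rho_deg, sigma_deg):
    """Effective Majorana mass |m_bb| in eV for normal ordering,
    given the lightest mass m1 (eV) and Majorana phases (degrees)."""
    m2 = np.sqrt(m1**2 + DM21SQ)
    m3 = np.sqrt(m1**2 + DM31SQ)
    c12sq, c13sq = 1.0 - S12SQ, 1.0 - S13SQ
    rho, sigma = np.deg2rad(rho_deg), np.deg2rad(sigma_deg)
    return abs(m1 * c12sq * c13sq
               + m2 * S12SQ * c13sq * np.exp(1j * rho)
               + m3 * S13SQ * np.exp(1j * sigma))

# Near rho ~ 180 deg the first two terms largely cancel, pushing
# |m_bb| below ~1 meV; this is the phase region a null observation
# at meV sensitivity favors.
print(m_betabeta(0.004, 180.0, 0.0))
print(m_betabeta(0.004, 0.0, 0.0))
```

For $m_1$ of a few meV, $|m_{\beta\beta}|$ drops well below the first evaluation only near $\rho \simeq 180^\circ$, which is why a null result at $\simeq 1~{\rm meV}$ sensitivity singles out that phase region.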





Past and current direct neutrino mass experiments set limits on the so-called effective neutrino mass, which is an incoherent sum of neutrino masses and lepton mixing matrix elements. An electron energy spectrum that neglects relativistic and nuclear recoil effects is often assumed. Alternative definitions of effective masses exist, and an exact relativistic spectrum is calculable. We quantitatively compare the validity of these different approximations as a function of energy resolution and exposure in view of tritium beta decays in the KATRIN, Project 8 and PTOLEMY experiments. Furthermore, adopting the Bayesian approach, we present the posterior distributions of the effective neutrino mass by including current experimental information from neutrino oscillations, beta decay, neutrinoless double-beta decay and cosmological observations. Both linear and logarithmic priors for the smallest neutrino mass are assumed.
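The "incoherent sum" above is the usual beta-decay observable $m_\beta = (\sum_i |U_{ei}|^2 m_i^2)^{1/2}$, which, unlike $|m_{\beta\beta}|$, carries no phase dependence. A minimal sketch, again with assumed illustrative oscillation parameters for normal ordering:

```python
import numpy as np

# Assumed illustrative oscillation parameters (normal ordering),
# roughly NuFIT-like; not taken from the abstract.
S12SQ, S13SQ = 0.307, 0.022
DM21SQ, DM31SQ = 7.4e-5, 2.5e-3   # eV^2

def m_beta(m1):
    """Effective electron-neutrino mass in beta decay (eV):
    m_beta = sqrt(sum_i |U_ei|^2 m_i^2), an incoherent sum with
    no dependence on CP phases."""
    c13sq = 1.0 - S13SQ
    return np.sqrt((1.0 - S12SQ) * c13sq * m1**2
                   + S12SQ * c13sq * (m1**2 + DM21SQ)
                   + S13SQ * (m1**2 + DM31SQ))

# Even for a vanishing lightest mass, the mass splittings set a
# floor of order 10 meV on m_beta.
print(m_beta(0.0))
```

Because every term enters with a positive weight, $m_\beta$ cannot be driven to zero by phase cancellations, in contrast to the $0\nu\beta\beta$ observable.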
We propose a new strategy for detecting the CP-violating phases and the effective mass of muon Majorana neutrinos by measuring observables associated with neutrino-antineutrino oscillations in $\pi^{\pm}$ decays. Within the generic framework of quantum field theory, we compute the non-factorizable probability for producing a pair of same-charged muons in $\pi^{\pm}$ decays as a distinctive signature of $\nu_{\mu}-\bar{\nu}_{\mu}$ oscillations. We show that an intense neutrino beam through a long-baseline experiment is favored for probing the Majorana phases. Using the neutrino-antineutrino oscillation probability reported by the MINOS collaboration, a new stringent bound on the effective muon-neutrino mass is derived.
Bayesian modeling techniques enable sensitivity analyses that incorporate detailed expectations regarding future experiments. A model-based approach also allows one to evaluate inferences and predicted outcomes, by calibrating (or measuring) the consequences incurred when certain results are reported. We present procedures for calibrating predictions of an experiment's sensitivity to both continuous and discrete parameters. Using these procedures and a new Bayesian model of the $\beta$-decay spectrum, we assess a high-precision $\beta$-decay experiment's sensitivity to the neutrino mass scale and ordering, for one assumed design scenario. We find that such an experiment could measure the electron-weighted neutrino mass within $\sim 40\,$meV after 1 year ($90\%$ credibility). Neutrino masses $>500\,$meV could be measured within $\approx 5\,$meV. Using only $\beta$-decay and external reactor neutrino data, we find that next-generation $\beta$-decay experiments could potentially constrain the mass ordering using a two-neutrino spectral model analysis. By calibrating mass ordering results, we identify reporting criteria that can be tuned to suppress false ordering claims. In some cases, a two-neutrino analysis can reveal that the mass ordering is inverted, an unobtainable result for the traditional one-neutrino analysis approach.
With large active volume sizes, dark matter direct detection experiments are sensitive to solar neutrino fluxes. Nuclear recoil signals are induced by $^8$B neutrinos, while electron recoils are mainly generated by the pp flux. Measurements of both processes offer an opportunity to test neutrino properties at low thresholds with fairly low backgrounds. In this paper we study the sensitivity of these experiments to neutrino magnetic dipole moments assuming 1, 10 and 40 tonne active volumes (representative of XENON1T, XENONnT and DARWIN) and 0.3 keV and 1 keV thresholds. We show that with nuclear recoil measurements alone a 40 tonne detector could be as competitive as Borexino, TEXONO and GEMMA, with sensitivities of order $8.0\times 10^{-11}\,\mu_B$ at the $90\%$ CL after one year of data taking. Electron recoil measurements will increase sensitivities well below these values, allowing one to test regions not excluded by astrophysical arguments. Using electron recoil data and depending on performance, the same detector will be able to explore values down to $4.0\times 10^{-12}\,\mu_B$ at the $90\%$ CL in one year of data taking. By assuming a 200-tonne liquid xenon detector operating during 10 years, we conclude that sensitivities in this type of detector will be of order $10^{-12}\,\mu_B$. Reducing statistical uncertainties may enable improving sensitivities below these values.
We consider the possibility of several different mechanisms contributing to the $\beta\beta$-decay amplitude in the general case of CP nonconservation: light Majorana neutrino exchange, heavy left-handed (LH) and heavy right-handed (RH) Majorana neutrino exchanges, and lepton charge non-conserving couplings in SUSY theories with R-parity breaking. If the $\beta\beta$-decay is induced by, e.g., two non-interfering mechanisms, one can determine $|\eta_i|^2$ and $|\eta_j|^2$, $\eta_i$ and $\eta_j$ being the two fundamental parameters characterising these mechanisms, from data on the half-lives of two nuclear isotopes. In the case when two interfering mechanisms are responsible for the $\beta\beta$-decay, $|\eta_i|^2$, $|\eta_j|^2$ and the interference term can be uniquely determined, in principle, from data on the half-lives of three nuclei. Given the half-life of one isotope, the positivity conditions $|\eta_i|^2 \geq 0$ and $|\eta_j|^2 \geq 0$ lead to stringent constraints on the half-lives of the other $\beta\beta$-decaying isotopes. These conditions, as well as the conditions for constructive (destructive) interference, are derived and their implications are analysed in two specific cases. The method considered by us can be generalised to the case of more than two $\beta\beta$-decay mechanisms. It allows one to treat the cases of CP conserving and CP nonconserving couplings generating the $\beta\beta$-decay in a unified way.
