
Improved Lower Bounds on Partial Lifetimes for Nucleon Decay Modes

 Added by Robert Shrock
 Publication date 2019
Language: English





In the framework of a baryon-number-violating effective Lagrangian, we calculate improved lower bounds on partial lifetimes for proton and bound neutron decays, including $p \to \ell^+ \ell'^+ \ell'^-$, $n \to \bar\nu \ell^+ \ell^-$, $p \to \ell^+ \nu \bar\nu$, and $n \to \bar\nu \bar\nu \nu$, where $\ell$ and $\ell'$ denote $e$ or $\mu$, with both $\ell = \ell'$ and $\ell \ne \ell'$ cases. Our lower bounds are substantially stronger than the corresponding lower bounds from direct experimental searches. We also present lower bounds on $(\tau/B)_{p \to \ell^+ \gamma}$, $(\tau/B)_{n \to \bar\nu \gamma}$, $(\tau/B)_{p \to \ell^+ \gamma\gamma}$, and $(\tau/B)_{n \to \bar\nu \gamma\gamma}$. Our method relies on relating the rates for these decay modes to the rates for decay modes of the form $p \to \ell^+ M$ and $n \to \bar\nu M$, where $M$ is a pseudoscalar or vector meson, and then using the experimental lower bounds on the partial lifetimes for these latter decays.
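The bound-transfer logic described above can be sketched schematically. Here $\kappa$ is a placeholder for the ratio of widths fixed by the effective-Lagrangian matrix elements (an illustrative symbol, not a value from the paper):

```latex
% Schematic sketch, with \kappa a hypothetical width ratio:
% the effective Lagrangian relates the widths of the two modes,
\Gamma_{p \to \ell^+ \ell^+ \ell^-} \;=\; \kappa\, \Gamma_{p \to \ell^+ M} ,
% so an experimental partial-lifetime bound on the two-body mode,
(\tau/B)_{p \to \ell^+ M} \;\geq\; T_{\rm exp} ,
% transfers to the trilepton mode as
(\tau/B)_{p \to \ell^+ \ell^+ \ell^-} \;\geq\; T_{\rm exp}/\kappa .
```

Since $\tau/B = 1/\Gamma$ for each mode, the bound $\Gamma_{p \to \ell^+ M} \leq 1/T_{\rm exp}$ immediately caps the trilepton width at $\kappa/T_{\rm exp}$.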



Related research


The problem of scheduling unrelated machines by a truthful mechanism to minimize the makespan was introduced in the seminal Algorithmic Mechanism Design paper by Nisan and Ronen. Nisan and Ronen showed that there is a truthful mechanism that provides an approximation ratio of $\min(m,n)$, where $n$ is the number of machines and $m$ is the number of jobs. They also proved that no truthful mechanism can provide an approximation ratio better than $2$. Since then, the lower bound was improved to $1+\sqrt{2} \approx 2.41$ by Christodoulou, Koutsoupias, and Vidali, and then to $1+\phi \approx 2.618$ by Koutsoupias and Vidali. Very recently, the lower bound was improved to $2.755$ by Giannakopoulos, Hammerl, and Poças. In this paper we further improve the bound to $3-\delta$, for every constant $\delta>0$. Note that a gap between the upper bound and the lower bounds exists even when the number of machines and jobs is very small. In particular, the known $1+\sqrt{2}$ lower bound requires at least $3$ machines and $5$ jobs. In contrast, we show a lower bound of $2.2055$ that uses only $3$ machines and $3$ jobs, and a lower bound of $1+\sqrt{2}$ that uses only $3$ machines and $4$ jobs. For the case of two machines and two jobs we show a lower bound of $2$. Similar bounds for two machines and two jobs were known before, but only via complex proofs that characterized all truthful mechanisms that provide a finite approximation ratio in this setting, whereas our new proof uses a simple and direct approach.
Ray Li, Mary Wootters (2021)
Batch codes are a useful notion of locality for error correcting codes, originally introduced in the context of distributed storage and cryptography. Many constructions of batch codes have been given, but few lower bound (limitation) results are known, leaving gaps between the best known constructions and best known lower bounds. Towards determining the optimal redundancy of batch codes, we prove a new lower bound on the redundancy of batch codes. Specifically, we study (primitive, multiset) linear batch codes that systematically encode $n$ information symbols into $N$ codeword symbols, with the requirement that any multiset of $k$ symbol requests can be obtained in disjoint ways. We show that such batch codes need $\Omega(\sqrt{Nk})$ symbols of redundancy, improving on the previous best lower bounds of $\Omega(\sqrt{N}+k)$ at all $k=n^\varepsilon$ with $\varepsilon \in (0,1)$. Our proof follows from analyzing the dimension of the order-$O(k)$ tensor of the batch code's dual code.
Feebly Interacting Massive Particles (FIMPs) are dark matter candidates that never thermalize in the early universe and whose production takes place via decays and/or scatterings of thermal bath particles. If the FIMPs' interactions with the thermal bath are renormalizable, a scenario known as freeze-in, production is most efficient at temperatures around the mass of the bath particles and insensitive to unknown physics at high temperatures. Working in a model-independent fashion, we consider three different production mechanisms: two-body decays, three-body decays, and binary collisions. We compute the FIMP phase space distribution and matter power spectrum, and we investigate the suppression of cosmological structures at small scales. Our results yield lower bounds on the FIMP mass. Finally, we study how to relax these constraints in scenarios where FIMPs provide a sub-dominant dark matter component.
Artur M. Ankowski (2016)
The hypothesis of the conserved vector current, relating the vector weak and isovector electromagnetic currents, plays a fundamental role in the quantitative description of neutrino interactions. Despite being experimentally confirmed with great precision, it is not fully implemented in existing calculations of the cross section for inverse beta decay, the dominant mechanism of antineutrino scattering at energies below a few tens of MeV. In this article, I estimate the corresponding cross section and its uncertainty, ensuring conservation of the vector current. While converging to previous calculations at energies of several MeV, the obtained result is appreciably lower and predicts more directional positron production near the reaction threshold. These findings suggest that in the current estimate of the flux of geologically produced antineutrinos the 232Th and 238U components may be underestimated by 6.1% and 3.7%, respectively. The proposed search for light sterile neutrinos using a 144Ce–144Pr source is predicted to collect a total event rate 3% lower than previously estimated and to observe a spectral distortion that could be misinterpreted as an oscillation signal. In reactor-antineutrino experiments, together with a re-evaluation of the positron spectra, the predicted event rate should be reduced by 0.9%, diminishing the size of the reported anomaly.
It was conjectured by Černý in 1964 that a synchronizing DFA on $n$ states always has a synchronizing word of length at most $(n-1)^2$, and he gave a sequence of DFAs for which this bound is reached. Until now, a full analysis of all DFAs reaching this bound was only given for $n \leq 5$, and with bounds on the number of symbols for $n \leq 12$. Here we give the full analysis for $n \leq 7$, without bounds on the number of symbols. For PFAs (partial automata) on $\leq 7$ states we do a similar analysis as for DFAs and find the maximal shortest synchronizing word lengths, exceeding $(n-1)^2$ for $n \geq 4$. Where DFAs with long synchronization typically have very few symbols, for PFAs we observe that more symbols may increase the synchronizing word length. For PFAs on $\leq 10$ states and two symbols we investigate all occurring synchronizing word lengths. We give series of PFAs on two and three symbols, reaching the maximal possible length for some small values of $n$. For $n=6,7,8,9$, the construction on two symbols is the unique one reaching the maximal length. For both series the growth is faster than $(n-1)^2$, although still quadratic. Based on string rewriting, for arbitrary size we construct a PFA on three symbols with exponential shortest synchronizing word length, giving significantly better bounds than earlier exponential constructions. We give a transformation of this PFA to a PFA on two symbols keeping exponential shortest synchronizing word length, yielding a better bound than applying a similar known transformation. Both PFAs are transitive. Finally, we show that exponential lengths are even possible with just one single undefined transition, again with transitive constructions.
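As a concrete illustration of the quantity discussed in this abstract (a sketch, not code from the paper), the shortest synchronizing word can be found by breadth-first search over subsets of states; run on the classical Černý automaton, it reproduces the $(n-1)^2$ length:

```python
from collections import deque

def shortest_sync_length(transitions, n):
    """Length of the shortest synchronizing word of a DFA, found by BFS
    over the subset automaton; returns None if no such word exists."""
    start = frozenset(range(n))
    if len(start) == 1:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        subset, dist = queue.popleft()
        for letter in transitions:
            image = frozenset(letter[q] for q in subset)
            if len(image) == 1:      # all states merged: word found
                return dist + 1
            if image not in seen:
                seen.add(image)
                queue.append((image, dist + 1))
    return None

def cerny(n):
    """Cerny automaton C_n: letter a fixes every state except 0 -> 1;
    letter b is the cyclic shift i -> i+1 (mod n)."""
    a = {i: (1 if i == 0 else i) for i in range(n)}
    b = {i: (i + 1) % n for i in range(n)}
    return [a, b]

print(shortest_sync_length(cerny(4), 4))  # (4-1)^2 = 9
```

The subset-BFS runs in $O(2^n)$ time, which is why exhaustive analyses of the kind described above are feasible only for small $n$.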
