
How can we test seesaw experimentally?

Added by Matthew Buckley
Publication date: 2006
Language: English





The seesaw mechanism for the small neutrino mass has been a popular paradigm, yet it has been believed that there is no way to test it experimentally. We present a conceivable outcome from future experiments that would convince us of the seesaw mechanism. It would involve a variety of data from LHC, ILC, cosmology, underground, and low-energy flavor violation experiments to establish the case.
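The suppression the abstract refers to can be made concrete with the standard type-I seesaw mass relation (textbook form, not taken from this abstract): a heavy right-handed Majorana scale $M_R$ drives the light neutrino mass down.

```latex
% Type-I seesaw: the light neutrino mass is suppressed by the heavy
% right-handed Majorana scale M_R; m_D is the electroweak-scale Dirac mass.
\[
  m_\nu \simeq \frac{m_D^2}{M_R}
  \qquad\text{e.g.}\qquad
  \frac{(100\,\mathrm{GeV})^2}{10^{14}\,\mathrm{GeV}} = 0.1\,\mathrm{eV}.
\]
```

Because $M_R$ sits far above collider energies, it can only be probed indirectly, which is why a combined multi-experiment strategy is needed to establish the mechanism.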

Related research


The problem of estimating the effect of missing higher orders in perturbation theory is analyzed with emphasis on the application to Higgs production in gluon-gluon fusion. Well-known mathematical methods for an approximated completion of the perturbative series are applied with the goal of not truncating the series, but completing it in a well-defined way, so as to increase the accuracy - if not the precision - of theoretical predictions. The uncertainty arising from the use of the completion procedure is discussed and a recipe for constructing a corresponding probability distribution function is proposed.
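The idea of "completing" rather than truncating a series can be illustrated with a toy geometric-tail estimate. This is a generic sketch of the concept, not the paper's actual procedure or its Higgs-production application; the function name and the geometric-tail assumption are mine.

```python
# Toy illustration only -- NOT the paper's procedure. It shows the generic
# idea of completing a truncated perturbative series: if the known
# coefficients behave roughly geometrically, the missing tail can be summed
# in closed form instead of being dropped.

def complete_series(coeffs, alpha):
    """Sum the known terms of sum_n c_n * alpha^n, then estimate the tail
    assuming the last observed coefficient ratio persists (geometric tail)."""
    known = sum(c * alpha**n for n, c in enumerate(coeffs))
    r = coeffs[-1] / coeffs[-2]      # last observed coefficient growth ratio
    x = r * alpha                    # effective expansion parameter of the tail
    if abs(x) >= 1.0:
        raise ValueError("geometric tail does not converge")
    # tail = c_N * alpha^N * (x + x^2 + ...) = c_N * alpha^N * x / (1 - x)
    N = len(coeffs) - 1
    tail = coeffs[-1] * alpha**N * x / (1.0 - x)
    return known, known + tail

# Example: a truly geometric series 1 + x + x^2 + ... with x = 0.2,
# truncated after three terms; the completion recovers 1/(1 - 0.2) = 1.25.
truncated, completed = complete_series([1.0, 1.0, 1.0], 0.2)
print(truncated, completed)
```

In this idealized case the completion is exact; the paper's point is precisely that the spread among such completion assumptions can be turned into a probability distribution for the missing higher orders.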
How far can we use multi-wavelength cross-identifications to deconvolve far-infrared images? In this short research note I explore a test case of CLEAN deconvolutions of simulated confused 850 micron SCUBA-2 data, and explore the possible scientific applications of combining this data with ostensibly deeper TolTEC Large Scale Structure (LSS) survey 1.1mm-2mm data. I show that the SCUBA-2 data can be reconstructed at the 1.1mm LMT resolution and achieve an 850 micron deconvolved sensitivity of 0.7 mJy RMS, an improvement of at least ~1.5x over naive point source filtered images. The TolTEC/SCUBA-2 combination can constrain cold (<10K) observed-frame colour temperatures, where TolTEC alone cannot.
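For readers unfamiliar with CLEAN, the core loop can be sketched in one dimension. This is a minimal Hogbom-style illustration of the algorithm, not the deconvolution pipeline used in the note; the beam, source positions, and parameters are invented for the example.

```python
import numpy as np

# Minimal 1-D Hogbom-style CLEAN sketch (illustrative only): iteratively
# subtract scaled, shifted copies of the instrument beam (PSF) at the
# brightest residual pixel, accumulating a point-source model.

def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    center = len(psf) // 2               # assume the PSF peaks at its center
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        model[peak] += flux
        # subtract the shifted, scaled PSF from the residual (clipped at edges)
        lo = max(0, peak - center)
        hi = min(len(residual), peak + len(psf) - center)
        residual[lo:hi] -= flux * psf[lo - peak + center : hi - peak + center]
    return model, residual

# Toy example: two point sources convolved with a Gaussian beam.
n = 65
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
true_model = np.zeros(n)
true_model[20], true_model[45] = 1.0, 0.5
dirty = np.convolve(true_model, psf, mode="same")
model, residual = hogbom_clean(dirty, psf)
print(model[20], model[45])  # recovered fluxes near 1.0 and 0.5
```

The cross-identification idea in the note goes further: known source positions from other wavelengths constrain where CLEAN components may be placed, which is what makes deconvolution of heavily confused maps tractable.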
N. Haba, C. Hattori, M. Matsuda (1994)
In a certain type of Calabi-Yau superstring models it is clarified that the symmetry breaking occurs by stages at two large intermediate energy scales and that the two large intermediate scales induce large Majorana masses of right-handed neutrinos. The peculiar structure of the effective nonrenormalizable interactions is crucial in the models. In this scheme Majorana masses possibly amount to $O(10^{9 \sim 10})$ GeV and the see-saw mechanism is at work for neutrinos. Based on this scheme we propose a viable model which explains the smallness of the masses for the three kinds of neutrinos $\nu_e$, $\nu_\mu$ and $\nu_\tau$. Special forms of the nonrenormalizable interactions can be understood as a consequence of an appropriate discrete symmetry of the compactified manifold.
Using the perturbative QCD amplitudes for $B \to \pi\pi$ and $B \to K\pi$, we have performed an extensive study of the parameter space where the theoretical predictions for the branching ratios are consistent with recent experimental data. From this allowed range of parameter space, we predict the mixing-induced CP asymmetry for $B \to \pi^+\pi^-$ with about 11% uncertainty and the other CP asymmetries for $B \to \pi\pi$ and $B \to K\pi$ with 40%-47% uncertainty. These errors are expected to be reduced as we restrict the parameter space by studying other decay modes and by further improvements in the experimental data.
Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession." These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
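The prompt-ensembling idea can be sketched in a few lines. This is an illustrative stand-in, not the released LPAQA code: `lm_fill_blank` is a hypothetical scorer with hard-coded toy probabilities in place of a real masked language model.

```python
from collections import defaultdict

def lm_fill_blank(prompt):
    """Hypothetical LM scorer: returns {candidate: probability}.
    A real implementation would query a masked language model."""
    fake_scores = {
        "Obama is a _ by profession.": {"politician": 0.40, "lawyer": 0.35, "writer": 0.25},
        "Obama worked as a _.":        {"politician": 0.35, "lawyer": 0.45, "writer": 0.20},
        "Obama's profession is _.":    {"politician": 0.45, "lawyer": 0.30, "writer": 0.25},
    }
    return fake_scores[prompt]

def ensemble_prompts(prompts, weights=None):
    """Weighted average of candidate scores across paraphrased prompts;
    a fact missed by one prompt can still be retrieved by the ensemble."""
    weights = weights or [1.0 / len(prompts)] * len(prompts)
    combined = defaultdict(float)
    for w, p in zip(weights, prompts):
        for answer, score in lm_fill_blank(p).items():
            combined[answer] += w * score
    best = max(combined, key=combined.get)
    return best, dict(combined)

prompts = [
    "Obama is a _ by profession.",
    "Obama worked as a _.",
    "Obama's profession is _.",
]
best, scores = ensemble_prompts(prompts)
print(best)  # "politician" wins once all three prompts vote
```

Here the second prompt alone would answer "lawyer"; averaging over paraphrases recovers the consensus answer, which is the intuition behind treating any single prompt as only a lower bound on what the LM knows.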
