In a certain type of Calabi-Yau superstring model it is shown that the symmetry breaking occurs in stages at two large intermediate energy scales, and that these two intermediate scales induce large Majorana masses for the right-handed neutrinos. The peculiar structure of the effective nonrenormalizable interactions is crucial in these models. In this scheme the Majorana masses can amount to $O(10^{9\sim 10})$ GeV and the see-saw mechanism is at work for neutrinos. Based on this scheme we propose a viable model which explains the smallness of the masses of the three kinds of neutrinos $\nu_e$, $\nu_\mu$ and $\nu_\tau$. The special forms of the nonrenormalizable interactions can be understood as a consequence of an appropriate discrete symmetry of the compactified manifold.
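As a worked illustration of the suppression invoked here (the numbers below are representative orders of magnitude, not taken from the model itself), the see-saw mechanism follows from the standard mass relation:

```latex
% Standard see-saw relation: a Dirac mass m_D together with a large
% Majorana mass M_R for the right-handed neutrino yields a light
% eigenvalue suppressed by M_R:
\begin{equation}
  m_\nu \simeq \frac{m_D^2}{M_R}.
\end{equation}
% For instance, m_D \sim 1\,{\rm GeV} and M_R \sim 10^{9}\,{\rm GeV}
% give m_\nu \sim 10^{-9}\,{\rm GeV} \sim 1\,{\rm eV}.
```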
The seesaw mechanism for the small neutrino mass has been a popular paradigm, yet it has long been believed that there is no way to test it experimentally. We present a conceivable outcome from future experiments that would convince us of the validity of the seesaw mechanism. Establishing the case would involve combining data from the LHC, the ILC, cosmology, underground detectors, and low-energy flavor-violation experiments.
The problem of estimating the effect of missing higher orders in perturbation theory is analyzed with emphasis on its application to Higgs production in gluon-gluon fusion. Well-known mathematical methods for the approximate completion of a perturbative series are applied, with the goal of not truncating the series but completing it in a well-defined way, so as to increase the accuracy, if not the precision, of theoretical predictions. The uncertainty arising from the use of the completion procedure is discussed, and a recipe for constructing a corresponding probability distribution function is proposed.
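One well-known completion method of the kind alluded to above is the Padé approximant, which replaces a truncated polynomial by a rational function matching the same coefficients. The sketch below (a generic [1/1] Padé, not necessarily the procedure used in this paper) shows how three known coefficients can "complete" a series:

```python
# Illustrative sketch: completing a truncated series with a [1/1] Pade
# approximant f(x) = (a0 + a1*x) / (1 + b1*x), matched to the first
# three series coefficients c0, c1, c2 (assumes c1 != 0).
def pade_11(c0, c1, c2):
    b1 = -c2 / c1          # fixes the x^2 coefficient of the expansion
    a0 = c0                # fixes the constant term
    a1 = c1 + c0 * b1      # fixes the x^1 coefficient
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Toy check: the geometric series 1 + x + x^2 + ... sums to 1/(1-x);
# its [1/1] Pade reproduces the full sum from just three coefficients.
f = pade_11(1.0, 1.0, 1.0)
x = 0.3
print(f(x))           # ~1.42857, equal to 1/(1 - 0.3)
print(1 + x + x**2)   # 1.39, the truncated estimate, is less accurate
```

The point of the toy check is that the completed value agrees with the resummed series, while the plain truncation falls short; the paper's recipe additionally attaches a probability distribution to the completion uncertainty.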
In this paper, we consider the problem of learning models with a latent factor structure. The focus is to find what is possible and what is impossible if the usual strong factor condition is not imposed. We study the minimax rate and adaptivity issues in two problems: pure factor models and panel regression with interactive fixed effects. For pure factor models, if the number of factors is known, we develop adaptive estimation and inference procedures that attain the minimax rate. However, when the number of factors is not specified a priori, we show that there is a tradeoff between validity and efficiency: any confidence interval that has uniform validity for arbitrary factor strength has to be conservative; in particular its width is bounded away from zero even when the factors are strong. Conversely, any data-driven confidence interval that does not require as an input the exact number of factors (including weak ones) and has shrinking width under strong factors does not have uniform coverage, and its worst-case coverage probability is at most 1/2. For panel regressions with interactive fixed effects, the tradeoff is much less severe. We find that the minimax rate for learning the regression coefficient does not depend on the factor strength and propose a simple estimator that achieves this rate. However, when weak factors are allowed, uncertainty in the number of factors can cause a great loss of efficiency even though the rate is not affected. In most cases, we find that the strong factor condition (and/or exact knowledge of the number of factors) improves efficiency, but this condition needs to be imposed by faith and cannot be verified from the data for inference purposes.
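The role of the strong factor condition can be seen in a toy simulation: under strong factors, the leading eigenvalues of the sample second-moment matrix separate sharply from the noise, which is what generic eigenvalue-ratio criteria for the number of factors exploit. The sketch below is a hypothetical illustration of that intuition, not the estimator proposed in the paper:

```python
# Toy pure factor model X = L F' + E with strong (O(1)) loadings.
# Under strong factors, the top k eigenvalues of X X' / (n p) are O(1)
# while the noise eigenvalues are small, so a simple eigenvalue-ratio
# criterion recovers the true number of factors k.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 100, 2                  # panel dimensions, true factor count

L = rng.standard_normal((n, k))        # strong factor loadings
F = rng.standard_normal((p, k))        # factors
E = rng.standard_normal((n, p))        # idiosyncratic noise
X = L @ F.T + E

# eigvalsh returns ascending eigenvalues; reverse to descending order.
eigvals = np.linalg.eigvalsh(X @ X.T / (n * p))[::-1]
ratios = eigvals[:-1] / eigvals[1:]    # largest gap marks signal/noise split
k_hat = int(np.argmax(ratios[:10])) + 1
print(k_hat)                           # recovers 2 for this strong-factor draw
```

With weak factors (loadings shrinking with the sample size), the signal eigenvalues drift into the noise bulk and the gap disappears, which is the source of the validity/efficiency tradeoff described above.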
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession." These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt provides only a lower-bound estimate of the knowledge contained in an LM. In this paper, we attempt to estimate the knowledge contained in LMs more accurately by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
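A minimal sketch of the ensembling step mentioned above: average the answer distributions produced by several prompts for the same query, then take the top-scoring answer. The prompts, candidate answers, and probabilities below are invented purely for illustration:

```python
# Hypothetical prompt-ensembling sketch: combine per-prompt answer
# distributions by a (weighted) average and return the best answer.
from collections import defaultdict

def ensemble_answers(per_prompt_scores, weights=None):
    """per_prompt_scores: list of {answer: probability} dicts, one per prompt.
    Returns the answer with the highest weighted-average probability."""
    if weights is None:
        weights = [1.0 / len(per_prompt_scores)] * len(per_prompt_scores)
    combined = defaultdict(float)
    for w, scores in zip(weights, per_prompt_scores):
        for answer, prob in scores.items():
            combined[answer] += w * prob
    return max(combined, key=combined.get)

# Two prompts querying the same fact disagree; the ensemble resolves it.
p1 = {"politician": 0.45, "lawyer": 0.40}   # e.g. "X is a _ by profession."
p2 = {"politician": 0.30, "lawyer": 0.55}   # e.g. "X worked as a _"
print(ensemble_answers([p1, p2]))           # lawyer (avg 0.475 vs 0.375)
```

Uniform weights are the simplest choice; learned per-prompt weights would let more reliable prompts dominate the combination.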
We study the geometry of the scalar manifolds emerging in the no-scale sector of Kahler moduli and matter fields in generic Calabi-Yau string compactifications, and describe its implications for scalar masses. We consider both heterotic and orientifold models and compare their characteristics. We start from a general formula for the Kahler potential as a function of the topological compactification data and study the structure of the curvature tensor. We then determine the conditions for the space to be symmetric and show that whenever this is the case the heterotic and the orientifold models give the same scalar manifold. We finally study the structure of scalar masses in this type of geometry, assuming that a generic superpotential triggers spontaneous supersymmetry breaking. We show in particular that their behavior crucially depends on the parameters controlling the departure of the geometry from the coset situation. We first investigate the average sGoldstino mass in the hidden sector and its sign, and study the implications for vacuum metastability and the mass of the lightest scalar. We next examine the soft scalar masses in the visible sector and their flavor structure, and study the possibility of realizing a mild form of sequestering that relies on a global symmetry.
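For concreteness, the leading-order Kahler potential of the (1,1)-moduli sector, written as a function of the topological intersection numbers $d_{ijk}$, takes the standard form below (conventions and normalizations vary between the heterotic and orientifold settings):

```latex
% Standard leading-order Kahler potential for the Kahler moduli T^i of a
% Calabi-Yau compactification, with d_{ijk} the intersection numbers:
\begin{equation}
  K = -\ln \mathcal{V}, \qquad
  \mathcal{V} = d_{ijk}\,(T + \bar T)^i (T + \bar T)^j (T + \bar T)^k.
\end{equation}
% The curvature tensor of the resulting scalar manifold is then entirely
% determined by the d_{ijk}, which is why the geometry (and hence the
% scalar mass structure) is fixed by topological compactification data.
```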