With the recent detection of cosmic shear, the most challenging effect of weak gravitational lensing has been observed. The main difficulties for this detection were the need for a large amount of high-quality data and the control of systematics during the gravitational shear measurement process, in particular those coming from Point Spread Function (PSF) anisotropy. In this paper we perform detailed simulations with the state-of-the-art algorithm developed by Kaiser, Squires and Broadhurst (KSB) to measure gravitational shear. We show that for realistic PSF profiles the KSB algorithm can recover any shear amplitude in the range $0.012 < |\vec{\gamma}| < 0.32$ with a relative systematic error of 10-15%. We give quantitative limits on the PSF correction method as a function of shear strength, object size, signal-to-noise ratio and PSF anisotropy amplitude, and we provide an automatic procedure to extract a reliable object catalog for shear measurements from the raw images.
We investigate the accuracy with which the reconnection electric field $E_M$ can be determined from in-situ plasma data. We study the magnetotail electron diffusion region observed by NASA's Magnetospheric Multiscale (MMS) mission on 2017-07-11 at 22:34 UT and focus on the very large errors in $E_M$ that result from errors in an $LMN$ boundary-normal coordinate system. We determine $LMN$ coordinate systems for this MMS event using several different methods and use the resulting $M$ axes to estimate $E_M$. We find some consensus that the reconnection rate was roughly $E_M = 3.2$ mV/m $\pm$ 0.06 mV/m, which corresponds to a normalized reconnection rate of $0.18\pm0.035$. Minimum variance analysis of the electron velocity (MVA-$v_e$), MVA of $E$, minimization of the Faraday residue, and an adjusted version of the maximum directional derivative of the magnetic field (MDD-$B$) technique all produce reasonably similar coordinate axes. We use virtual MMS data from a particle-in-cell simulation of this event to estimate the errors in the coordinate axes and reconnection rate associated with MVA-$v_e$ and MDD-$B$. The $L$ and $M$ directions are most reliably determined by MVA-$v_e$ when the spacecraft observes a clear electron jet reversal. When the magnetic field data have errors as small as 0.5% of the background field strength, the $M$ direction obtained by the MDD-$B$ technique may be off by as much as 35$^\circ$. The normal direction is most accurately obtained by MDD-$B$. Overall, we find that these techniques were able to identify $E_M$ from the virtual data within error bars $\geq$20%.
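Minimum variance analysis, as invoked above for MVA-$v_e$, reduces to an eigen-decomposition of the covariance matrix of a 3-component time series: the eigenvector of largest variance defines $L$, the intermediate one $M$, and the smallest $N$. A minimal sketch (synthetic data; the function name and the toy jet-reversal signal are illustrative, not from the paper):

```python
import numpy as np

def minimum_variance_axes(v):
    """Classic MVA: eigen-decompose the covariance matrix of a
    3-component time series (e.g. electron velocity for MVA-v_e).
    Returns unit vectors sorted by descending variance: (L, M, N)."""
    v = np.asarray(v, dtype=float)          # shape (n_samples, 3)
    cov = np.cov(v, rowvar=False)           # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort to descending variance
    L, M, N = (eigvecs[:, i] for i in order)
    return L, M, N

# Synthetic electron-jet reversal: strong variance along x, weaker along y, z
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 1000)
ve = np.column_stack([
    400 * np.tanh(t / 0.2) + 20 * rng.standard_normal(t.size),  # -> L
    50 * np.sin(5 * t) + 20 * rng.standard_normal(t.size),      # -> M
    20 * rng.standard_normal(t.size),                           # -> N
])
L, M, N = minimum_variance_axes(ve)
```

With this clearly separated variance hierarchy, `L` aligns with x and `N` with z; the well-known failure mode, as the abstract notes, is when the variance directions are degenerate or the data are noisy.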
The seesaw mechanism for the small neutrino mass has been a popular paradigm, yet it has been believed that there is no way to test it experimentally. We present a conceivable outcome from future experiments that would convince us of the seesaw mechanism. It would involve a variety of data from LHC, ILC, cosmology, underground, and low-energy flavor violation experiments to establish the case.
Small uncertainties obtained for the Neutron Standards have been associated with possible missing correlations in the input data, with an incomplete uncertainty budget of the employed experimental database, or with unrecognized uncertainty sources common to many measurements. While further detailed studies may improve the first two issues, the issue of potential unrecognized uncertainties and correlations between different experiments has long been neglected. We address this gap with a test-case study on the evaluation of the total neutron multiplicity of the $^{252}$Cf(sf) source, which is included in the evaluation of the Thermal Neutron Constants within the Neutron Standards.
The problem of estimating the effect of missing higher orders in perturbation theory is analyzed with emphasis on the application to Higgs production in gluon-gluon fusion. Well-known mathematical methods for an approximate completion of the perturbative series are applied with the goal of not truncating the series, but completing it in a well-defined way, so as to increase the accuracy - if not the precision - of theoretical predictions. The uncertainty arising from the use of the completion procedure is discussed, and a recipe for constructing a corresponding probability distribution function is proposed.
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt provides only a lower-bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
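The ensemble step described above can be sketched in a few lines: query the LM with several paraphrased prompts, then combine the resulting per-prompt answer distributions. The weighted average below is illustrative only (the paper's exact weighting and ranking schemes may differ), and the toy score dictionaries stand in for real LM outputs:

```python
from collections import defaultdict

def ensemble_answers(prompt_scores, weights=None):
    """Combine per-prompt answer distributions by a weighted average
    and return the highest-scoring answer. `prompt_scores` is a list
    of {answer: probability} dicts, one per prompt paraphrase."""
    if weights is None:  # default: uniform weighting over prompts
        weights = [1.0 / len(prompt_scores)] * len(prompt_scores)
    combined = defaultdict(float)
    for w, scores in zip(weights, prompt_scores):
        for answer, p in scores.items():
            combined[answer] += w * p
    return max(combined, key=combined.get)

# Toy distributions for three paraphrases of the "profession" query
prompt_scores = [
    {"politician": 0.4, "lawyer": 0.5, "writer": 0.1},
    {"politician": 0.7, "lawyer": 0.2, "writer": 0.1},
    {"politician": 0.6, "lawyer": 0.3, "writer": 0.1},
]
best = ensemble_answers(prompt_scores)  # -> "politician"
```

Even though the first prompt alone prefers "lawyer", the ensemble recovers "politician", which is the intuition behind combining multiple prompts to tighten the lower bound on what the LM knows.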