
Building Unbiased Estimators from Non-Gaussian Likelihoods with Application to Shear Estimation

Added by Anze Slosar
Publication date: 2014
Fields: Physics
Language: English

We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second-derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of fiducial model is independent of the data). We next apply the approach to the estimation of gravitational lensing shear, closely following the work of Bernstein and Armstrong (2014). Our first-order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and the Bernstein and Armstrong estimator exhibit a bias which is quadratic in the true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors $\Delta g/|g|$ for shears up to $|g|=0.2$.
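To make the construction concrete, here is a minimal numerical sketch (not the paper's code) of a first-order estimator for a toy one-parameter problem: a Newton-like step away from a data-independent fiducial point, with the data-dependent curvature replaced by the Fisher matrix. The likelihood is Gaussian here purely for simplicity; all names (`score`, `g_fid`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n observations d_i = g + noise. The construction uses nothing
# but the score and the Fisher matrix, which is what carries over to
# non-Gaussian likelihoods.
sigma, n_obs = 0.3, 100
g_true, g_fid = 0.15, 0.0   # true value and data-independent fiducial point

def score(g, d):
    # derivative of the log-likelihood with respect to g
    return np.sum(d - g) / sigma**2

# Fisher information at the fiducial point (analytic for this toy model; in
# general it can be estimated as the Monte Carlo average of score**2 over
# data simulated at the fiducial model).
fisher = n_obs / sigma**2

d = g_true + sigma * rng.standard_normal(n_obs)

# First-order estimator: step from the fiducial value using score / Fisher.
g_hat = g_fid + score(g_fid, d) / fisher
print(g_hat)   # unbiased to first order in (g_true - g_fid)
```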



Related research

We investigate the use of data-driven likelihoods to bypass a key assumption made in many scientific analyses, which is that the true likelihood of the data is Gaussian. In particular, we suggest using the optimization targets of flow-based generative models, a class of models that can capture complex distributions by transforming a simple base distribution through layers of nonlinearities. We call these flow-based likelihoods (FBL). We analyze the accuracy and precision of the reconstructed likelihoods on mock Gaussian data, and show that simply gauging the quality of samples drawn from the trained model is not a sufficient indicator that the true likelihood has been learned. We nevertheless demonstrate that the likelihood can be reconstructed to a precision equal to that of sampling error due to a finite sample size. We then apply FBLs to mock weak lensing convergence power spectra, a cosmological observable that is significantly non-Gaussian (NG). We find that the FBL captures the NG signatures in the data extremely well, while other commonly used data-driven likelihoods, such as Gaussian mixture models and independent component analysis, fail to do so. This suggests that works that have found small posterior shifts in NG data with data-driven likelihoods such as these could be underestimating the impact of non-Gaussianity in parameter constraints. By introducing a suite of tests that can capture different levels of NG in the data, we show that the success or failure of traditional data-driven likelihoods can be tied back to the structure of the NG in the data. Unlike other methods, the flexibility of the FBL makes it successful at tackling different types of NG simultaneously. Because of this, and consequently their likely applicability across datasets and domains, we encourage their use for inference when sufficient mock data are available for training.
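As a cartoon of the flow-based-likelihood idea, assuming nothing about the paper's actual architecture: a single affine "flow" trained in numpy by maximizing the exact change-of-variables log-likelihood. A real FBL stacks nonlinear coupling layers, but the optimization target is the same quantity, which is why the trained model doubles as a likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.0, size=5000)    # mock non-Gaussian "data"

# Deliberately tiny "flow": x = exp(s) * z + b with base z ~ N(0, 1).
# The training objective is the exact data log-likelihood via the
# change-of-variables formula: log p(x) = log N(z) - s with z = (x - b) e^{-s}.
s, b = 0.0, 0.0                                   # log-scale and shift
lr = 0.1
for _ in range(500):
    z = (x - b) * np.exp(-s)                      # inverse transform
    grad_s = np.mean(z**2 - 1.0)                  # d<log p>/ds
    grad_b = np.mean(z) * np.exp(-s)              # d<log p>/db
    s += lr * grad_s                              # gradient ascent on log p
    b += lr * grad_b

def flow_loglike(x_eval):
    """Evaluate the learned density: the 'flow-based likelihood'."""
    z = (x_eval - b) * np.exp(-s)
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi) - s

print(flow_loglike(np.array([1.0, 2.0, 3.0])))
```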
The galaxy catalogs generated from low-resolution emission line surveys often contain both foreground and background interlopers due to line misidentification, which can bias the cosmological parameter estimation. In this paper, we present a method for correcting the interloper bias by using the joint-analysis of auto- and cross-power spectra of the main and the interloper samples. In particular, we can measure the interloper fractions from the cross-correlation between the interlopers and survey galaxies, because the true cross-correlation must be negligibly small. The estimated interloper fractions, in turn, remove the interloper bias in the cosmological parameter estimation. For example, in the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) low-redshift ($z<0.5$) [O II] $\lambda 3727$ Å emitters contaminate high-redshift ($1.9<z<3.5$) Lyman-$\alpha$ line emitters. We demonstrate that the joint-analysis method yields a high signal-to-noise ratio measurement of the interloper fractions while only marginally increasing the uncertainties in the cosmological parameters relative to the case without interlopers. We also show the same is true for the high-latitude spectroscopic survey of the Wide-Field Infrared Survey Telescope (WFIRST) mission, where contamination occurs between the Balmer-$\alpha$ line emitters at lower redshifts ($1.1<z<1.9$) and Oxygen ([O III] $\lambda 5007$ Å) line emitters at higher redshifts ($1.7<z<2.8$).
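A toy illustration of the cross-correlation trick on a 1D grid (illustrative code, not the paper's pipeline, and with the interloper sample taken to be pure for simplicity): because the true galaxy-interloper cross-power vanishes, the cross-power between the contaminated main sample and the interloper sample is just the interloper fraction times the interloper auto-power.

```python
import numpy as np

rng = np.random.default_rng(2)
n, f_true = 65536, 0.12       # grid size and true interloper fraction

# Two statistically independent mock overdensity fields: target galaxies
# and interlopers. Their true cross-power is zero by construction.
delta_g = rng.standard_normal(n)
delta_i = rng.standard_normal(n)

# Contaminated "main" sample: a fraction f of its objects are interlopers.
main = (1 - f_true) * delta_g + f_true * delta_i

def cross_power(a, b):
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    return (fa * fb.conj()).real / len(a)

# Since <delta_g x delta_i> = 0, the main-x-interloper cross-power averages
# to f * P_ii, so the mode-averaged ratio estimates the interloper fraction
# (up to sample-variance scatter of order 1/sqrt(n_modes)).
p_cross = cross_power(main, delta_i)
p_ii = cross_power(delta_i, delta_i)
f_hat = np.sum(p_cross) / np.sum(p_ii)
print(f_true, round(f_hat, 3))
```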
Zhiyun Lu, Eugene Ie, Fei Sha (2020)
Many methods have been proposed to quantify the predictive uncertainty associated with the outputs of deep neural networks. Among them, ensemble methods often lead to state-of-the-art results, though they require modifications to the training procedures and are computationally costly for both training and inference. In this paper, we propose a new single-model based approach. The main idea is inspired by the observation that we can simulate an ensemble of models by drawing from a Gaussian distribution, with a form similar to those from the asymptotic normality theory, infinitesimal Jackknife, Laplacian approximation to Bayesian neural networks, and trajectories in stochastic gradient descents. However, instead of using each model in the ensemble to predict and then aggregating their predictions, we integrate the Gaussian distribution and the softmax outputs of the neural networks. We use a mean-field approximation formula to compute this analytically intractable integral. The proposed approach has several appealing properties: it functions as an ensemble without requiring multiple models, and it enables closed-form approximate inference using only the first and second moments of the Gaussian. Empirically, the proposed approach performs competitively when compared to state-of-the-art methods, including deep ensembles, temperature scaling, dropout and Bayesian NNs, on standard uncertainty estimation tasks. It also outperforms many methods on out-of-distribution detection.
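The moment-based shortcut can be sketched as follows, using the well-known probit-style scaling with $\lambda = \pi/8$ as a stand-in for the paper's mean-field formula (the exact expression in the paper may differ). The point is that sampling and averaging an ensemble is replaced by a single softmax of moment-adjusted logits.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

mu = np.array([2.0, 0.5, -1.0])      # mean logits for one input
var = np.array([1.0, 0.5, 2.0])      # per-logit variances (diagonal Gaussian)

# Monte Carlo "ensemble": sample logits, push each through softmax, average.
samples = mu + np.sqrt(var) * rng.standard_normal((100_000, 3))
mc = softmax(samples, axis=1).mean(axis=0)

# Mean-field-style closed form: shrink each mean logit by its variance
# (the sigmoid-probit trick, lambda = pi/8), then take a single softmax.
mf = softmax(mu / np.sqrt(1.0 + np.pi * var / 8.0))

print(mc, mf)   # the analytic approximation tracks the sampled average
```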
The maximum mean discrepancy (MMD) is a kernel-based distance between probability distributions useful in many applications (Gretton et al. 2012), bearing a simple estimator with pleasing computational and statistical properties. Being able to efficiently estimate the variance of this estimator is very helpful to various problems in two-sample testing. Towards this end, Bounliphone et al. (2016) used the theory of U-statistics to derive estimators for the variance of an MMD estimator, and differences between two such estimators. Their estimator, however, drops lower-order terms, and is unnecessarily biased. We show in this note - extending and correcting work of Sutherland et al. (2017) - that we can find a truly unbiased estimator for the actual variance of both the squared MMD estimator and the difference of two correlated squared MMD estimators, at essentially no additional computational cost.
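For reference, here is the standard unbiased U-statistic estimator of the squared MMD (Gretton et al. 2012) that these variance estimators build on, in a short numpy sketch; the unbiased variance estimator itself is longer and is spelled out in the note.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of a and rows of b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2_unbiased(x, y, gamma=1.0):
    """U-statistic estimator of squared MMD (Gretton et al. 2012)."""
    m, n = len(x), len(y)
    kxx, kyy, kxy = rbf(x, x, gamma), rbf(y, y, gamma), rbf(x, y, gamma)
    # Drop diagonal terms so each within-sample average is unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(4)
x = rng.standard_normal((500, 2))
y = rng.standard_normal((500, 2)) + 0.3   # shifted distribution
print(mmd2_unbiased(x, y))                # positive: the samples differ
```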
Richard Watkins (2014)
We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator results in peculiar velocity estimates which are statistically unbiased and have errors that are Gaussian distributed, thus meeting the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and Cosmicflows-2 catalogs of galaxy distances and, using the fact that peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check of the statistics of large peculiar velocity catalogs, particularly those compiled from data from multiple sources.
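A quick simulation of why such an estimator helps, under the common assumption that distance errors are Gaussian in the log (so the naive $v = cz - H_0 d$ is skewed and biased): a logarithmic estimator of the kind advocated in this line of work recovers a nearly unbiased mean with close-to-Gaussian errors. The exact estimator in the paper may differ in detail; this is only an illustrative mock.

```python
import numpy as np

rng = np.random.default_rng(5)
H0, n = 70.0, 20000                      # km/s/Mpc, number of mock galaxies

# Mock survey: true distances, one common true peculiar velocity, and
# distance estimates with Gaussian errors in log-distance (typical of
# Tully-Fisher-like indicators, and the reason the naive estimator is biased).
d_true = rng.uniform(50.0, 150.0, n)                 # Mpc
v_true = 300.0                                       # km/s
cz = H0 * d_true + v_true                            # observed redshift velocity
d_est = d_true * np.exp(0.2 * rng.standard_normal(n))

v_naive = cz - H0 * d_est                 # skewed, systematically biased
v_log = cz * np.log(cz / (H0 * d_est))    # ~Gaussian errors, nearly unbiased

print(np.mean(v_naive), np.mean(v_log))   # compare recovered means to 300 km/s
```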