This paper presents the results of the Rubin Observatory Dark Energy Science Collaboration (DESC) 3x2pt tomography challenge, which served as a first step toward optimizing the tomographic binning strategy for the main DESC analysis. The task of choosing an optimal tomographic binning scheme for a photometric survey is made particularly delicate in the context of a metacalibrated lensing catalogue, as only the photometry from the bands included in the metacalibration process (usually riz and potentially g) can be used in sample definition. The goal of the challenge was to collect and compare bin assignment strategies under various metrics of a standard 3x2pt cosmology analysis in a highly idealized setting to establish a baseline for realistically complex follow-up studies. In this preliminary study, we used two sets of cosmological simulations of galaxy redshifts and photometry under a simple noise model neglecting photometric outliers and variation in observing conditions, and contributed algorithms were provided with a representative and complete training set. We review and evaluate the entries to the challenge, finding that even from this limited photometric information, multiple algorithms can separate tomographic bins reasonably well, reaching figure-of-merit scores close to the attainable maximum. We further find that adding the g band to riz photometry improves metric performance by ~15%, and that the optimal bin assignment strategy depends strongly on the science case: which figure of merit is to be optimized, and which observables (clustering, lensing, or both) are included.
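Below is a minimal sketch of one possible bin-assignment strategy in the setting described above: a classifier trained on a representative training set to map riz photometry to tomographic bin labels. The synthetic magnitudes, bin edges, and classifier choice are illustrative assumptions, not any particular challenge entry.

```python
# Illustrative bin-assignment sketch: learn a mapping from riz photometry to
# tomographic bins using a representative training set, as in the challenge
# setup described above. All numbers below are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def mock_photometry(z, rng):
    """Toy riz magnitudes loosely correlated with redshift, plus noise."""
    return np.column_stack([
        22.0 + 1.5 * z + rng.normal(0, 0.3, z.size),  # r
        21.5 + 1.2 * z + rng.normal(0, 0.3, z.size),  # i
        21.0 + 1.0 * z + rng.normal(0, 0.3, z.size),  # z-band
    ])

# Representative training set: true redshifts are available here.
z_train = rng.uniform(0.0, 2.0, 20_000)
mags_train = mock_photometry(z_train, rng)

# Equal-number tomographic bin edges defined on the training redshifts.
n_bins = 4
edges = np.quantile(z_train, np.linspace(0, 1, n_bins + 1))
labels = np.clip(np.digitize(z_train, edges[1:-1]), 0, n_bins - 1)

# Train the bin-assignment classifier on photometry alone.
clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20)
clf.fit(mags_train, labels)

# Assign bins for a target sample using only its photometry.
z_target = rng.uniform(0.0, 2.0, 5_000)
assigned = clf.predict(mock_photometry(z_target, rng))
print("fraction per bin:", np.bincount(assigned, minlength=n_bins) / z_target.size)
```

In the actual challenge, the resulting bin assignments would then be scored with 3x2pt figures of merit rather than classification accuracy.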
Reconstructing the Gaussian initial conditions at the beginning of the Universe from the survey data in a forward modeling framework is a major challenge in cosmology. This requires solving a high dimensional inverse problem with an expensive, non-linear forward model: a cosmological N-body simulation. While this was intractable until recently, we propose to solve this inference problem using an automatically differentiable N-body solver, combined with a recurrent network to learn the inference scheme and obtain the maximum-a-posteriori (MAP) estimate of the initial conditions of the Universe. We demonstrate using realistic cosmological observables that learnt inference is 40 times faster than traditional algorithms such as ADAM and LBFGS, which require specialized annealing schemes, and obtains solutions of higher quality.
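As a rough illustration of the MAP approach described above, the sketch below optimizes the initial field of a toy differentiable forward model with plain gradient descent; the real forward model is a differentiable N-body solver, and the point of the paper is to replace this generic, hand-tuned optimiser with a learnt recurrent update rule. Every function and number here is an illustrative assumption.

```python
# Toy MAP reconstruction of 'initial conditions' against a differentiable
# forward model, using automatic differentiation and gradient descent.
import jax
import jax.numpy as jnp

def forward_model(delta_ini):
    """Toy stand-in for a differentiable N-body simulation (1D field)."""
    kernel = jnp.array([0.25, 0.5, 0.25])          # crude 'growth' / smoothing
    smoothed = jnp.convolve(delta_ini, kernel, mode="same")
    return smoothed + 0.1 * smoothed**2            # mild non-linearity

def neg_log_posterior(delta_ini, data, noise_sigma, prior_sigma):
    """Gaussian likelihood around the forward model plus a Gaussian prior."""
    residual = data - forward_model(delta_ini)
    return (0.5 * jnp.sum((residual / noise_sigma) ** 2)
            + 0.5 * jnp.sum((delta_ini / prior_sigma) ** 2))

# Mock 'observed' data generated from a known true initial field.
truth = jax.random.normal(jax.random.PRNGKey(0), (128,))
data = forward_model(truth) + 0.5 * jax.random.normal(jax.random.PRNGKey(1), (128,))

# Gradient descent on the negative log-posterior; a learnt (recurrent)
# update rule would replace this generic loop.
grad_fn = jax.jit(jax.grad(neg_log_posterior))
delta_map = jnp.zeros(128)
for _ in range(500):
    delta_map = delta_map - 0.1 * grad_fn(delta_map, data, 0.5, 1.0)

print("rms error of MAP initial field:", float(jnp.sqrt(jnp.mean((delta_map - truth) ** 2))))
```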
The observing strategy of a galaxy survey influences the degree to which its resulting data can be used to accomplish any science goal. LSST is thus seeking metrics of observing strategies for multiple science cases in order to optimally choose a cadence. Photometric redshifts are essential for many extragalactic science applications of LSST's data, including but not limited to cosmology, but there are few metrics available, and they are not straightforwardly integrated with metrics of other cadence-dependent quantities that may influence any given use case. We propose a metric for observing strategy optimization based on the potentially recoverable mutual information about redshift from a photometric sample under the constraints of a realistic observing strategy. We demonstrate a tractable estimation of a variational lower bound of this mutual information, implemented in a public code using conditional normalizing flows. By comparing the recoverable redshift information across observing strategies, we can distinguish between those that preclude robust redshift constraints and those whose data will preserve more redshift information, to be generically utilized in a downstream analysis. We recommend the use of this versatile metric for observing strategy optimization for redshift-dependent extragalactic use cases, including but not limited to cosmology, as well as any other science applications for which photometry may be modeled from true parameter values beyond redshift.
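The metric rests on the standard variational (Barber-Agakov) lower bound I(z; x) >= H(z) + E[log q(z|x)], which holds for any conditional density estimate q. The public implementation uses conditional normalizing flows for q; in the sketch below a conditional Gaussian fit stands in for the flow, and the mock redshift-photometry relation is an illustrative assumption.

```python
# Numerical sketch of the variational lower bound on the mutual information
# between redshift z and photometry x: I(z; x) >= H(z) + E[log q(z | x)].
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Mock sample: uniform redshifts and noisy "photometry" derived from them.
n = 50_000
z = rng.uniform(0.0, 2.0, n)
photometry = np.column_stack([
    z + rng.normal(0, 0.2, n),          # noisy colour tracking z
    0.5 * z + rng.normal(0, 0.4, n),    # a second, noisier feature
])

# Entropy of the uniform prior p(z) on [0, 2): H(z) = log(2).
H_z = np.log(2.0)

# Fit q(z|x) as a Gaussian with x-dependent mean and constant variance
# (a stand-in for a conditional normalizing flow).
reg = LinearRegression().fit(photometry, z)
resid = z - reg.predict(photometry)
sigma = resid.std()

# Monte Carlo estimate of E[log q(z|x)] over the joint sample.
log_q = -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (resid / sigma) ** 2
mi_lower_bound = H_z + log_q.mean()

print(f"variational lower bound on I(z; photometry): {mi_lower_bound:.3f} nats")
```

Comparing this bound across mock catalogues generated under different observing strategies is what ranks the strategies by recoverable redshift information.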
In many cosmological inference problems, the likelihood (the probability of the observed data as a function of the unknown parameters) is unknown or intractable. This necessitates approximations and assumptions, which can lead to incorrect inference of cosmological parameters, including the nature of dark matter and dark energy, or create artificial model tensions. Likelihood-free inference covers a novel family of methods to rigorously estimate posterior distributions of parameters using forward modelling of mock data. We present likelihood-free cosmological parameter inference using weak lensing maps from the Dark Energy Survey (DES) SV data, using neural data compression of weak lensing map summary statistics. We explore combinations of the power spectra, peak counts, and neural compressed summaries of the lensing mass map using deep convolutional neural networks. We demonstrate methods to validate the inference process, for both the data modelling and the probability density estimation steps. Likelihood-free inference provides a robust and scalable alternative for rigorous large-scale cosmological inference with galaxy survey data (for DES, Euclid and LSST). We have made our simulated lensing maps publicly available.
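The sketch below illustrates the likelihood-free workflow in miniature: draw parameters from a prior, forward-model mock data, compress to low-dimensional summaries, and estimate the posterior from the simulated summary-parameter pairs. A one-parameter toy simulator and a simple rejection step stand in for the weak-lensing simulations, neural compression, and density estimation used in the paper; all names and numbers are assumptions for illustration.

```python
# Toy likelihood-free (simulation-based) inference with summary compression.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(sigma8, rng, n_pix=1_000):
    """Toy 'simulator': a mock map whose fluctuations scale with the parameter."""
    return sigma8 * rng.normal(0, 1, n_pix)

def compress(mock_map):
    """Toy summary statistic (stand-in for neural / power-spectrum compression)."""
    return np.array([mock_map.std(), np.abs(mock_map).mean()])

# 'Observed' data generated at a fiducial parameter value.
summary_obs = compress(forward_model(0.8, rng))

# Draw parameters from the prior, forward-model each, and keep the draws whose
# compressed summaries land closest to the observed summary (rejection step).
n_sims = 20_000
prior_draws = rng.uniform(0.5, 1.1, n_sims)
summaries = np.array([compress(forward_model(p, rng)) for p in prior_draws])
distances = np.linalg.norm(summaries - summary_obs, axis=1)
accepted = prior_draws[distances < np.quantile(distances, 0.01)]

print(f"approximate posterior: {accepted.mean():.3f} +/- {accepted.std():.3f}")
```

In the DES analysis, the rejection step is replaced by density estimation of the posterior, and the hand-written summaries by neural compression of the lensing maps.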
In recent times, neural networks have become a powerful tool for the analysis of complex and abstract data models. However, their introduction intrinsically increases our uncertainty about which features of the analysis are model-related and which are due to the neural network. This means that predictions by neural networks have biases which cannot be trivially distinguished from biases due to the true nature of the creation and observation of the data. In order to address such issues we discuss Bayesian neural networks: neural networks where the uncertainty due to the network can be characterised. In particular, we present the Bayesian statistical framework which allows us to categorise uncertainty in terms of the ingrained randomness of observing certain data and the uncertainty arising from our lack of knowledge about how data can be created and observed. In presenting such techniques we show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors. We also describe how both of these methods have substantial pitfalls when put into practice, highlighting the need for other statistical techniques if we are truly to be able to do inference when using neural networks.
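As a concrete illustration of the split between ingrained data randomness (aleatoric uncertainty) and uncertainty about the model itself (epistemic uncertainty) described above, the sketch below uses a bootstrap ensemble of small networks as a cheap proxy for a posterior over weights and decomposes the predictive variance with the law of total variance, Var[y|x] = E_theta[Var[y|x, theta]] + Var_theta[E[y|x, theta]]. The ensemble, toy data, and network sizes are assumptions for illustration and are not the specific methods compared in the text.

```python
# Toy aleatoric/epistemic decomposition with a bootstrap ensemble of networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Toy regression data with input-independent observational noise.
x = rng.uniform(-3, 3, (2_000, 1))
y = np.sin(x[:, 0]) + rng.normal(0, 0.1, 2_000)

# Bootstrap ensemble: each member sees a resampled training set, so the spread
# of their predictions traces uncertainty about the model itself (epistemic).
ensemble, noise_vars = [], []
for seed in range(10):
    idx = rng.integers(0, len(x), len(x))
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2_000, random_state=seed)
    net.fit(x[idx], y[idx])
    ensemble.append(net)
    noise_vars.append(np.var(y[idx] - net.predict(x[idx])))  # residual (aleatoric) variance

# Decompose predictive uncertainty at test points via the law of total variance.
x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = np.array([net.predict(x_test) for net in ensemble])   # (members, points)
epistemic = preds.var(axis=0)            # spread between ensemble members
aleatoric = np.mean(noise_vars)          # estimated observational noise
total = epistemic + aleatoric

for xi, e, t in zip(x_test[:, 0], epistemic, total):
    print(f"x={xi:+.1f}  epistemic={e:.4f}  total={t:.4f}")
```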
We present results from a set of simulations designed to constrain the weak lensing shear calibration for the Hyper Suprime-Cam (HSC) survey. These simulations include HSC observing conditions and galaxy images from the Hubble Space Telescope (HST), with fully realistic galaxy morphologies and the impact of nearby galaxies included. We find that the inclusion of nearby galaxies in the images is critical to reproducing the observed distributions of galaxy sizes and magnitudes, due to the non-negligible fraction of unrecognized blends in ground-based data, even with the excellent typical seeing of the HSC survey (0.58 arcsec in the $i$-band). Using these simulations, we detect and remove the impact of selection biases due to the correlation of weights and the quantities used to define the sample (S/N and apparent size) with the lensing shear. We quantify and remove galaxy property-dependent multiplicative and additive shear biases that are intrinsic to our shear estimation method, including a $\sim 10$ per cent level multiplicative bias due to the impact of nearby galaxies and unrecognized blends. Finally, we check the sensitivity of our shear calibration estimates to other cuts made on the simulated samples, and find that the changes in shear calibration are well within the requirements for HSC weak lensing analysis. Overall, the simulations suggest that the weak lensing multiplicative biases in the first-year HSC shear catalog are controlled at the 1 per cent level.
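The multiplicative and additive biases quoted above are conventionally defined through g_obs = (1 + m) g_true + c and estimated by applying known shears to simulated galaxies and fitting a straight line to the mean recovered shear. The sketch below mimics that procedure with a toy 'measurement', injected bias values, and 90-degree-rotated galaxy pairs for shape-noise cancellation; all of these specifics are illustrative assumptions rather than the HSC pipeline.

```python
# Toy estimate of multiplicative (m) and additive (c) shear bias, defined via
# g_obs = (1 + m) * g_true + c; injected biases and noise levels are assumed
# for illustration only.
import numpy as np

rng = np.random.default_rng(3)

# Known input shears applied to the simulated galaxies (many galaxies per value).
g_true = np.repeat(np.linspace(-0.05, 0.05, 11), 10_000)
m_in, c_in = 0.01, 2e-4                      # injected biases to recover
e_int = rng.normal(0, 0.26, g_true.size)     # intrinsic galaxy shapes

# Simulate each galaxy together with its 90-degree-rotated pair (e -> -e) so
# that intrinsic shape noise cancels in the pair average, then add a small
# residual measurement noise.
g_pair1 = (1 + m_in) * (g_true + e_int) + c_in
g_pair2 = (1 + m_in) * (g_true - e_int) + c_in
g_meas = 0.5 * (g_pair1 + g_pair2) + rng.normal(0, 0.01, g_true.size)

# Average the measured shear at each input shear and fit a line:
# slope - 1 estimates m, the intercept estimates c.
g_vals = np.unique(g_true)
g_mean = np.array([g_meas[g_true == g].mean() for g in g_vals])
slope, intercept = np.polyfit(g_vals, g_mean, 1)

print(f"recovered m = {slope - 1:+.4f}, c = {intercept:+.2e}")
```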