
Strong-lensing source reconstruction with variationally optimised Gaussian processes

Posted by: Konstantin Karchev
Publication date: 2021
Research field: Physics
Paper language: English





Strong-lensing images provide a wealth of information both about the magnified source and about the dark matter distribution in the lens. Precision analyses of these images can be used to constrain the nature of dark matter. However, this requires high-fidelity image reconstructions and careful treatment of the uncertainties of both lens mass distribution and source light, which are typically difficult to quantify. In anticipation of future high-resolution datasets, in this work we leverage a range of recent developments in machine learning to develop a new Bayesian strong-lensing image analysis pipeline. Its highlights are: (A) a fast, GPU-enabled, end-to-end differentiable strong-lensing image simulator; (B) a new, statistically principled source model based on a computationally highly efficient approximation to Gaussian processes that also takes into account pixellation; and (C) a scalable variational inference framework that enables simultaneously deriving posteriors for tens of thousands of lens and source parameters and optimising hyperparameters via stochastic gradient descent. Besides efficient and accurate parameter estimation and lens model uncertainty quantification, the main aim of the pipeline is the generation of training data for targeted simulation-based inference of dark matter substructure, which we will exploit in a companion paper.
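A minimal sketch, not the authors' pipeline, of the third ingredient described above: mean-field variational inference over many source parameters, optimised by stochastic gradient descent through a differentiable forward model. The toy linear "simulator", shapes and hyperparameters here are assumptions for illustration only.

```python
# Toy variational inference with a differentiable forward model (illustrative).
import torch

torch.manual_seed(0)

n_pix = 32 * 32                        # source pixels (toy resolution)
A = torch.randn(n_pix, n_pix) * 0.05   # stand-in for a differentiable lensing operator
true_src = torch.randn(n_pix)
noise_sigma = 0.1
data = A @ true_src + noise_sigma * torch.randn(n_pix)

# Variational parameters: Gaussian mean and log-std per source pixel.
mu = torch.zeros(n_pix, requires_grad=True)
log_sigma = torch.full((n_pix,), -2.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(n_pix)
    src = mu + log_sigma.exp() * eps           # reparameterised posterior sample
    model_img = A @ src                        # "simulator" forward pass
    log_like = -0.5 * ((data - model_img) / noise_sigma).pow(2).sum()
    log_prior = -0.5 * src.pow(2).sum()        # unit-Gaussian prior on pixels
    entropy = log_sigma.sum()                  # Gaussian entropy up to a constant
    elbo = log_like + log_prior + entropy
    (-elbo).backward()                         # gradients flow through the simulator
    opt.step()
```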




Read also

Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper will be available shortly.
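A minimal sketch of why inducing points make GPs cheap, assuming an RBF kernel and a subset-of-regressors style approximation; this is not the paper's Hybrid Monte Carlo scheme, only an illustration of the O(N M^2) computation it builds on.

```python
# Inducing-point (subset-of-regressors) GP regression on toy 1D data.
import numpy as np

def rbf(x1, x2, ls=0.5, var=1.0):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, 500)                   # N training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(500)
Z = np.linspace(0, 5, 20)                    # M inducing inputs, M << N
Xs = np.linspace(0, 5, 100)                  # test inputs

noise = 0.1 ** 2
Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
Kuf = rbf(Z, X)
Ksu = rbf(Xs, Z)

# Predictive mean: K_*u (sigma^2 K_uu + K_uf K_fu)^{-1} K_uf y
# Only M x M systems are solved, never an N x N one.
A = noise * Kuu + Kuf @ Kuf.T
mean = Ksu @ np.linalg.solve(A, Kuf @ y)
```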
Strong gravitational lensing, which can make a background source galaxy appear multiple times due to its light rays being deflected by the mass of one or more foreground lens galaxies, provides astronomers with a powerful tool to study dark matter, cosmology and the most distant Universe. PyAutoLens is an open-source Python 3.6+ package for strong gravitational lensing, with core features including fully automated strong lens modeling of galaxies and galaxy clusters, support for direct imaging and interferometer datasets, and comprehensive tools for simulating samples of strong lenses. The API allows users to perform ray-tracing by using analytic light and mass profiles to build strong lens systems. Accompanying PyAutoLens is the autolens workspace (see https://github.com/Jammy2211/autolens_workspace), which includes example scripts, lens datasets and the HowToLens lectures in Jupyter notebook format, which introduce non-experts to strong lensing using PyAutoLens. Readers can try PyAutoLens right now by going to the introduction Jupyter notebook on Binder (see https://mybinder.org/v2/gh/Jammy2211/autolens_workspace/master) or check out the readthedocs (see https://pyautolens.readthedocs.io/en/latest/) for a complete overview of PyAutoLens's features.
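To illustrate the ray-tracing idea that packages like PyAutoLens automate, here is a generic sketch of the lens equation beta = theta - alpha(theta) with a singular isothermal sphere deflector and a Gaussian source; it is plain NumPy, not PyAutoLens code, and all numbers are made up (see the autolens workspace for the package's own API).

```python
# Generic lens-equation ray tracing with an SIS deflector (illustrative only).
import numpy as np

theta_E = 1.2                                   # Einstein radius in arcsec (assumed)
grid = np.linspace(-3, 3, 200)
tx, ty = np.meshgrid(grid, grid)                # image-plane coordinates (arcsec)

r = np.hypot(tx, ty) + 1e-12
alpha_x = theta_E * tx / r                      # SIS deflection angles
alpha_y = theta_E * ty / r

beta_x, beta_y = tx - alpha_x, ty - alpha_y     # trace rays to the source plane

# Evaluate a circular Gaussian source on the traced grid to get the lensed image.
lensed_image = np.exp(-0.5 * (beta_x**2 + beta_y**2) / 0.2**2)
```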
Large multi-object spectroscopic surveys require automated algorithms to optimise their observing strategy. One of the most ambitious upcoming spectroscopic surveys is the 4MOST survey. The 4MOST survey facility is a fibre-fed spectroscopic instrument on the VISTA telescope with a large enough field of view to survey a large fraction of the southern sky within a few years. Several Galactic and extragalactic surveys will be carried out simultaneously, so the combined target density will vary strongly. In this paper, we describe a new tiling algorithm that can naturally deal with the large target density variations on the sky and which automatically handles the different exposure times of targets. The tiling pattern is modelled as a marked point process, which is characterised by a probability density that integrates the requirements imposed by the 4MOST survey. The optimal tiling pattern with respect to the defined model is estimated by the tile configuration that maximises the proposed probability density. In order to achieve this maximisation, a simulated annealing algorithm is implemented. The algorithm automatically finds an optimal tiling pattern and assigns a tentative sky brightness condition and exposure time to each tile, while minimising the total execution time needed to observe the list of targets in the combined input catalogue of all surveys. Hence, the algorithm maximises the long-term observing efficiency and provides an optimal tiling solution for the survey. While designed for the 4MOST survey, the algorithm is flexible and can, with simple modifications, be applied to any other multi-object spectroscopic survey.
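A toy sketch of the simulated-annealing idea behind the tiling optimisation: tile centres are perturbed at random and moves are accepted with a temperature-dependent probability. Everything here is a made-up 1D analogue; the real algorithm maximises a marked-point-process density over the 4MOST footprint with exposure-time and sky-brightness marks.

```python
# Toy simulated annealing: place tile centres to cover clustered targets.
import numpy as np

rng = np.random.default_rng(1)
targets = rng.normal([2.0, 2.0, 8.0], 0.7, size=(300, 3)).ravel()  # clustered targets
tiles = rng.uniform(0, 10, size=6)                                  # initial tile centres
fov = 1.0                                                           # tile "field of view"

def covered(centres):
    # Number of targets within half a field of view of any tile centre.
    dist = np.min(np.abs(targets[:, None] - centres[None, :]), axis=1)
    return np.sum(dist < fov / 2)

temp = 1.0
for step in range(5000):
    proposal = tiles.copy()
    proposal[rng.integers(len(tiles))] += rng.normal(0, 0.3)   # perturb one tile
    gain = covered(proposal) - covered(tiles)
    if gain >= 0 or rng.random() < np.exp(gain / temp):        # Metropolis acceptance
        tiles = proposal
    temp *= 0.999                                              # cooling schedule
```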
In this paper we develop a new unsupervised machine learning technique comprised of a feature extractor, a convolutional autoencoder (CAE), and a clustering algorithm consisting of a Bayesian Gaussian mixture model (BGM). We apply this technique to visual-band space-based simulated imaging data from the Euclid Space Telescope, using data from the Strong Gravitational Lenses Finding Challenge. Our technique promisingly captures a variety of lensing features, such as Einstein rings with different radii and distorted arc structures, without using predefined labels. After the clustering process, we obtain several classification clusters separated by the different visual features seen in the images. Our method successfully picks up $\sim$63 per cent of lensing images from all lenses in the training set. With the assumed probability proposed in this study, this technique reaches an accuracy of $77.25\pm0.48\%$ in binary classification using the training set. Additionally, our unsupervised clustering process can be used as the preliminary classification for future surveys of lenses to efficiently select targets and to speed up the labelling process. As the starting point of the astronomical application of this technique, we not only explore its application to gravitationally lensed systems, but also discuss its limitations and potential future uses.
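A minimal sketch of the clustering stage, under the assumption that PCA stands in for the convolutional-autoencoder bottleneck and that the images are random toy arrays; the real pipeline feeds CAE features into a Bayesian Gaussian mixture in much the same way.

```python
# Unsupervised clustering of image features with a Bayesian Gaussian mixture.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
images = rng.random((500, 64 * 64))                # 500 flattened toy "cutouts"

# Stand-in for the CAE: compress each image to a low-dimensional feature vector.
features = PCA(n_components=16).fit_transform(images)

# Bayesian GMM can prune unneeded components, giving data-driven cluster counts.
bgm = BayesianGaussianMixture(n_components=10, random_state=0)
labels = bgm.fit_predict(features)                 # unsupervised cluster assignments
```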
We describe a new open-source package for calculating properties of galaxy clusters, including NFW halo profiles with and without the effects of cluster miscentering. This pure-Python package, cluster-lensing, provides well-documented and easy-to-use classes and functions for calculating cluster scaling relations, including mass-richness and mass-concentration relations from the literature, as well as the surface mass density $\Sigma(R)$ and differential surface mass density $\Delta\Sigma(R)$ profiles probed by weak lensing magnification and shear. Galaxy cluster miscentering is a particular concern for stacked weak lensing shear studies of galaxy clusters, where offsets between the assumed and the true underlying matter distribution can lead to a significant bias in the mass estimates if not accounted for. This software has been developed and released in a public GitHub repository, and is licensed under the permissive MIT license. The cluster-lensing package is archived on Zenodo (Ford 2016). Full documentation, source code, and installation instructions are available at http://jesford.github.io/cluster-lensing/.
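For context, a generic NFW profile sketch (not the cluster-lensing API): quantities like $\Sigma(R)$ follow from projecting a 3D density of this form along the line of sight, optionally convolved with a miscentering offset distribution. Units and parameter values below are illustrative assumptions.

```python
# Generic NFW density profile and a numerical line-of-sight projection.
import numpy as np
from scipy.integrate import quad

def rho_nfw(r, rho_s, r_s):
    """3D NFW density: rho_s / [(r/r_s) * (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

r = np.logspace(-2, 1, 50)                      # radii (illustrative units, e.g. Mpc)
profile = rho_nfw(r, rho_s=1e15, r_s=0.3)

# Sigma(R) ~ 2 * integral of rho(sqrt(R^2 + l^2)) along the line of sight l.
R = 0.5
sigma_R = 2 * quad(lambda l: rho_nfw(np.hypot(R, l), 1e15, 0.3), 0, 50)[0]
```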