We develop a novel statistical strong-lensing approach to probing cosmological parameters that exploits multiple-redshift image systems behind galaxies or galaxy clusters. The method relies on free-form mass inversion of strong lenses and requires no information beyond gravitational lensing itself. Since in free-form lensing the solution space is a high-dimensional convex polytope, we use Bayesian model comparison to infer the cosmological parameters: the volume of the solution space is taken as a tracer of the probability of the underlying cosmological assumption. In contrast to parametric mass
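The volume-as-probability idea can be illustrated with a simple Monte Carlo volume estimate. This is a minimal sketch, not the paper's method: it assumes the lensing constraints can be written as hypothetical linear inequalities $Ax \le b$ enclosed in a known bounding box, and estimates the polytope volume from the fraction of uniform samples that satisfy all constraints.

```python
import numpy as np

def polytope_volume_mc(A, b, lo, hi, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the volume of {x : A @ x <= b}
    intersected with the axis-aligned box [lo, hi]."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pts = rng.uniform(lo, hi, size=(n_samples, dim))
    inside = np.all(pts @ A.T <= b, axis=1)          # all constraints satisfied
    box_volume = np.prod(np.asarray(hi, float) - np.asarray(lo, float))
    return inside.mean() * box_volume

# Toy check: the triangle x >= 0, y >= 0, x + y <= 1 has area 0.5.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
vol = polytope_volume_mc(A, b, lo=[0.0, 0.0], hi=[1.0, 1.0])
```

Comparing such volume estimates under different assumed cosmologies (which reshape the constraint polytope) is the spirit of the model comparison described above.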
I propose a new approach to free-form cluster lens modeling that is inspired by the JPEG image-compression method. The approach is motivated specifically by the need for accurate modeling of high-magnification regions in galaxy clusters. Existing modeling methods, including a wide variety of free-form methods, may struggle in these regions because of limited flexibility in the parametrization of the lens. This limitation especially hinders the characterization of faint galaxies at high redshifts, which have important implications for the formation of the first galaxies and even for the nature of dark matter. JPEG images are extremely accurate representations of their original, uncompressed counterparts, yet use only a fraction of the number of parameters to represent that information. The relevance to cluster lens modeling is immediately apparent: using this technique, it is possible to construct flexible models that accurately reproduce the true mass distribution with only a small number of free parameters. Transferring this well-proven technology to cluster lens modeling, I demonstrate that this `JPEG' parametrization is indeed flexible enough to accurately approximate an N-body simulated cluster.
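The core of the JPEG idea is a discrete cosine transform (DCT) in which only a small number of coefficients are retained. The sketch below is illustrative only (it is not the paper's actual parametrization): it builds an orthonormal DCT-II matrix, keeps the largest-magnitude coefficients of a smooth toy "mass map", and reconstructs the map from that small coefficient set.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T = identity."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_compress(image, keep_fraction=0.05):
    """Keep only the largest-magnitude 2-D DCT coefficients
    and reconstruct the image from them."""
    C = dct_matrix(image.shape[0])
    coeffs = C @ image @ C.T
    n_keep = max(1, int(keep_fraction * coeffs.size))
    threshold = np.sort(np.abs(coeffs).ravel())[-n_keep]
    compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    return C.T @ compressed @ C

# Toy "mass map": a smooth 2-D Gaussian blob on a 64x64 grid.
x = np.linspace(-1.0, 1.0, 64)
xx, yy = np.meshgrid(x, x)
mass = np.exp(-(xx**2 + yy**2) / 0.1)
approx = dct_compress(mass, keep_fraction=0.05)
```

Because smooth mass distributions concentrate their power in low-frequency DCT modes, a few percent of the coefficients already reproduce the map to high accuracy, which is exactly the economy of parameters the abstract appeals to.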
We develop a machine learning model to detect dark substructure (subhalos) within simulated images of strongly lensed galaxies. Using the technique of image segmentation, we turn the task of identifying subhalos into a classification problem where we label each pixel in an image as coming from the main lens, a subhalo within a binned mass range, or neither. Our network is only trained on images with a single smooth lens and either zero or one subhalo near the Einstein ring. On a test set of noiseless simulated images with a single subhalo, the network is able to locate subhalos with a mass of $10^{8}\,M_{\odot}$ and place them in the correct or adjacent mass bin, effectively detecting them 97% of the time. For this test set, the network detects subhalos down to masses of $10^{6}\,M_{\odot}$ at 61% accuracy. However, noise limits the sensitivity to light subhalo masses. With 1% noise (with this level of noise, the distribution of signal-to-noise in the image pixels approximates that of images from the Hubble Space Telescope for sources with magnitude $< 20$), a subhalo with mass $10^{8.5}\,M_{\odot}$ is detected 86% of the time, while subhalos with masses of $10^{8}\,M_{\odot}$ are only detected 38% of the time. Furthermore, the model is able to generalize to new contexts it has not been trained on, such as locating multiple subhalos with varying masses, subhalos far from the Einstein ring, or more than one large smooth lens.
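The per-pixel labeling scheme can be sketched concretely. The snippet below constructs a segmentation target of the kind described: label 0 for "neither", 1 for the main lens, and 2 + bin index for a subhalo in a binned log-mass range. The bin edges, radii, and circular masks are hypothetical choices for illustration, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical log10-mass bin edges (in solar masses); illustrative only.
MASS_BIN_EDGES = [6.0, 7.0, 8.0, 9.0, 10.0]

def segmentation_target(shape, lens_center, lens_radius,
                        subhalos, subhalo_radius=2.0):
    """Per-pixel class labels: 0 = neither, 1 = main lens,
    2 + bin index = subhalo pixel in that log-mass bin."""
    yy, xx = np.indices(shape)
    labels = np.zeros(shape, dtype=np.int64)
    cy, cx = lens_center
    labels[(yy - cy) ** 2 + (xx - cx) ** 2 <= lens_radius ** 2] = 1
    for (sy, sx, log_mass) in subhalos:
        bin_idx = np.digitize(log_mass, MASS_BIN_EDGES) - 1
        mask = (yy - sy) ** 2 + (xx - sx) ** 2 <= subhalo_radius ** 2
        labels[mask] = 2 + bin_idx  # subhalo pixels override lens pixels
    return labels

# One main lens plus a single 10^8.3 M_sun subhalo near the ring.
target = segmentation_target((64, 64), lens_center=(32, 32),
                             lens_radius=20, subhalos=[(32, 52, 8.3)])
```

A segmentation network is then trained to predict this label map from the lensed image, so a detection is simply a cluster of pixels assigned to one of the subhalo classes.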
Strong gravitational lensing of sources with different redshifts has been used to determine cosmological distance ratios, which in turn depend on the expansion history. Hence, such systems are viewed as potential tools for constraining cosmological parameters. Here we show that in lens systems with two distinct source redshifts, of which the nearest one contributes to the light deflection towards the more distant one, there exists an invariance transformation which leaves all strong lensing observables unchanged (except the product of time delay and Hubble constant), generalizing the well-known mass-sheet transformation (MST) in single-plane lens systems. The transformation preserves the relative distribution of mass and light, so that a `mass-follows-light' assumption does not fix the MST. All time delays (from sources on both planes) scale with the same factor -- time-delay ratios are therefore invariant under the MST. Changing cosmological parameters, and thus distance ratios, is essentially equivalent to such a mass-sheet transformation. As an example, we discuss the double source plane system SDSSJ0946+1006, which has been recently studied by Collett and Auger, and show that variations of cosmological parameters within reasonable ranges lead to only a small mass-sheet transformation in both lens planes. Hence, the ability to extract cosmological information from such systems depends heavily on the ability to break the mass-sheet degeneracy.
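For reference, the standard single-plane mass-sheet transformation that this result generalizes rescales the convergence while leaving image positions and flux ratios unchanged, with only the time delays picking up the scaling factor:

```latex
% Single-plane mass-sheet transformation with parameter \lambda:
% all image observables are invariant, time delays scale by \lambda.
\kappa_\lambda(\boldsymbol{\theta}) = \lambda\,\kappa(\boldsymbol{\theta}) + (1 - \lambda),
\qquad
\Delta t_\lambda = \lambda\,\Delta t .
```

The double-source-plane invariance described in the abstract is the analogous degeneracy acting simultaneously in both lens planes.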
Large-scale imaging surveys will increase the number of galaxy-scale strong-lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. To achieve these objectives, automated methods must be developed. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100,000 candidate objects as to whether they were gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVM) and convolutional neural networks (CNN). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. In test data, several methods are able to identify upwards of half the lenses, after applying thresholds on lens characteristics such as lensed-image brightness, size or contrast with the lens galaxy, without making a single false-positive identification. This is significantly better than direct inspection by humans was able to do. (abridged)
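The headline metric here, completeness with zero false positives, is easy to compute from classifier scores. The sketch below is a generic implementation of that metric (the function name and toy scores are illustrative, not from the challenge): set the detection threshold just above the highest-scoring non-lens and measure what fraction of true lenses still pass.

```python
import numpy as np

def recall_at_zero_fp(scores, labels):
    """Fraction of true lenses recovered when the detection threshold is
    set just above the highest-scoring non-lens (zero false positives)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    threshold = scores[~labels].max()   # strictest negative to exclude
    return float(np.mean(scores[labels] > threshold))

# Toy example: three true lenses, two non-lenses.
r = recall_at_zero_fp(scores=[0.9, 0.8, 0.7, 0.6, 0.4],
                      labels=[1, 1, 0, 1, 0])
```

In the toy example the best non-lens scores 0.7, so two of the three lenses survive the zero-false-positive cut.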
We present cosmological parameter constraints from a tomographic weak gravitational lensing analysis of ~450 deg$^2$ of imaging data from the Kilo Degree Survey (KiDS). For a flat $\Lambda$CDM cosmology with a prior on $H_0$ that encompasses the most recent direct measurements, we find $S_8\equiv\sigma_8\sqrt{\Omega_{\rm m}/0.3}=0.745\pm0.039$. This result is in good agreement with other low-redshift probes of large-scale structure, including recent cosmic shear results, along with pre-Planck cosmic microwave background constraints. A $2.3\sigma$ tension in $S_8$ and `substantial discordance' in the full parameter space is found with respect to the Planck 2015 results. We use shear measurements for nearly 15 million galaxies, determined with a new, improved `self-calibrating' version of $lens$fit validated using an extensive suite of image simulations. Four-band $ugri$ photometric redshifts are calibrated directly with deep spectroscopic surveys. The redshift calibration is confirmed using two independent techniques based on angular cross-correlations and the properties of the photometric redshift probability distributions. Our covariance matrix is determined using an analytical approach, verified numerically with large mock galaxy catalogues. We account for uncertainties in the modelling of intrinsic galaxy alignments and the impact of baryon feedback on the shape of the non-linear matter power spectrum, in addition to the small residual uncertainties in the shear and redshift calibration. The cosmology analysis was performed blind. Our high-level data products, including shear correlation functions, covariance matrices, redshift distributions, and Monte Carlo Markov Chains are available at http://kids.strw.leidenuniv.nl.
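The quoted $S_8$ definition and the tension quantification can be sketched in a few lines. The KiDS-450 value below is taken from the abstract; the comparison numbers are illustrative stand-ins for a Planck-like constraint, chosen only so the arithmetic of a Gaussian-tension estimate can be shown, and should not be read as quoted results.

```python
import math

def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

def tension_sigma(a, sig_a, b, sig_b):
    """Gaussian tension between two independent measurements, in sigma."""
    return abs(a - b) / math.hypot(sig_a, sig_b)

# KiDS-450 constraint from the abstract.
kids_s8, kids_err = 0.745, 0.039
# Assumed Planck-like values, for illustration only.
planck_s8, planck_err = 0.851, 0.024
t = tension_sigma(kids_s8, kids_err, planck_s8, planck_err)
```

With these inputs the simple Gaussian estimate lands near the $2.3\sigma$ figure quoted in the abstract, though the paper's discordance statistics are computed over the full parameter space rather than on $S_8$ alone.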