When light from a distant source object, like a galaxy or a supernova, travels towards us, it is deflected by massive objects that lie on its path. When the mass density of the deflecting object exceeds a certain threshold, multiple, highly distorted images of the source are observed. This strong gravitational lensing effect has so far been treated as a model-fitting problem. Using the observed multiple images as constraints yields a self-consistent model of the deflecting mass density and the source object. As several models meet the constraints equally well, we develop a lens characterisation that separates data-based information from model assumptions. The observed multiple images allow us to determine local properties of the deflecting mass distribution on any mass scale from one simple set of equations. Their solution is unique and free of model-dependent degeneracies. The reconstruction of source objects can be performed completely model-independently, enabling us to study galaxy evolution without a lens-model bias. Our approach reduces the lens and source description to the data-based evidence on which all models agree, simplifies an automated treatment of large datasets, and allows for an extrapolation to a global description resembling model-based descriptions.
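As an illustration of the kind of local quantities such a characterisation works with (a standard textbook relation, not necessarily the exact set of equations used in the paper), the lens mapping between the unobservable source position $\beta$ and an image position $\theta$ can be linearised around each image,
$$
\beta = \theta - \alpha(\theta), \qquad A(\theta) \equiv \frac{\partial \beta}{\partial \theta} = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix},
$$
so that matching the observable shapes of two images $A$ and $B$ of the same source constrains local combinations such as the reduced shears $g_i = \gamma_i/(1-\kappa)$ and convergence ratios of the form $(1-\kappa_A)/(1-\kappa_B)$ at the image positions.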
We determine the cosmic expansion rate from supernovae of type Ia to set up a data-based distance measure that does not make assumptions about the constituents of the universe, i.e. about a specific parametrisation of a Friedmann cosmological model. The scale, determined by the Hubble constant $H_0$, is the only free cosmological parameter left in the gravitational lensing formalism. We investigate to which accuracy and precision the lensing distance ratio $D$ is determined from the Pantheon sample. Inserting $D$ and its uncertainty into the lensing equations for a given $H_0$, in particular the time-delay equation between a pair of multiple images, allows us to determine lens properties, especially differences in the lensing potential ($\Delta\phi$), without specifying a cosmological model. We expand the luminosity distances in an analytic orthonormal basis, determine the maximum-likelihood weights for the basis functions by a globally optimal $\chi^2$-parameter estimation, and derive confidence bounds by Monte-Carlo simulations. For typical strong-lensing configurations between $z=0.5$ and $z=1.0$, $\Delta\phi$ can be determined with a relative imprecision of 1.7%, assuming imprecisions of the time delay and of the lens redshift on the order of 1%. With only a small, tolerable loss in precision, the model-independent lens characterisation developed in this paper series can thus be generalised by dropping the specific Friedmann model used to determine $D$ in favour of a data-based distance ratio. Moreover, for any astrophysical application, the approach presented here provides distance measures for $z \le 2.3$ that are valid in any homogeneous, isotropic universe with general relativity as the theory of gravity.
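For context (standard lensing relations; the notation may differ slightly from the paper's own conventions), the time delay between two multiple images $A$ and $B$ ties the distance ratio and the potential difference together as
$$
\Delta t_{AB} = \frac{1+z_\mathrm{l}}{c}\,\frac{D_\mathrm{l} D_\mathrm{s}}{D_\mathrm{ls}}\,\Delta\phi_{AB}, \qquad D \equiv \frac{D_\mathrm{l} D_\mathrm{s}}{D_\mathrm{ls}} \propto \frac{1}{H_0},
$$
so that a measured $\Delta t_{AB}$, the lens redshift $z_\mathrm{l}$, and a data-based determination of $D$ fix $\Delta\phi_{AB}$ up to the overall scale set by $H_0$.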
Discovery of strongly-lensed gravitational wave (GW) sources will unveil binary compact objects at higher redshifts and lower intrinsic luminosities than is possible without lensing. Such systems will yield unprecedented constraints on the mass distribution in galaxy clusters, measurements of the polarization of GWs, tests of General Relativity, and constraints on the Hubble parameter. Excited by these prospects, and intrigued by the presence of so-called heavy black holes in the early detections by LIGO-Virgo, we commenced a search for strongly-lensed GWs and possible electromagnetic counterparts in the latter stages of the second LIGO observing run (O2). Here, we summarise our calculation of the detection rate of strongly-lensed GWs, describe our review of binary black hole (BBH) detections from the first observing run (O1), outline our observing strategy in O2, summarise our follow-up observations of GW170814, and discuss the future prospects of detection.
Applying the distance sum rule to strong gravitational lensing (SGL) and type Ia supernova (SN Ia) observations provides an interesting cosmological-model-independent method to determine the cosmic curvature parameter $\Omega_k$. In this paper, with the newly compiled data sets comprising 161 galactic-scale SGL systems and 1048 SN Ia data points, we place constraints on $\Omega_k$ within the framework of three types of lens models extensively used in SGL studies. Moreover, to investigate the effect of different lens-mass samples on the results, we divide the SGL sample into three sub-samples based on the central velocity dispersion of the intervening galaxies. In the singular isothermal sphere (SIS) and extended power-law lens models, a flat universe is supported with an uncertainty of about 0.2, while a closed universe is preferred in the power-law lens model. We find that the choice of lens models and the classification of the SGL data can significantly influence the constraints on $\Omega_k$.
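For reference, the distance sum rule underlying this method can be written as follows (standard form; the notation here is illustrative). With dimensionless comoving distances $d(z) \equiv (H_0/c)\,D_\mathrm{C}(z)$ in any homogeneous, isotropic universe, the distances to the lens, to the source, and between them obey
$$
d_\mathrm{ls} = d_\mathrm{s}\sqrt{1+\Omega_k d_\mathrm{l}^{\,2}} - d_\mathrm{l}\sqrt{1+\Omega_k d_\mathrm{s}^{\,2}},
$$
where the ratio $d_\mathrm{ls}/d_\mathrm{s}$ follows from the lensing observables (e.g. via the SIS relation $\theta_E = 4\pi(\sigma/c)^2\,D_\mathrm{ls}/D_\mathrm{s}$), while $d_\mathrm{l}$ and $d_\mathrm{s}$ come from the SN Ia distances, so that $\Omega_k$ is constrained without assuming a specific dark-energy parametrisation.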
Recently, divergent conclusions about cosmic acceleration were obtained from type Ia supernovae (SNe Ia) under opposite assumptions about the intrinsic luminosity evolution. In this paper, we use strong gravitational lensing systems to probe the cosmic acceleration instead, since the theory of strong gravitational lensing is well established and the Einstein radius is determined by the cosmic geometry alone. We study two cosmological models, $\Lambda$CDM and power-law expansion, using 152 strong gravitational lensing systems combined with 30 Hubble parameter $H(z)$ measurements and 11 baryon acoustic oscillation (BAO) measurements. Bayesian evidence is introduced to make a one-on-one comparison between the cosmological models. Based on the Bayes factors $\ln B$ of flat $\Lambda$CDM versus the power-law and $R_{h}=ct$ models, which are both $\ln B>5$, we find that flat $\Lambda$CDM is strongly supported by the combination of the datasets. In other words, an accelerating cosmology with non-power-law expansion is preferred by our calculation.
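As a reminder of the model-comparison statistic used here (standard definitions, not specific to this paper), the Bayesian evidence of a model $M_i$ with parameters $\theta$ given data $\mathbf{d}$, and the Bayes factor between two models, read
$$
Z_i = \int \mathcal{L}(\mathbf{d}\,|\,\theta, M_i)\,\pi(\theta\,|\,M_i)\,\mathrm{d}\theta, \qquad \ln B_{ij} = \ln Z_i - \ln Z_j,
$$
where, on the commonly used Jeffreys-type scale, $\ln B_{ij} > 5$ is interpreted as strong evidence for model $M_i$ over $M_j$.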
We give a physical interpretation of the formalism-intrinsic degeneracies of the gravitational lensing formalism that we derived on a mathematical basis in part IV of this series. We find that all degeneracies occur due to the partition of the mass density along the line of sight. Usually, it is partitioned into a background (cosmic) density and a foreground deflecting object. The latter can be further partitioned into a main deflecting object and perturbers. Weak deflecting objects along the line of sight are also added, either to the deflecting object or as a correction of the angular diameter distances, perturbing the cosmological background density. A priori, this is an arbitrary choice of reference frame and partition; both can be redefined without changing the lensing observables, which are sensitive to the integrated deflecting mass density along the entire line of sight. Reformulating the time-delay equation such that this interpretation of the degeneracies becomes easily visible, we note that the source can be eliminated from this formulation, which simplifies reconstructions of the deflecting mass distribution or the inference of the Hubble constant, $H_0$. Subsequently, we list necessary conditions to break the formalism-intrinsic degeneracies and discuss ways to break them by model choices or by including non-lensing observables, such as velocity dispersions along the line of sight, with their respective advantages and disadvantages. We conclude with a systematic summary of all formalism-intrinsic degeneracies and the possibilities to break them.
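A well-known example of such a degeneracy (given here as a generic illustration, not as the specific derivation of part IV) is the mass-sheet transformation, which repartitions mass between the deflector and a uniform sheet along the line of sight by rescaling the convergence and the unobservable source position,
$$
\kappa(\theta) \rightarrow \lambda\,\kappa(\theta) + (1-\lambda), \qquad \beta \rightarrow \lambda\,\beta,
$$
which leaves all image positions and flux ratios unchanged while rescaling the time delays, and hence the inferred $H_0$, by the factor $\lambda$.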