
Time Delay Lens Modelling Challenge

Added by Xuheng Ding
Publication date: 2020
Field: Physics
Language: English





In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant $H_0$. However, published state-of-the-art analyses require of order 1 year of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time delay lens modelling challenge, we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated datasets. The results in Rung 1 and Rung 2 show that methods that use only the point source positions tend to have lower precision ($10\%$-$20\%$) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic datasets can recover $H_0$ within the target accuracy ($|A| < 2\%$) and precision ($<6\%$ per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the percent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss various approaches to mitigating similar subtle data-generation effects in future blind challenges.
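For orientation, the accuracy $A$ and the per-system precision quoted above can be read as ensemble statistics of the fractional $H_0$ error. A schematic version of such metrics (the challenge papers give the exact definitions; the symbols $H_0^{(i)}$ and $\delta_i$ are introduced here only for illustration) is

$$A = \frac{1}{N}\sum_{i=1}^{N}\frac{H_0^{(i)}-H_0^{\rm true}}{H_0^{\rm true}}, \qquad P = \frac{1}{N}\sum_{i=1}^{N}\frac{\delta_i}{H_0^{\rm true}},$$

where $H_0^{(i)}$ is the value inferred for lens $i$, $\delta_i$ its reported uncertainty, and $N$ the number of lenses in a rung; the targets quoted above then correspond to $|A| < 2\%$ and $\delta_i/H_0^{\rm true} < 6\%$ per system.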

Related research

Strong gravitational lenses with measured time delays are a powerful tool to measure cosmological parameters, especially the Hubble constant ($H_0$). Recent studies show that by combining just three multiply-imaged AGN systems, one can determine $H_0$ to 2.4% precision. Furthermore, the number of time-delay lens systems is growing rapidly, enabling the determination of $H_0$ to 1% precision in the near future. However, as the precision increases, it is important to ensure that systematic errors and biases remain subdominant, and challenges with simulated datasets are a key component of this effort. Following the experience of the past challenge on time-delay measurement, where it was shown that time delays can indeed be measured precisely and accurately at the sub-percent level, we now present the Time Delay Lens Modeling Challenge (TDLMC). The goal of this challenge is to assess the present capabilities of lens modeling codes and assumptions, and to test the accuracy of the inferred cosmological parameters given realistic mock datasets. We invite scientists to model a set of simulated HST observations of 50 mock lens systems. The systems are organized in rungs, with the complexity and realism increasing up the ladder. Participants are asked to infer $H_0$ for each rung, given the HST images, the time delays, and the stellar velocity dispersion of the deflector, for a fixed background cosmology. The TDLMC challenge starts with the mock data release on 2018 January 8. The deadline for blind submission differs by rung: 2018 September 8 for Rungs 0-1, 2019 April 8 for Rung 2, and 2019 September 8 for Rung 3. This first paper gives an overview of the challenge, including the design of the data, a set of metrics to quantify modeling performance, and the challenge details.
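For background (standard time-delay cosmography, not restated in the abstract): the delay between two lensed images $i$ and $j$ scales with the Fermat potential difference $\Delta\phi_{ij}$ predicted by the lens model,

$$\Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij}, \qquad D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}} \propto \frac{1}{H_0},$$

where $z_{\rm d}$ is the deflector redshift and $D_{\rm d}$, $D_{\rm s}$, $D_{\rm ds}$ are angular diameter distances to the deflector, to the source, and between them. A measured delay combined with a lens model constrained by the HST imaging (and by the stellar velocity dispersion) therefore yields the time-delay distance $D_{\Delta t}$ and hence $H_0$.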
Strongly lensed explosive transients such as supernovae, gamma-ray bursts, fast radio bursts, and gravitational waves are very promising tools for determining the Hubble constant ($H_0$) in the near future, in addition to strongly lensed quasars. In this work, we show that the transient nature of the point source provides an advantage over quasars: the lensed host galaxy can be observed before or after the transient's appearance. Therefore, the lens model can be derived from images free of contamination from bright point sources. We quantify this advantage by comparing the precision of lens models obtained from the same lenses with and without point sources. Based on Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) observations with the same sets of lensing parameters, we simulate realistic mock datasets of 48 quasar lensing systems (i.e., adding an AGN at the galaxy center) and 48 galaxy-galaxy lensing systems (assuming the transient source is not visible, but the time delay and image positions have been or will be measured). We then model the images and compare the inferences of the lens model parameters and $H_0$. We find that the precision of the lens models (in terms of the deflector mass slope) is better by a factor of 4.1 for the sample without lensed point sources, resulting in an increase in $H_0$ precision by a factor of 2.9. The opportunity to observe the lens systems without the transient point sources provides an additional advantage of time-delay cosmography with lensed transients over lensed quasars: it facilitates the determination of higher signal-to-noise stellar kinematics of the main deflector, and thus its mass density profile, which in turn plays a key role in breaking the mass-sheet degeneracy and constraining $H_0$.
Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. Achieving these objectives requires automated methods. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100,000 candidate objects as gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large datasets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVM), and convolutional neural networks (CNN). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. On test data, several methods are able to identify upwards of half the lenses, after applying thresholds on lens characteristics such as lensed image brightness, size, or contrast with the lens galaxy, without making a single false-positive identification. This is significantly better than direct visual inspection by humans. (abridged)
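As an illustration of the CNN approach mentioned above (a minimal sketch, not any participant's actual pipeline; the cutout size, single band, and hyperparameters are assumptions), a binary lens/non-lens classifier could be set up as follows:

# Minimal CNN lens/non-lens classifier (illustrative sketch only).
# Assumes single-band cutouts of shape (1, 64, 64), labels 0 = non-lens, 1 = lens.
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit; sigmoid gives P(lens)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LensCNN()
criterion = nn.BCEWithLogitsLoss()   # binary lens / non-lens objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch standing in for real cutouts.
images = torch.rand(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice, the classification threshold on the sigmoid output would be tuned to keep the false-positive rate low, since lenses are rare.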
Tao Yang, Simon Birrer, Bin Hu (2020)
Strong gravitational lensing has been a powerful probe of cosmological models and gravity. To date, constraints in either domain have been obtained separately. We propose a new methodology through which the cosmological model, specifically the Hubble constant, and the post-Newtonian parameter can be constrained simultaneously. Using time-delay cosmography from strong lensing combined with the stellar kinematics of the deflector, we demonstrate that the Hubble constant and the post-Newtonian parameter are incorporated in two distance ratios, which reflect the lensing mass and the dynamical mass, respectively. Through a reanalysis of the four publicly released lens distance posteriors from the H0LiCOW collaboration, we obtain simultaneous constraints on the Hubble constant and the post-Newtonian parameter. Our results suggest no deviation from General Relativity, $\gamma_{\texttt{PPN}}=0.87^{+0.19}_{-0.17}$, with a Hubble constant that favors the local-Universe value, $H_0=73.65^{+1.95}_{-2.26}$ km s$^{-1}$ Mpc$^{-1}$. Finally, we forecast the robustness of gravity tests with time-delay strong lensing, based on the constraints expected in the next few years. We find that the joint constraint from 40 lenses can reach the order of $7.7\%$ for the post-Newtonian parameter and $1.4\%$ for the Hubble constant.
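Schematically (standard post-Newtonian lensing background, not spelled out in the abstract): light deflection and time delays respond to the potential combination $\frac{1+\gamma_{\texttt{PPN}}}{2}\,\Phi$, whereas stellar dynamics respond to the Newtonian potential $\Phi$ alone, so

$$M_{\rm lensing} \simeq \frac{1+\gamma_{\texttt{PPN}}}{2}\,M_{\rm dynamical}, \qquad D_{\Delta t} \propto \frac{1}{H_0},$$

and combining the measured time delays (through the time-delay distance $D_{\Delta t}$) with the deflector kinematics therefore allows $H_0$ and $\gamma_{\texttt{PPN}}$ to be separated.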
Strongly lensed quasars can provide measurements of the Hubble constant ($H_{0}$) independent of any other method. One of the key ingredients is exquisite high-resolution imaging data, such as Hubble Space Telescope (HST) imaging and adaptive-optics (AO) imaging from ground-based telescopes, which provide strong constraints on the mass distribution of the lensing galaxy. In this work, we expand on the previous analysis of three time-delay lenses with AO imaging (RXJ1131-1231, HE0435-1223, and PG1115+080) and perform a joint analysis of J0924+0219, using AO imaging from the Keck Telescope, obtained as part of the SHARP (Strong lensing at High Angular Resolution Program) AO effort, together with HST imaging, to constrain the mass distribution of the lensing galaxy. Under the assumption of a flat $\Lambda$CDM model with fixed $\Omega_{\rm m}=0.3$, we show that by marginalizing over two different kinds of mass models (power-law and composite models) and their transformed mass profiles via a mass-sheet transformation, we obtain $\Delta t_{\rm BA}\,h\,\hat{\sigma}_{v}^{-2}=6.89^{+0.8}_{-0.7}$ days, $\Delta t_{\rm CA}\,h\,\hat{\sigma}_{v}^{-2}=10.7^{+1.6}_{-1.2}$ days, and $\Delta t_{\rm DA}\,h\,\hat{\sigma}_{v}^{-2}=7.70^{+1.0}_{-0.9}$ days, where $h=H_{0}/100~{\rm km\,s^{-1}\,Mpc^{-1}}$ is the dimensionless Hubble constant and $\hat{\sigma}_{v}=\sigma^{\rm ob}_{v}/(280~{\rm km\,s^{-1}})$ is the scaled dimensionless velocity dispersion. Future measurements of the time delays with 10% uncertainty and of the velocity dispersion with 5% uncertainty would yield an $H_0$ constraint of $\sim 15\%$ precision.
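To make the quoted scaled quantities concrete, here is a minimal arithmetic sketch in Python of how $h$ would follow once the delays and the velocity dispersion are measured; the "observed" values below are placeholders, not measurements from the paper:

# Illustrative arithmetic only: converting the scaled quantity
# Delta_t * h * sigma_hat_v**(-2) quoted above into h = H0 / (100 km/s/Mpc),
# once a time delay and a velocity dispersion are measured.
# The observed values below are PLACEHOLDERS, not measurements from the paper.

scaled_delay_BA = 6.89      # days; Delta_t_BA * h * sigma_hat_v**(-2) from the joint model
dt_BA_observed = 10.0       # days; hypothetical future time-delay measurement
sigma_v_observed = 280.0    # km/s; hypothetical measured velocity dispersion

sigma_hat_v = sigma_v_observed / 280.0            # scaled dimensionless dispersion
h = scaled_delay_BA * sigma_hat_v**2 / dt_BA_observed
H0 = 100.0 * h                                    # km/s/Mpc

print(f"h = {h:.3f}  ->  H0 = {H0:.1f} km/s/Mpc")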