
The Strong Gravitational Lens Finding Challenge

Added by R. Benton Metcalf
Publication date: 2018
Field: Physics
Language: English





Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. Achieving these objectives demands automated methods. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100,000 candidate objects as gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVMs) and convolutional neural networks (CNNs). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. On test data, several methods are able to identify upwards of half the lenses without a single false-positive identification, after applying thresholds on lens characteristics such as lensed-image brightness, size, or contrast with the lens galaxy. This is significantly better than humans were able to achieve by direct visual inspection. (abridged)
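The figure of merit implied here, the fraction of true lenses a method recovers before its first false positive, is easy to compute from a ranked list of classifier scores. Below is a minimal sketch in Python; the scores and labels are hypothetical stand-ins for a participant's output on a labelled test set, not data from the challenge itself.

import numpy as np

def tpr_at_zero_fpr(scores, labels):
    # Fraction of true lenses scored above the best-scoring non-lens,
    # i.e. the completeness achievable with zero false positives.
    scores, labels = np.asarray(scores), np.asarray(labels)
    threshold = scores[labels == 0].max()
    recovered = np.sum((labels == 1) & (scores > threshold))
    return recovered / np.sum(labels == 1)

# Hypothetical, well-separated score distributions:
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.9, 0.05, 500),    # lenses
                         rng.normal(0.2, 0.15, 9500)])  # non-lenses
labels = np.concatenate([np.ones(500), np.zeros(9500)])
print(tpr_at_zero_fpr(scores, labels))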



Related research

102 - X. Huang, M. Domingo, A. Pilon (2019)
We perform a semi-automated search for strong gravitational lensing systems in the 9,000 deg$^2$ Dark Energy Camera Legacy Survey (DECaLS), part of the DESI Legacy Imaging Surveys (Dey et al.). The combination of the depth and breadth of these surveys is unparalleled at this time, making them particularly suitable for discovering new strong gravitational lensing systems. We adopt the deep residual neural network architecture (He et al.) developed by Lanusse et al. for the purpose of finding strong lenses in photometric surveys. We compile a training set that consists of known lensing systems in the Legacy Surveys and DES as well as non-lenses in the footprint of DECaLS. In this paper we show the results of applying our trained neural network to the cutout images centered on galaxies typed as ellipticals (Lang et al.) in DECaLS. The images that receive the highest scores (probabilities) are visually inspected and ranked. Here we present 335 candidate strong lensing systems, identified for the first time.
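The score-and-rank step described above can be sketched in a few lines of PyTorch. This is not the network of Lanusse et al.; a generic torchvision ResNet-18 with a single-logit head stands in for it, and the input tensor is a hypothetical batch of random cutouts.

import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stand-in classifier: generic ResNet-18 with a one-logit head
# (the actual search used the architecture of Lanusse et al.).
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

# Hypothetical batch of 3-band cutouts centred on ellipticals.
cutouts = torch.randn(32, 3, 101, 101)

with torch.no_grad():
    scores = torch.sigmoid(model(cutouts)).squeeze(1)  # lens probabilities

# The highest-scoring cutouts go to visual inspection first.
ranked = torch.argsort(scores, descending=True)
print(scores[ranked[:5]])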
160 - X. Ding, T. Treu, S. Birrer (2020)
In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant $H_0$. However, published state-of-the-art analyses require of order 1 year of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time delay lens modelling challenge we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated datasets. The results in Rung 1 and Rung 2 show that methods that use only the point source positions tend to have lower precision ($10-20\%$) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic datasets can recover $H_0$ within the target accuracy ($|A| < 2\%$) and precision ($< 6\%$ per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the percent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix, and use them to discuss various approaches to mitigating similar subtle data-generation effects in future blind challenges.
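For context, the link between a measured delay and $H_0$ runs through the time-delay distance; the standard relation (not specific to this challenge) is

$$ \Delta t_{ij} \;=\; \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij}, \qquad D_{\Delta t} \;\equiv\; (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \;\propto\; \frac{1}{H_0}, $$

where $\Delta\phi_{ij}$ is the Fermat-potential difference between images $i$ and $j$ and $z_{\rm d}$ is the deflector redshift. Since the angular-diameter distances all scale as $1/H_0$, a measured delay plus a lens model for $\Delta\phi_{ij}$ yields $H_0$ directly.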
Context. Strong gravitationally lensed quasars are among the most interesting and useful observable extragalactic phenomena. Because their study constitutes a unique tool in various fields of astronomy, they are highly sought, not without difficulty. Indeed, even in this era of all-sky surveys, their recognition remains a great challenge, with barely a few hundred currently known systems. Aims. In this work we aim to detect new strongly lensed quasar candidates in the recently published Gaia Data Release 2 (DR2), which is the highest spatial resolution astrometric and photometric all-sky survey, attaining effective resolutions from 0.4" to 2.2". Methods. We cross-matched a merged list of quasars and candidates with Gaia DR2 and found 1,839,143 counterparts within 0.5". We then searched for matches with more than two Gaia DR2 counterparts within 6". We further narrowed the resulting list using astrometry and photometry compatibility criteria between the Gaia DR2 counterparts. A supervised machine learning method, Extremely Randomized Trees, is finally adopted to assign to each remaining system a probability of being lensed. Results. We report the discovery of three quadruply-imaged quasar candidates that are fully detected in Gaia DR2. These are the most promising new quasar lens candidates from Gaia DR2, and a simple singular isothermal ellipsoid lens model is able to reproduce their image positions to within $\sim$1 mas. This letter demonstrates the gravitational lens discovery potential of Gaia.
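Extremely Randomized Trees is available off the shelf in scikit-learn; a minimal sketch of the probability-assignment step follows. The feature matrix and labels here are random placeholders, whereas the real classifier was trained on astrometric and photometric properties of the Gaia counterparts.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(42)

# Placeholder features per system (e.g. separations, magnitude and
# colour differences between Gaia counterparts) and labels
# (1 = lensed-quasar-like configuration, 0 = contaminant).
X_train = rng.normal(size=(2000, 6))
y_train = rng.integers(0, 2, size=2000)

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Probability that each remaining candidate system is lensed.
X_candidates = rng.normal(size=(10, 6))
p_lens = clf.predict_proba(X_candidates)[:, 1]
print(p_lens)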
We investigate how strong gravitational lensing can test contemporary models of massive elliptical (ME) galaxy formation, by combining a traditional decomposition of their visible stellar distribution with a lensing analysis of their mass distribution. As a proof of concept, we study a sample of three ME lenses, observing that all are composed of two distinct baryonic structures, a 'red' central bulge surrounded by an extended envelope of stellar material. Whilst these two components look photometrically similar, their distinct lensing effects permit a clean decomposition of their mass structure. This allows us to infer two key pieces of information about each lens galaxy: (i) the stellar mass distribution (without invoking stellar population models) and (ii) the inner dark matter halo mass. We argue that these two measurements are crucial to testing models of ME formation, as the stellar mass profile provides a diagnostic of baryonic accretion and feedback whilst the dark matter mass places each galaxy in the context of $\Lambda$CDM large scale structure formation. We also detect large rotational offsets between the two stellar components and a lopsidedness in their outer mass distributions, which hold further information on the evolution of each ME. Finally, we discuss how this approach can be extended to galaxies of all Hubble types and what implications our results have for studies of strong gravitational lensing.
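The photometric side of such a bulge-plus-envelope decomposition is commonly modelled as a sum of two Sersic components; a minimal sketch follows, with wholly hypothetical parameter values standing in for a fitted galaxy.

import numpy as np

def sersic(r, I_e, r_e, n):
    # Sersic surface-brightness profile; b_n uses the common
    # approximation b_n ~ 2n - 1/3 (adequate for n >~ 0.5).
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 30.0, 300)                 # radius in kpc (hypothetical)
bulge = sersic(r, I_e=100.0, r_e=2.0, n=4.0)    # compact 'red' central bulge
envelope = sersic(r, I_e=5.0, r_e=12.0, n=1.0)  # extended stellar envelope
total = bulge + envelope

# Radius where the envelope begins to dominate the light:
print("crossover radius [kpc]:", r[np.argmin(np.abs(bulge - envelope))])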
Einstein Telescope (ET) is conceived to be a third-generation gravitational-wave observatory. Its amplitude sensitivity would be a factor of ten better than advanced LIGO and Virgo, and it could also extend the low-frequency sensitivity down to 1-3 Hz, compared to the 10-20 Hz of advanced detectors. Such an observatory will have the potential to observe a variety of different GW sources, including compact binary systems at cosmological distances. ET's expected reach for binary neutron star (BNS) coalescences is out to redshift $z \simeq 2$, and the rate of detectable BNS coalescences could be as high as one every few tens or hundreds of seconds, each lasting up to several days. With such a signal-rich environment, a key question in data analysis is whether overlapping signals can be discriminated. In this paper we simulate the GW signals from a cosmological population of BNS and ask the following questions: Does this population create a confusion background that limits ET's ability to detect foreground sources? How efficient are current algorithms in discriminating overlapping BNS signals? Is it possible to discern the presence of a population of signals in the data by cross-correlating data from different detectors in the ET observatory? We find that algorithms currently used to analyze LIGO and Virgo data are already powerful enough to detect the sources expected in ET, but new algorithms are required to fully exploit ET data.
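The cross-correlation test in the last question has a simple toy analogue: a component common to two detectors survives in their cross-correlation even when it is far below each detector's independent noise. Everything below is a hypothetical stand-in (white noise and a single shared Gaussian stream), not an ET analysis.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
common = rng.normal(size=n)               # shared 'signal' stream
d1 = common + 5.0 * rng.normal(size=n)    # detector 1: signal + noise
d2 = common + 5.0 * rng.normal(size=n)    # detector 2: independent noise
noise_only = 5.0 * rng.normal(size=n)

def cross_corr(a, b):
    # Normalized cross-correlation at zero lag.
    return np.mean(a * b) / np.sqrt(np.var(a) * np.var(b))

print(cross_corr(d1, d2))          # ~1/26: shared component detected
print(cross_corr(d1, noise_only))  # ~0: no shared component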
